Security Strategy – phoenixNAP Blog
https://devtest.phoenixnap.com/blog

Chaos Engineering: How it Works, Principles, Benefits, & Tools
https://devtest.phoenixnap.com/blog/chaos-engineering – Thu, 15 Oct 2020

Finding faults in a distributed system goes beyond the capability of standard application testing. Companies need smarter ways to test microservices continuously. One strategy that is gaining popularity is chaos engineering.

Using this proactive testing practice, an organization can look for and fix failures before they cause a costly outage. Read on to learn how chaos engineering improves the reliability of large-scale distributed systems.

What Is Chaos Engineering?

Chaos engineering is a strategy for discovering vulnerabilities in a distributed system. This practice requires injecting failures and errors into software during production. Once you intentionally cause a bug, monitor the effects to see how the system responds to stress.

By “breaking things” on purpose, you discover new issues that could impact components and end-users. Address the identified weaknesses before they cause data loss or service impact.

Chaos engineering allows an admin to:

  • Identify weak points in a system.
  • See in real time how a system responds to pressure.
  • Prepare the team for real failures.
  • Identify bugs that are yet to cause system-wide issues.

Netflix was the first organization to introduce chaos engineering. In 2010, the company released a tool called Chaos Monkey. With this tool, admins were able to cause failures in random places at random intervals. Such a testing approach made Netflix’s distributed cloud-based system much more resilient to faults.
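
The Chaos Monkey idea — terminate random instances at random intervals — can be sketched in a few lines. The instance names and selection logic below are illustrative placeholders, not Netflix's actual implementation.

```python
import random

def chaos_monkey(instances, kill_probability=0.2, rng=random):
    """Randomly select instances to terminate, Chaos Monkey-style.

    Returns the list of instances chosen for termination. In a real tool,
    each selection would trigger a cloud API call to stop the instance.
    """
    terminated = []
    for instance in instances:
        if rng.random() < kill_probability:
            terminated.append(instance)  # here: record it; in practice: kill it
    return terminated

# Example with a hypothetical pool of service instances and a fixed seed:
pool = ["api-1", "api-2", "cache-1", "worker-1", "worker-2"]
victims = chaos_monkey(pool, kill_probability=0.4, rng=random.Random(42))
print(victims)
```

Running the selection on a schedule (e.g., every few minutes during business hours) turns this from a one-off test into the continuous failure injection that made Netflix's system resilient.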

Who Uses Chaos Engineering?

Many tech companies practice chaos engineering to improve the resilience of distributed systems. Netflix continues to pioneer the practice, but companies like Facebook, Google, Microsoft, and Amazon have similar testing models.

More traditional organizations have caught on to chaos testing too. For example, the National Australia Bank used chaos testing to randomly shut down servers and build system resiliency.

The Need for Chaos Engineering

Peter Deutsch and his colleagues from Sun Microsystems listed eight false assumptions programmers commonly make about distributed systems:

  • The network is reliable.
  • There is zero latency.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology never changes.
  • There is one admin.
  • Transport cost is zero.
  • The network is homogeneous.

These fallacies highlight the unpredictable dynamics of a distributed application designed in a microservices architecture. This kind of system has many moving parts, and admins have little control over the cloud infrastructure.

Constant changes to the setup cause unexpected system behavior. It is impossible to predict these behaviors, but we can reproduce and test them with chaos engineering.

Difference Between Chaos Engineering and Failure Testing

A failure test examines a single condition and determines whether a property is true or false. Such a test breaks a system in a preconceived way. The results are usually binary, and they do not uncover new information about the application.

The goal of a chaos test is to generate new knowledge about the system. Broader scope and unpredictable outcomes enable you to learn about the system’s behaviors, properties, and performance. You open new avenues for exploration and see how you can improve the system.

While different, chaos and failure testing do have some overlap in concerns and tools used. You get the best results when you use both disciplines to test an application.


How Chaos Engineering Works

All testing in chaos engineering happens through so-called chaos experiments. Each experiment starts by injecting a specific fault into a system, such as latency, CPU failure, or a network black hole. Admins then observe and compare what they think will occur to what actually happens.

An experiment typically involves two groups of engineers. The first group controls the failure injection, and the second group deals with the effects.

Here is a step-by-step flow of a chaos experiment:

Step 1: Creating a Hypothesis

Engineers analyze the system and choose what failure to cause. The core step of chaos engineering is to predict how the system will behave once it encounters a particular bug.

Engineers also need to determine critical metric thresholds before starting a test. Metrics typically come in two sets:

  • Key metrics: These are the primary metrics of the experiment. For example, you can measure the impact on latency, requests per second, or system resources.
  • Customer metrics: These are precautionary metrics that tell you if the test went too far. Examples of customer metrics are orders per minute, or stream starts per second. If a test begins impacting customer metrics, that is a sign for admins to stop experimenting.

In some tests, the two metrics can overlap.
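
The two metric sets can be modeled as thresholds checked during the experiment: key metrics quantify impact, while customer metrics act as an abort trigger. The metric names and limits below are invented for illustration.

```python
def should_abort(customer_metrics, guardrails):
    """Return True if any customer (precautionary) metric breached its guardrail.

    customer_metrics: current observed values, e.g. {"orders_per_minute": 950}
    guardrails: minimum acceptable values, e.g. {"orders_per_minute": 900}
    A missing metric is treated as a breach, erring on the side of stopping.
    """
    return any(
        customer_metrics.get(name, 0) < minimum
        for name, minimum in guardrails.items()
    )

# Hypothetical readings taken while a chaos experiment is running:
guardrails = {"orders_per_minute": 900, "stream_starts_per_sec": 50}
print(should_abort({"orders_per_minute": 950, "stream_starts_per_sec": 60}, guardrails))  # healthy
print(should_abort({"orders_per_minute": 850, "stream_starts_per_sec": 60}, guardrails))  # breach: abort
```

In practice, a check like this would run continuously against the monitoring system, and a True result would halt the fault injection automatically.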

Step 2: Fault Injection

Engineers add a specific failure to the system. Since there is no way to be sure how the application will behave, there is always a backup plan.

Most chaos engineering tools have a reverse option. That way, if something goes wrong, you can safely abort the test and return to a steady-state of the application.

Step 3: Measuring the Impact

Engineers monitor the system while the bug causes significant issues. Key metrics are the primary concern but always monitor the entire system.

If the test starts a simulated outage, the team looks for the best way to fix it.

Step 4: Verify (or Disprove) Your Hypothesis

A successful chaos test has one of two outcomes. You either verify the resilience of the system, or you find a problem you need to fix. Both are good outcomes.
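
The four steps above can be sketched as a minimal experiment loop: inject a fault, measure, verify the hypothesis, and always roll back to the steady state. The callables and the latency example are hypothetical; the structure is the point.

```python
def run_chaos_experiment(inject, rollback, measure, hypothesis):
    """Minimal chaos-experiment flow.

    All four arguments are callables supplied by the experimenter. The
    rollback always runs, modeling the "reverse option" most chaos tools
    provide to return the system to a steady state.
    """
    inject()
    try:
        observed = measure()
        # Either outcome is useful: resilience confirmed, or a weakness found.
        return {"observed": observed, "hypothesis_holds": hypothesis(observed)}
    finally:
        rollback()

# Toy example: hypothesize that latency stays under 200 ms despite the fault.
state = {"latency_ms": 120}
result = run_chaos_experiment(
    inject=lambda: state.update(latency_ms=180),   # simulated added latency
    rollback=lambda: state.update(latency_ms=120),
    measure=lambda: state["latency_ms"],
    hypothesis=lambda latency: latency < 200,
)
print(result)
```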


Principles of Chaos Engineering

While the name may suggest otherwise, there is nothing random in chaos engineering.

This testing method follows a set of strict principles:

Know the Normal State of Your System

Define the steady-state of your system. The usual behavior of a system is a reference point for any chaos experiment. By understanding the system when it is healthy, you will better understand the impact of bugs and failures.

Inject Realistic Bugs and Failures

All experiments should reflect realistic and likely scenarios. When you inject a real-life failure, you get a good sense of what processes and technologies need an upgrade.

Test in Production

You can only see how outages affect the system if you apply the test to a production environment.

If your team has little to no experience with chaos testing, let them start experimenting in a development environment. Test the production environment once ready.

Control the Blast Radius

Always minimize the blast radius of a chaos test. As these tests happen in a production environment, there is a chance that the test could affect end-users.

Another standard precaution is to have a team ready for actual incident response, just in case.

Continuous Chaos

You can automate chaos experiments to the same level as your CI/CD pipeline. Constant chaos allows your team to improve both current and future systems continuously.


Benefits of Chaos Engineering

The benefits of chaos engineering span across several business fronts:

Business Benefits

Chaos engineering helps stop large losses in revenue by preventing lengthy outages. The practice also allows companies to scale quickly without losing the reliability of their services.

Technical Benefits

Insights from chaos experiments reduce incidents, but that is not where technical benefits end. The team gets an increased understanding of system modes and dependencies, allowing them to build a more robust system design.

A chaos test is also excellent on-call training for the engineering team.

Customer Benefits

Fewer outages mean less disruption for end-users. Improved service availability and durability are the two chief customer benefits of chaos engineering.

Chaos Engineering Tools

These are the most common chaos engineering tools:

  • Chaos Monkey: This is the original tool created at Netflix. While it came out in 2010, Chaos Monkey still gets regular updates and is the go-to chaos testing tool.
  • Gremlin: Gremlin helps clients set up and control chaos testing. The free version of the tool offers basic tests, such as turning off machines and simulating high CPU load.
  • Chaos Toolkit: This open-source initiative makes tests easier with an open API and a standard JSON format.
  • Pumba: Pumba is a chaos testing and network emulation tool for Docker.
  • Litmus: A chaos engineering tool for stateful workloads on Kubernetes.

To keep up with new tools, bookmark the diagram created by the Chaos Engineering Slack Community. Besides the tools, the chart also keeps track of known engineers working with chaos tests.

Chaos Engineering Examples

There are no limits to chaos experiments. The type of tests you run depends on the architecture of your distributed system and business goals.

Here is a list of the most common chaos tests:

  • Simulating the failure of a micro-component.
  • Turning a virtual machine off to see how a dependency reacts.
  • Simulating a high CPU load.
  • Disconnecting the system from the data center.
  • Injecting latency between services.
  • Randomly causing functions to throw exceptions (also known as function-based chaos).
  • Adding instructions to a program and allowing fault injection (also known as code insertion).
  • Disrupting syncs between system clocks.
  • Emulating I/O errors.
  • Causing sudden spikes in traffic.
  • Injecting byzantine failures.

Chaos Engineering and DevOps

Chaos engineering is a common practice within the DevOps culture. Such tests allow DevOps teams to thoroughly analyze applications while keeping up with the tempo of agile development.

DevOps teams commonly use chaos testing to define a functional baseline and tolerances for infrastructure. Tests also help create better policies and processes by clarifying both steady-state and chaotic outputs.

Some companies prefer to integrate chaos engineering into their software development life cycle. Integrated chaos allows companies to ensure the reliability of every new feature.

A Must for any Large-Scale Distributed System

Continuous examination of software is vital for both application security and functionality. By proactively examining a system, you can reduce the operational burden and increase system availability and resilience.

How Kerberos Authentication Works
https://devtest.phoenixnap.com/blog/kerberos-authentication – Thu, 01 Oct 2020

In traditional computer systems, users prove their identities by typing in passwords. While easy to set up, this authentication method has a severe flaw. If hackers steal or crack the password, it is easy to take on the user’s identity. Intruders log in as the real user, and the system is wide open to an attack.

Kerberos authentication protects user credentials from hackers. This protocol keeps passwords away from insecure networks at all times, even during user verification.

Read on to learn what Kerberos authentication is and how it protects both end-users and systems.

What is Kerberos?

Kerberos is an authentication protocol for client/server applications. This protocol relies on a combination of secret-key encryption and access tickets to safely verify user identities.

The main reasons for adopting Kerberos are:

  • Plain text passwords are never sent across an insecure network.
  • Every login has three stages of authentication.
  • Encryption protects all access keys and tickets.
  • Authentication is mutual, so both users and providers are safe from scams.

MIT developed the first instances of Kerberos in the late ’80s. The protocol was named after Cerberus, a creature from Greek mythology. Cerberus was a ferocious three-headed dog who guarded Hades.

A refined version of Kerberos came out of Microsoft as part of Windows 2000. Since then, Kerberos has been Windows' default authentication protocol. Implementations of Kerberos also exist for Apple macOS, FreeBSD, UNIX, and Linux. The Kerberos Consortium treats the protocol as an open-source project.


Three Main Components of Kerberos

Every Kerberos verification involves a Key Distribution Center (KDC). The KDC acts as a trusted third-party authentication service, and it operates from the Kerberos server. The KDC consists of three main components:

  • An authentication server (AS): The AS performs initial authentication when a user wants to access a service.
  • A ticket granting server (TGS): This server connects a user with the service server (SS).
  • A Kerberos database: This database stores IDs and passwords of verified users.

All Kerberos authentications take place in Kerberos realms. A realm is a group of systems over which a KDC has the authority to verify users and services.

How Kerberos Authentication Works

With Kerberos, users never authenticate themselves to the service directly. Instead, they go through a series of steps performed by different parts of the Key Distribution Center.

The AS Verifies Users with Decryption

The Kerberos protocol starts with the user requesting access to a service through the Authentication Server. This request is partially encrypted with a secret key derived from the user's password. The password is a shared secret between the user and the AS.

The AS can only decrypt the request if the user encrypted the message with the right password. If the password is wrong, the AS cannot interpret the request. In that case, AS does not verify the user, and the authentication process fails.

Once it decrypts the request, the AS creates a ticket-granting ticket (TGT) and encrypts it with the TGS’s secret key. This key is a shared secret between the AS and the Ticket Granting Server.

A TGT contains a client/TGS session key, an expiration date, and the client’s IP address. The IP address protects from man-in-the-middle attacks. Once it issues a TGT, the AS sends it to the user.

The TGS Connects Users to Service Servers

The user sends the TGT to the TGS. If the ticket is valid and the user has permission to access the service, the TGS issues a service ticket.

A service ticket contains the client ID, client network address, validity period, and client/server session key. The service ticket is encrypted with a secret key shared with the service server.

The user then sends the ticket to the service server along with the service request. The SS decrypts the key and grants access to the requested resources.

Verification Without Plain Text Passwords

During the entire verification process, a plain text password never reaches the KDC or the service server. Encryption protects all three sets of temporary secret keys.

Kerberos works both with symmetric and asymmetric (public-key) cryptography. The protocol can also handle multi-factor authentication (MFA).

Remote work may expose vulnerabilities to potential attacks. Learn how to secure remote access to computer systems.

Kerberos Authentication Steps

Kerberos Authentication is a multi-step process. Let us say a user wishes to access a network file server to read a document. Below are the steps required to authenticate through Kerberos:

Step 1: The User Sends a Request to the AS

The user issues an encrypted request to the Authentication Server. When the AS gets the request, it searches for the password in the Kerberos database based on the user ID.

If the user typed in the correct password, the AS decrypts the request.

Step 2: The AS Issues a TGT

After verifying the user, the AS sends back a Ticket Granting Ticket.

Step 3: The User Sends a Request to the TGS

The user sends the TGT to the Ticket Granting Server. Along with the TGT, the user also explains the reason for accessing the file server.

The TGS decrypts the ticket with the secret key shared with the AS.

Step 4: TGS Issues a Service Ticket

If the TGT is valid, the TGS issues a service ticket to the user.

Step 5: The User Contacts the File Server with the Service Ticket

The client sends the service ticket to the file server. The file server decrypts the ticket with the secret key shared with TGS.

Step 6: The User Opens the Document

If the secret keys match, the file server allows the user to open the document. The service ticket determines how long the user has access to the record.

Once access expires, the user needs to go through the entire Kerberos authentication protocol again.
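
The chain of shared secrets behind these steps can be modeled with stdlib primitives. This is only an analogy: real Kerberos encrypts tickets with derived keys, whereas the sketch below uses an HMAC tag to capture the one property the steps rely on — a ticket is useless to anyone who does not hold the shared secret. All key values and names are invented.

```python
import hashlib
import hmac
import json
import time

def seal(key, payload):
    """Toy 'ticket': a payload plus an HMAC tag under a shared secret."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def open_sealed(key, ticket):
    """Validate a ticket; raise if it was not sealed with this shared key."""
    expected = hmac.new(key, ticket["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["tag"]):
        raise ValueError("ticket not issued under this shared secret")
    return json.loads(ticket["body"])

# Hypothetical shared secrets (real Kerberos derives these cryptographically):
as_tgs_key = b"shared: AS <-> TGS"
tgs_ss_key = b"shared: TGS <-> file server"

# Step 2: the AS issues a TGT under the AS/TGS shared key.
tgt = seal(as_tgs_key, {"client": "alice", "expires": time.time() + 600})
# Steps 3-4: the TGS validates the TGT and issues a service ticket.
client = open_sealed(as_tgs_key, tgt)["client"]
service_ticket = seal(tgs_ss_key, {"client": client, "service": "fileserver"})
# Step 5: the file server validates the service ticket with its own shared key.
print(open_sealed(tgs_ss_key, service_ticket)["client"])
```

Note how the user's password never travels anywhere in this flow: each hop is validated purely through a secret the two endpoints already share.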


Benefits of Kerberos Authentication

These are the main benefits of adopting Kerberos:

Improved Security

Cryptography, multiple secret keys, and third-party authorization make Kerberos one of the industry’s most secure verification protocols.

User passwords are never sent across the network. Secret keys pass through the system in encrypted form. Even if an attacker logs the traffic, it is hard to gather enough data to impersonate a user or a service.

Access Control

Kerberos is a crucial component of today’s enterprises. The protocol allows excellent access control.

With Kerberos, the company gets a single point for enforcing security policies and keeping track of logins.

Transparency and Auditability

Kerberos makes it easy to see who requested what and at what time. Transparent and precise logs are vital for security audits and investigations.

Mutual Authentication

Kerberos enables users and service systems to authenticate each other. At each step of the authentication process, both the user and the server systems know that they are interacting with authentic counterparts.

Limited Ticket Lifetime

All tickets in the Kerberos model have timestamps and lifetime data. Admins control the duration of the users’ authentication.

Short ticket lifetimes are great for preventing brute-force and replay attacks.

Scalability

Several technology giants have adopted Kerberos authentication, like Apple, Microsoft, and Sun. The adoption among enterprises speaks volumes about Kerberos’ ability to keep up with the demands of large companies.

Reusable Authentications

Kerberos authentications are reusable and durable. The user only verifies to the Kerberos system once. For the lifetime of the ticket, the user can authenticate to network services without re-entering personal data.

Single sign-on is the most direct end-user benefit of Kerberos.

Quick Fixes and Updates

Over the years, top programmers and security experts have tried to break Kerberos. This scrutiny ensures that any new weakness in the protocol is quickly analyzed and corrected.

Can Kerberos Be Hacked?

No security model is completely invulnerable, and Kerberos is no exception. Because Kerberos is so widely used, hackers have had ample opportunities to find ways around it.

The biggest threats to a Kerberos system are forged tickets, repeated attempts to guess a password, and encryption-downgrading malware. A combination of all three tactics is the usual recipe for successful breaches.

The most successful methods of hacking Kerberos include:

  • Pass-the-ticket: A cyber attacker forges a session key and presents the fake credentials to reach the resources. Hackers usually forge a golden ticket (a ticket that grants domain admin access) or a silver ticket (a ticket that grants access to a service).
  • Credential stuffing and brute-force attacks: Automated, continued attempts to guess a user password. Most brute-force attacks go after the initial ticketing and the ticket-granting service.
  • Skeleton key malware: This malware bypasses Kerberos and downgrades key encryption. The attacker must have admin access to launch the cyberattack.
  • DCShadow attack: This hack occurs when attackers gain enough access within the network to set up their own DC for further infiltration.

Despite these dangers, Kerberos remains the best security protocol available today. If users follow strong password policies, the likelihood of a successful hack is minimal.

Weaknesses of Kerberos Authentication

While Kerberos effectively deals with security threats, the protocol does pose several challenges:

The Kerberos Server Is a Single Point of Failure

If the Kerberos server goes down, users cannot log in. Fallback authentication mechanisms and secondary servers are typical solutions to this problem.

Strict Time Requirements

Date/time configurations of the involved hosts must always be synchronized within predefined limits. Otherwise, authentications fail because tickets have a limited validity period.
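
The time requirement boils down to a skew check like the one below: a timestamp outside the allowed window is rejected. The 300-second default mirrors the commonly used five-minute limit; the function itself is illustrative.

```python
def within_skew(ticket_time, server_time, max_skew_seconds=300):
    """Accept a ticket only if the issuing host's clock is within the
    allowed skew of the server's clock (commonly a five-minute window).
    Times are in seconds (e.g., Unix timestamps)."""
    return abs(server_time - ticket_time) <= max_skew_seconds

print(within_skew(ticket_time=1000, server_time=1200))  # 200 s drift: accepted
print(within_skew(ticket_time=1000, server_time=1400))  # 400 s drift: rejected
```

This is why running NTP (or another time-sync service) on every host in a realm is effectively a prerequisite for Kerberos.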

Every Network Service Needs its Kerberos Keys

Each network service that requires a different hostname needs its own set of Kerberos keys. Issues with virtual hosting and clusters are not uncommon.

All Nodes Must Be Compatible with Third-Party Authentication

Both user machines and service servers must be designed with Kerberos authentication in mind.

Some legacy systems and local packages are not compatible with third-party authentication mechanisms.

An Old, But By No Means an Outdated Protocol

Cybercrime is an unfortunate byproduct of today's interconnected world. Experts predict that the average cost of a data breach for large enterprises will be more than $150 million in 2020. Forbes predicts that an increasing number of criminals will soon be using Artificial Intelligence (AI) to scale and better their attacks.

The cybercrime problem is not going away anytime soon. Kerberos authentication is an excellent way for a company to protect its assets both now and as threats become more advanced.

What is Managed Detection and Response (MDR Security)?
https://devtest.phoenixnap.com/blog/what-is-managed-detection-response-mdr-security – Fri, 25 Sep 2020

Maintaining high levels of cybersecurity is expensive. To run security operations, companies must invest in skilled staff and set aside resources for the right tools and devices.

Managed Detection and Response (MDR) services are a cost-effective alternative to running an in-house security team.

This article provides everything you need to know about MDR security. Learn how Managed Detection and Response provides real-time protection without the expenses of a fully-staffed internal team.

What is Managed Detection and Response in Cybersecurity?

Managed Detection and Response (MDR) is an outsourced service that monitors a network for malicious activity. MDR offers proactive threat hunting to remove intrusions, data breaches, and malware before an attacker can strike.

It combines analytics and human expertise to detect and eliminate threats in the network. The standard scope of MDR security includes:

  • Threat detection: Constant monitoring of data and filtering alerts for analysis.
  • Threat analysis: Examining a potential threat to discover its origin, scope, and risk level.
  • Incident response: Notifying the client about the issue and removing the threat.

While less expensive than an internal team, MDR provides everything needed to keep a network secure:

  • 24/7 monitoring
  • Careful alert and incident analysis
  • Quick and efficient threat response
  • Threat hunting
  • Strong threat intelligence
  • Damage reduction from successful attacks and breaches

The service provider configures and provides the tools needed for MDR. Once set up, MDR tools analyze event logs and guard gateways to detect threats that evade standard security controls.

phoenixNAP implemented multiple security layers, including MDR, to design the world’s safest Cloud computing platform – Data Security Cloud.

Developed in collaboration with VMware and Intel, Data Security Cloud is a cloud infrastructure platform that leverages the latest MDR practices to ensure advanced data protection, vulnerability scanning, and endpoint protection.

While tools play a significant role, Managed Detection and Response primarily relies on humans for network monitoring. Tools filter event logs and detect potential Indicators of Compromise (IoC). Once a threat is recognized, human operators take over and remove the danger.


What is Threat Hunting in Cybersecurity?

Threat hunting in cybersecurity is a proactive approach to detecting, isolating, and removing threats. The main goal of threat hunting is to find malicious elements that evade automated security solutions.

Cyber threat hunting focuses on searching and eliminating threats before the attack occurs. This security measure does not involve addressing incidents that already took place.

Once malicious elements are located, threat hunters analyze the issue’s behavior and methods before neutralizing it. Threat hunting also involves identifying trends in attacks to prevent future breaches.

Threat hunting relies on human analysts. Tools speed up processes and repetitive tasks, but human operators make all crucial decisions.
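
The tooling side of this division of labor can be sketched as a filter that matches event logs against known Indicators of Compromise, so that human analysts only review likely hits. The log format and IoC values are invented for illustration.

```python
def match_iocs(events, iocs):
    """Flag events whose fields contain a known Indicator of Compromise.

    events: iterable of dicts, e.g. {"src_ip": ..., "domain": ...}
    iocs: set of known-bad values (IPs, domains, file hashes, ...)
    Returns a list of hits for an analyst to triage.
    """
    hits = []
    for event in events:
        matched = iocs.intersection(event.values())
        if matched:
            hits.append({"event": event, "matched": sorted(matched)})
    return hits

# Hypothetical event log and IoC feed:
iocs = {"203.0.113.9", "bad.example.net"}
events = [
    {"src_ip": "198.51.100.4", "domain": "intranet.local"},
    {"src_ip": "203.0.113.9", "domain": "bad.example.net"},
]
for hit in match_iocs(events, iocs):
    print(hit["matched"])
```

Everything past this point — judging scope, intent, and response — is where the human hunter takes over.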

MDR Is Growing in Popularity

When a company expands its IT system, there is a rise in network endpoints like laptops, desktops, and mobile devices. Each new endpoint creates a potential entry point for hackers.

Between constant monitoring and threat hunting, MDR is an excellent method of protecting endpoints. The ability to quickly secure entry points is why Managed Detection and Response is popular among enterprises. Large companies regularly add new devices to their systems, so defending endpoints is a big concern.

Enterprise Strategy Group (ESG) recently surveyed employees from mid-to-large enterprises to examine critical problems related to threat detection and response.

Below are some notable findings from the ESG research:

  • 77% of security experts said that managers are pressuring them to improve threat detection and response tactics.
  • According to 76% of companies, security analytics is more complex than two years ago.
  • 58% of businesses cited employee skills as the main problem for improving security.
  • Manual processes and alert fatigue are viewed as a critical issue by 70% of companies.

Add to those numbers the lack of capable staff on the market, and it becomes easy to see why there is an increase in demand for MDR.

What Problems Does MDR Solve?

Managed Detection and Response solves several common problems security teams face:

High Alert Volume

Too many alerts can overwhelm a small security team. Alert fatigue leads to inadequate monitoring, causes workers to neglect other tasks, and leaves a network open to an attack.

Managed Detection and Response helps handle the volume of alerts that would otherwise need to be checked individually. Once set up, MDR security does all the monitoring in the system, leaving the staff with ample time to focus on other duties.

Threat Analysis

It is hard to identify severe threats from alert noise. A malicious element may appear to be a random alert, while common errors can raise red flags across the system. To determine the cause, scope, and status of a problem, an IT team must analyze the situation.

By investing in MDR, a company secures advanced analytics tools and security experts capable of interpreting events in the network.

Advanced Attacks and Breaches

A poorly trained IT team can struggle when faced with an advanced threat.

MDR providers are staffed with security specialists capable of keeping up with cyberattacks. By investing in MDR security, you ensure the industry’s best talent monitors your networks and devices.

Endpoint Detection and Response (EDR)

Businesses often lack funds, time, or skills to train operators to use EDR tools properly. MDR services come with high-end EDR tools and the personnel who know how to use them. EDR tools are integrated into detection and response processes, removing the need for in-house endpoint protection.

Benefits of MDR Security

Standard tools for cybersecurity are good at stopping simple breaches and attacks. However, preventive tactics are not enough to secure an entire infrastructure.

MDR offers a thorough method of ensuring network safety. Instead of focusing solely on prevention, MDR goes after threats before they get an opportunity to cause damage.

Better Overall Approach to Security

Managed Detection and Response detects, analyzes, and stops threats, offering a comprehensive security solution.

When an MDR tool detects a problem, the team first verifies the validity of the threat. If the issue has a malicious cause, operators inform you about the situation and eliminate the threat.

Isolating the threat is another significant aspect of MDR. If a potential attack is spotted, the issue is contained within a single system. The threat is then unable to spread to other sectors of the network. That way, MDR reduces damage from successful breaches.

No False Alarms

A standard security control forwards unverified alerts straight to operators. The process of separating false signals from real dangers wastes time and resources.

MDR performs an in-depth investigation of every suspicious activity in the network. Each threat is analyzed to check its status. Alerts that reach the security team require immediate action, so there are no pointless distractions.

Fast, Seamless Deployment

Setting up a custom detection and response system requires time. One would need to license software tools, set up the system, create procedures and security policies, and train the staff.

MDR solutions require little configuring and follow cybersecurity best practices.

Swift Detection of Threats

The quicker a threat is detected and dealt with, the easier and cheaper it is to remove it. Without MDR security, it takes an average of 280 days to identify and contain a breach.

Managed Detection and Response improves detection levels and reduces dwell time of breaches.

Easier Compliance

All major MDR providers ensure their defense procedures are compliant with regulatory bodies. Your MDR partner can help review processes and implement best practices.

Managed Detection and Response (MDR) vs. Managed Security Services Providers (MSSP)

While the two types of service share similarities, there are differences between MDR and MSSP regarding tools, expertise, and objectives.

Here is a comparison between what typical MDR and MSSP services include:

MDR vs. MSSP Security Services

Feature                                                      MDR    MSSP
24/7 threat detection                                        Yes    Yes
Firewalls and other perimeter security infrastructure        Yes    Yes
Proactive threat hunting                                     Yes    Yes
Threat forensics                                             Yes    Yes
Responding to attacks                                        Yes    Yes
Portals and dashboards as the primary line of communication  No     Yes
An on-call team of experts                                   Yes    Yes
Deep threat intelligence and analysis                        Yes    No
Use of AI and machine learning                               Yes    Yes
Integrated endpoint security                                 Yes    No
Compliance checks                                            No     Yes
Vulnerability hunting                                        No     Yes

MDR security focuses on detecting and responding to potential malicious elements. MSSP is reactive and focuses on finding and eliminating vulnerabilities and compliance issues. Both types of service play a role in the modern IT landscape, and the better option entirely depends on the use case.

An MSSP system monitors network security controls and sends alerts when it detects an anomaly. It then forwards the report to the assigned IT staff, who inspect the data to analyze and remove any danger. In that regard, an MSSP secures infrastructure on more levels.

It is possible to use MSSP and MDR services at the same time. A company can rely on MSSP to run firewalls and other day-to-day operations. At the same time, MDR can detect and analyze advanced threats.

Does Artificial Intelligence (AI) Play a Role in MDR?

Applying AI to security problems is still in its early stages. Now, and for the foreseeable future, the only reliable security expert is a human operator.

Managed Detection and Response can leverage AI to speed up cyber defense. For example, advanced threat detection can rely on AI to filter through network events and identify unusual activities. An analyst then reviews each flagged event to determine whether it is a genuine threat or a false alarm.

AI-powered security tools also ensure fast incident-response times. An MDR provider uses AI and machine learning to investigate recurring events, auto-contain threats, and initiate reactions.
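As a toy illustration of this kind of statistical filtering (not any provider's actual model), an unusually high event count can be flagged against a recent baseline:

```python
import statistics

# Hypothetical hourly counts of one network event type; the last hour spikes.
hourly_counts = [42, 38, 45, 40, 39, 44, 41, 37, 43, 160]

baseline = hourly_counts[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag the newest observation if it sits more than 3 standard deviations
# from the baseline -- such events would be queued for a human analyst.
latest = hourly_counts[-1]
z = (latest - mean) / stdev
print(f"z-score = {z:.1f}, anomalous = {z > 3}")
```

Real MDR pipelines use far richer models, but the division of labor is the same: machines do the filtering, humans do the judging.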

graphic representing mdr security systems

Managed Detection and Response for a Secure Cloud

An increasing number of businesses are opting for MDR services. The benefits are clear: threat control, better response times, less downtime, and lower costs of cyber protection.

PhoenixNAP offers the most well-rounded MDR solution on the market as a part of our Data Security Cloud offering. Accessible to SMBs and enterprises alike, our Data Security Cloud ensures MDR protocols protect all your cloud activities. We provide the right talent and tools to proactively hunt and stop threats, ensuring you can focus on developing your business without distractions.

To learn how we can help you secure your workloads, contact one of our specialists today.

]]>
7 Network Segmentation Security Best Practices https://devtest.phoenixnap.com/blog/network-segmentation-security Wed, 26 Aug 2020 16:59:11 +0000 https://devtest.phoenixnap.com/blog/?p=78478

A network breach is an inevitable risk online. Attacks are bound to occur, and every network must be able to withstand a break-in. One cybersecurity mechanism stands out as especially effective at limiting the damage of a network breach.

That cybersecurity technique is network segmentation.

Network segmentation can protect vital parts of the network during successful data breaches. If the system gets compromised, this defense mechanism limits the damage intruders can cause.

This article presents all the security benefits of network segmentation. You will learn how multi-layer protection can resist attacks and why segmentation is more reliable than flat network structures.

What is Network Segmentation?

Network segmentation is the process of dividing a network into smaller sections. These sections are created by placing barriers between parts of the system that don’t need to interact. For example, a company may create a subnet for its printers, or make a segment reserved for storing data.

Once you segment a network, every subnet functions as an independent system with unique access and security controls. Such network design allows you to control the flow of data traffic between sections. You can stop all traffic in one segment from reaching another. Additionally, network engineers use network segmentation to filter the data flow by traffic type, source, or purpose.
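As a sketch of the idea, Python's standard `ipaddress` module can carve a network into such segments (the address ranges below are illustrative):

```python
import ipaddress

# Carve an illustrative /24 office network into four /26 segments, e.g. one
# each for workstations, printers, storage, and guests.
office = ipaddress.ip_network("10.0.0.0/24")
segments = list(office.subnets(new_prefix=26))

for name, subnet in zip(["workstations", "printers", "storage", "guests"], segments):
    print(f"{name:13s} {subnet}  ({subnet.num_addresses} addresses)")

# Hosts in one segment are not in another, which is what lets a firewall
# or switch ACL block traffic between them.
printers = segments[1]
print(ipaddress.ip_address("10.0.0.70") in printers)   # True
print(ipaddress.ip_address("10.0.0.10") in printers)   # False
```

The actual isolation is enforced by switches, VLANs, and firewall rules, but the address math behind the subnets is exactly this.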

Isolating parts of a network limits a threat’s ability to move freely through the system. If a section of the network gets breached, other segments are not compromised.

A basic network segmentation diagram.

Types of Network Segmentation

Network engineers segment a network either physically or virtually. Let’s compare the two segmentation methods:

  • Physical segmentation: To physically segment a network, each subnet needs its own wiring, connections, and firewall. Physical segmentation offers reliable protection, but it can be hard to apply on a large system.
  • Virtual segmentation: This is the more common and affordable method of dividing a network. Different segments share the same firewalls, while switches manage the virtual local area networks (VLANs).

Both segmentation methods have their pros and cons, but their effect is the same. You limit communication within the network and make it hard for a threat to attack more than one section.

 

Why Segment a Network?

Standard flat networks are simple to manage, but they do not offer reliable protection. Firewalls monitor all incoming traffic in a flat network architecture, and the focus is on stopping attacks from outside the system. If hackers pass the perimeter and enter the network, nothing prevents them from accessing databases and critical systems.

Segmentation eliminates the flaws of flat networks. Segmenting a network also leads to better performance due to less congestion. With fewer hosts per subnet, a segmented network minimizes local traffic and reduces the “noise” in broadcast traffic.

Another benefit of a network segmentation policy is that it helps ease the compliance requirements. You can split a network into zones that contain data with the same compliance rules. Separate zones decrease the scope of compliance and simplify security policies.

Flat networks vs Segmented networks

Security Advantages of Network Segmentation

Of all the types of network security, segmentation provides the most robust and effective protection.

Network segmentation security benefits include the following:

1. Strong Data Protection

The more you control the traffic in a network, the easier it is to protect essential data. Segmentation builds a wall around your data caches by limiting the number of network sections that access them.

Fewer segments with access to data mean fewer points of access for hackers to steal anything of value. Between limited access and local security protocols, you reduce the risk of data loss and theft.

2. Threat Containment

If hackers breach a segmented network, they are contained within a single subnet. It takes time to break into the rest of the system.

As hackers try to force their way into other subnets, admins have time to upgrade the security of other segments. Once they stop the problem from spreading, admins can turn their attention to the breached section.

PhoenixNAP uses network micro-segmentation, machine learning, and behavioral analytics in the world’s most secure cloud platform – Data Security Cloud.

3. Limited Access Control

Segmentation protects from insider attacks by limiting user access to a single part of a network. This security measure is known as the Policy of Least Privilege. By ensuring only a select few can reach vital segments of the network, you limit the way hackers can enter critical systems.

The Policy of Least Privilege is vital because people are the weakest link in the network security chain. According to Verizon’s 2020 Data Breach Investigations Report, over two-thirds of malware network breaches happen because of malicious emails.

A segmented network will keep intruders away from critical resources if a user’s credentials are stolen or abused.

4. Improved Monitoring and Threat Detection

Segmentation lets you add more points of network monitoring. More checks make it easier to spot suspicious behavior. Advanced monitoring also helps identify the root and scope of a problem.

Monitoring log events and internal connections enables admins to look for patterns in malicious activity. Knowing how attackers behave allows for a proactive approach to security and helps admins protect high-risk areas.

How network segmentation secures your data

5. Fast Response Rates

Distinct subnets allow admins to respond to events in the network quickly. When an attack or an error occurs, it is easy to see which segments are affected. These insights help to narrow the focus area of troubleshooting.

Quick responses to events in the network can also improve user experience. Unless a subnet dedicated to users has been breached, customers will not feel the impact of an issue in a segmented network.

6. Damage Control

Network segmentation minimizes the damage caused by a successful cybersecurity attack. By limiting how far an attack can spread, segmentation contains the breach in one subnet and ensures the rest of the network is safe.

Network errors are also contained within a single subnet. The effects of an issue are not felt in other segments, making the error easier to control and fix.

7. Protect Endpoint Devices

Due to constant flow control, segmentation keeps malicious traffic away from unprotected endpoint devices. This benefit of network segmentation is becoming crucial as IoT devices become commonplace.

The endpoint device is both a usual target and starting point of cyberattacks. A segmented network isolates these devices, limiting the risk of exposure for the entire system.

Protecting endpoint devices with network segmentation

Don’t Overlook The Importance of Network Segmentation

While network segmentation is not a novelty, it is by no means outdated. Between the cut in attack surface and traffic control, segmentation is among the best methods of stopping critical breaches.

Network segmentation is currently more a best practice model than a requirement. However, trends in industries with frequent cyberattacks — most notably the Payment Card Industry — suggest that segmentation could become a mandatory security measure. Whether that happens or not, the demand for subnets in some of the most regulated industries speaks volumes about the security benefits of network segmentation.

If you want to learn more about what it takes to secure a Cloud platform, read our Secure by Design Manifesto and discover how many security layers are necessary to design the most secure Cloud infrastructure in the world.

]]>
8 Types of Firewalls: Guide For IT Security Pros https://devtest.phoenixnap.com/blog/types-of-firewalls Thu, 09 Jul 2020 17:52:34 +0000 https://devtest.phoenixnap.com/blog/?p=77867

Are you searching for the right firewall setup to protect your business from potential threats?

Understanding how firewalls work helps you decide on the best solution. This article explains the types of firewalls, allowing you to make an educated choice.

What is a Firewall?

A firewall is a security device that monitors network traffic. It protects the internal network by filtering incoming and outgoing traffic based on a set of established rules. Setting up a firewall is the simplest way of adding a security layer between a system and malicious attacks.

How Does a Firewall Work?

A firewall is placed on the hardware or software level of a system to secure it from malicious traffic. Depending on the setup, it can protect a single machine or a whole network of computers. The device inspects incoming and outgoing traffic according to predefined rules.

Communicating over the Internet is conducted by requesting and transmitting data from a sender to a receiver. Since data cannot be sent as a whole, it is broken up into manageable data packets that make up the initially transmitted entity. The role of a firewall is to examine data packets traveling to and from the host.

What does a firewall inspect? Each data packet consists of a header (control information) and payload (the actual data). The header provides information about the sender and the receiver. Before the packet can enter the internal network through the defined port, it must pass through the firewall. This transfer depends on the information it carries and how it corresponds to the predefined rules.

diagram of how a firewall works

For example, the firewall can have a rule that excludes traffic coming from a specified IP address. If it receives data packets with that IP address in the header, the firewall denies access. Similarly, a firewall can deny access to anyone except the defined trusted sources. There are numerous ways to configure this security device. The extent to which it protects the system at hand depends on the type of firewall.
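The deny-by-source rule described above reduces to a simple membership check. A minimal sketch (the addresses and field names are hypothetical):

```python
# Hypothetical deny list of source addresses, as in the rule described above.
BLOCKED_SOURCES = {"203.0.113.7", "198.51.100.23"}

def allow_packet(header: dict) -> bool:
    """Deny packets whose source IP is on the block list; allow the rest."""
    return header.get("src_ip") not in BLOCKED_SOURCES

print(allow_packet({"src_ip": "203.0.113.7", "dst_port": 443}))  # False -- blocked
print(allow_packet({"src_ip": "192.0.2.55", "dst_port": 443}))   # True -- allowed
```

Flipping the policy — deny everything except a trusted allow list — is the same check with the set membership inverted.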

Types of Firewalls

Although they all serve to prevent unauthorized access, the operation methods and overall structure of firewalls can be quite diverse. According to their structure, firewalls are software, hardware, or a combination of both. The remaining types of firewalls in this list are firewall techniques that can be set up as either software or hardware.

Software Firewalls

A software firewall is installed on the host device. Accordingly, this type of firewall is also known as a host firewall. Since it runs on a specific device, it has to utilize that device’s resources, so it inevitably consumes some of the system’s RAM and CPU.

If there are multiple devices, you need to install the software on each device. Since it needs to be compatible with the host, it requires individual configuration for each. Hence, the main disadvantage is the time and knowledge needed to administrate and manage firewalls for each device.

On the other hand, the advantage of software firewalls is that they can distinguish between programs while filtering incoming and outgoing traffic. Hence, they can deny access to one program while allowing access to another.

Hardware Firewalls

As the name suggests, hardware firewalls are security devices that represent a separate piece of hardware placed between an internal and external network (the Internet). This type is also known as an Appliance Firewall.

Unlike a software firewall, a hardware firewall has its own resources and doesn’t consume any CPU or RAM from the host devices. It is a physical appliance that serves as a gateway for traffic passing to and from an internal network.

They are used by medium and large organizations that have multiple computers working inside the same network. Utilizing hardware firewalls in such cases is more practical than installing individual software on each device. Configuring and managing a hardware firewall requires knowledge and skill, so make sure there is a skilled team to take on this responsibility.

Packet-Filtering Firewalls

When it comes to types of firewalls based on their method of operation, the most basic type is the packet-filtering firewall. It serves as an inline security checkpoint attached to a router or switch. As the name suggests, it monitors network traffic by filtering incoming packets according to the information they carry.

As explained above, each data packet consists of a header and the data it transmits. This type of firewall decides whether a packet is allowed or denied access based on the header information. To do so, it inspects the protocol, source IP address, destination IP, source port, and destination port. Depending on how the numbers match the access control list (rules defining wanted/unwanted traffic), the packets are passed on or dropped.

Packet filtering firewall

If a data packet doesn’t match all the required rules, it won’t be allowed to reach the system.

A packet-filtering firewall is a fast solution that doesn’t require a lot of resources. However, it isn’t the safest. Although it inspects the header information, it doesn’t check the data (payload) itself. Because malware can also be found in this section of the data packet, the packet-filtering firewall is not the best option for strong system security.
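The header-vs-ACL comparison can be sketched as a first-match rule walk (the rules, field names, and addresses below are hypothetical):

```python
# Hypothetical access control list, evaluated top-down; first match wins.
# Each rule matches on 5-tuple fields; None acts as a wildcard.
ACL = [
    {"proto": "tcp", "src_ip": None, "dst_ip": "10.0.0.5", "dst_port": 22, "action": "deny"},
    {"proto": "tcp", "src_ip": None, "dst_ip": None, "dst_port": 443, "action": "allow"},
]
DEFAULT_ACTION = "deny"  # implicit deny when no rule matches

def evaluate(packet: dict) -> str:
    for rule in ACL:
        if all(rule[f] in (None, packet.get(f)) for f in ("proto", "src_ip", "dst_ip", "dst_port")):
            return rule["action"]
    return DEFAULT_ACTION

print(evaluate({"proto": "tcp", "src_ip": "198.51.100.9", "dst_ip": "10.0.0.5", "dst_port": 22}))   # deny
print(evaluate({"proto": "tcp", "src_ip": "198.51.100.9", "dst_ip": "10.0.0.8", "dst_port": 443}))  # allow
print(evaluate({"proto": "udp", "src_ip": "198.51.100.9", "dst_ip": "10.0.0.8", "dst_port": 53}))   # deny
```

Note that only the header fields are consulted — exactly why the payload can carry malware straight past this kind of filter.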


PACKET-FILTERING FIREWALLS

Advantages:
  • Fast and efficient for filtering headers.
  • Don’t use up a lot of resources.
  • Low cost.

Disadvantages:
  • No payload check.
  • Vulnerable to IP spoofing.
  • Cannot filter application layer protocols.
  • No user authentication.

Protection level: Not very secure, as they don’t check the packet payload.

Who is it for:
  • A cost-efficient solution to protect devices within an internal network.
  • A means of isolating traffic internally between different departments.

Circuit-Level Gateways

Circuit-level gateways are a type of firewall that work at the session layer of the OSI model, observing TCP (Transmission Control Protocol) connections and sessions. Their primary function is to ensure the established connections are safe.

In most cases, circuit-level firewalls are built into some type of software or an already existing firewall.

Like packet-filtering firewalls, they don’t inspect the actual data but rather the information about the transaction. Additionally, circuit-level gateways are practical, simple to set up, and don’t require a separate proxy server.


CIRCUIT-LEVEL GATEWAYS

Advantages:
  • Resource- and cost-efficient.
  • Provide data hiding and protect against address exposure.
  • Check TCP handshakes.

Disadvantages:
  • No content filtering.
  • No application layer security.
  • Require software modifications.

Protection level: Moderate (higher than packet filtering, but not completely efficient since there is no content filtering).

Who is it for:
  • They should not be used as a stand-alone solution.
  • They are often used with application-layer gateways.

Stateful Inspection Firewalls

A stateful inspection firewall keeps track of the state of a connection by monitoring the TCP 3-way handshake. This allows it to keep track of the entire connection – from start to end – permitting only expected return traffic inbound.

When starting a connection and requesting data, the stateful inspection builds a database (state table) and stores the connection information. In the state table, it notes the source IP, source port, destination IP, and destination port for each connection. Using the stateful inspection method, it dynamically creates firewall rules to allow anticipated traffic.

This type of firewall is used as additional security. It enforces more checks and is safer than stateless filters. Unlike stateless packet filtering, stateful firewalls inspect the actual data transmitted across multiple packets instead of just the headers. Because of this, they also require more system resources.
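A toy version of the state table described above, keyed on the connection tuple (the field layout is illustrative, not any vendor's format):

```python
# Toy state table: record outbound connections, then admit only the
# return traffic that matches a known connection.
state_table = set()

def on_outbound(src_ip, src_port, dst_ip, dst_port):
    """Record an outbound connection so its return traffic is expected."""
    state_table.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip, src_port, dst_ip, dst_port):
    """Permit inbound traffic only if it reverses a known connection."""
    return (dst_ip, dst_port, src_ip, src_port) in state_table

on_outbound("10.0.0.5", 51000, "93.184.216.34", 443)
print(allow_inbound("93.184.216.34", 443, "10.0.0.5", 51000))  # True -- expected return traffic
print(allow_inbound("203.0.113.7", 443, "10.0.0.5", 51000))    # False -- unsolicited
```

Real implementations also track TCP handshake state and expire idle entries, which is where the extra resource cost comes from.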


STATEFUL INSPECTION FIREWALLS

Advantages:
  • Keep track of the entire session.
  • Inspect headers and packet payloads.
  • Offer more control.
  • Operate with fewer open ports.

Disadvantages:
  • Not as cost-effective, as they require more resources.
  • No authentication support.
  • Vulnerable to DDoS attacks.
  • May slow down performance due to high resource requirements.

Protection level: More advanced security, as they inspect entire data packets and block attacks that exploit protocol vulnerabilities. Not effective against attacks that exploit stateless protocols.

Who is it for: Considered the standard network protection for cases that need a balance between packet filtering and application proxy.

Proxy Firewalls

A proxy firewall serves as an intermediary between internal and external systems communicating over the Internet. It protects the network by forwarding requests from the original client and masking them as its own. Proxy means to serve as a substitute, and that is exactly the role it plays: it substitutes for the client sending the request.

When a client sends a request to access a web page, the message is intercepted by the proxy server. The proxy forwards the message to the web server, pretending to be the client. Doing so hides the client’s identity and geolocation, protecting it from restrictions and potential attacks. The web server then responds, giving the proxy the requested information, which is passed on to the client.
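As a toy illustration of the masking behavior (not a real proxy implementation; all names below are made up):

```python
# Toy sketch of proxy masking: the origin server only ever sees the proxy's
# identity. `proxy.example.net` and the handler below are hypothetical names.
def handle_client_request(client_addr: str, url: str, fetch) -> bytes:
    # The proxy re-issues the request under its own identity;
    # the client's address is deliberately left out.
    request = {"url": url, "source": "proxy.example.net"}
    return fetch(request)  # relay the server's response back to the client

# Stand-in for the web server, recording who it believes made each request.
seen_sources = []
def fake_web_server(request):
    seen_sources.append(request["source"])
    return b"<html>page</html>"

body = handle_client_request("192.0.2.10", "https://example.com/", fake_web_server)
print(seen_sources)  # ['proxy.example.net'] -- the client's address never reaches the server
```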


PROXY FIREWALLS

Advantages:
  • Protect systems by preventing contact with other networks.
  • Ensure user anonymity.
  • Unlock geolocational restrictions.

Disadvantages:
  • May reduce performance.
  • Need additional configuration to ensure overall encryption.
  • Not compatible with all network protocols.

Protection level: Good network protection if configured well.

Who is it for:
  • Used for web applications to secure the server from malicious users.
  • Utilized by users to ensure network anonymity and bypass online restrictions.

Next-Generation Firewalls

The next-generation firewall is a security device that combines a number of functions of other firewalls. It incorporates packet, stateful, and deep packet inspection. Simply put, NGFW checks the actual payload of the packet instead of focusing solely on header information.

Unlike traditional firewalls, the next-gen firewall inspects the entire transaction of data, including the TCP handshakes, surface-level, and deep packet inspection.

Using NGFW is adequate protection from malware attacks, external threats, and intrusion. These devices are quite flexible, and there is no clear-cut definition of the functionalities they offer. Therefore, make sure to explore what each specific option provides.


NEXT-GENERATION FIREWALLS

Advantages:
  • Integrate deep packet inspection, antivirus, spam filtering, and application control.
  • Automatic upgrades.
  • Monitor network traffic from Layer 2 to Layer 7.

Disadvantages:
  • Costly compared to other solutions.
  • May require additional configuration to integrate with existing security management.

Protection level: Highly secure.

Who is it for:
  • Suitable for businesses that require PCI or HIPAA compliance.
  • For businesses that want a package-deal security device.

Cloud Firewalls

A cloud firewall, or firewall-as-a-service (FWaaS), is a cloud solution for network protection. Like other cloud solutions, it is maintained and run over the Internet by third-party vendors.

Clients often utilize cloud firewalls as proxy servers, but the configuration can vary according to the demand. Their main advantage is scalability. They are independent of physical resources, which allows scaling the firewall capacity according to the traffic load.

Businesses use this solution to protect an internal network or other cloud infrastructures (IaaS/PaaS).


CLOUD FIREWALLS

Advantages:
  • Availability.
  • Scalability that offers increased bandwidth and new site protection.
  • No hardware required.
  • Cost-efficient in terms of managing and maintaining equipment.

Disadvantages:
  • A wide range of prices depending on the services offered.
  • The risk of losing control over security assets.
  • Possible compatibility difficulties if migrating to a new cloud provider.

Protection level: Good protection in terms of high availability and having professional staff take care of the setup.

Who is it for: A solution suitable for larger businesses that do not have an in-house security team to maintain and manage on-site security devices.

Which Firewall Architecture is Right for Your Business?

When deciding which firewall to choose, there is no need to limit yourself to a single type. Using more than one firewall type provides multiple layers of protection.

Also, consider the following factors:

  • The size of the organization. How big is the internal network? Can you manage a firewall on each device, or do you need a firewall that monitors the internal network? These questions are important to answer when deciding between software and hardware firewalls. Additionally, the decision between the two will largely depend on the capabilities of the tech team assigned to manage the setup.
  • The resources available. Can you afford to separate the firewall from the internal network by placing it on a separate piece of hardware or even on the cloud? The traffic load the firewall needs to filter and whether it is going to be consistent also plays an important role.
  • The level of protection required. The number and types of firewalls should reflect the security measures the internal network requires. A business dealing with sensitive client information should ensure that data is protected from hackers by tightening the firewall protection.

 

Build a firewall setup that fits the requirements considering these factors. Utilize the ability to layer more than one security device and configure the internal network to filter any traffic coming its way. For secure cloud options, see how phoenixNAP ensures cloud data security.

]]>
What is a Brute Force Attack? Types & Examples https://devtest.phoenixnap.com/blog/brute-force-attack Thu, 02 Jul 2020 14:53:47 +0000 https://devtest.phoenixnap.com/blog/?p=77227

Brute force attacks are alluring for hackers as they are often reliable and simple.

Hackers do not need to do much of the work. All they have to do is create an algorithm or use readily available brute force attack programs to automatically run different combinations of usernames and passwords until they find the right combination.  Such cyberattacks account for roughly 5 percent of all data breaches. According to statistics on data breaches, it only takes one data breach to create severe adverse implications for your business.

attacking an automated system in a brute force attack

What is a Brute Force Attack?

The phrase “brute force” describes the simplistic manner in which the attack takes place. Since the attack involves guessing credentials to gain unauthorized access, it’s easy to see where it gets its name. Primitive as they are, brute force attacks can be very effective.

The majority of cyberattackers who specialize in brute force attacks use bots to do their bidding. Attackers will generally have a list of real or commonly used credentials and assign their bots to attack websites using these credentials.

Manual brute force cracking is time-consuming, and most attackers use brute force attack software and tools to aid them. With the tools at their disposal, attackers can attempt things like inputting numerous password combinations and accessing web applications by searching for the correct session ID, among others.

How Brute Force Attacks Work

In simple terms, brute force attacks try to guess login passwords. Brute force password cracking comes down to a numbers game.

Most online systems require passwords to be at least eight characters long. A typical eight-character password mixes numeric and alphabetic (case sensitive) characters, which gives 62 possibilities for each character in the password chain. Combining 62 options for every character of an eight-character password yields roughly 218 trillion (62⁸) possible combinations. That is a lot of combinations for a cyberattacker to try.

In the past, if a hacker tried to crack an eight-character password with one attempt per second, it would roughly take seven million years at most. Even if the hacker were able to attempt 1000 combinations per second, it would still take seven thousand years.

Brute force attacks try to guess passwords to enter systems

It’s a different story nowadays, with brute force hacking software able to attempt vastly more combinations per second. For example, say a supercomputer can input 1 trillion combinations per second. With that amount of power, a hacker can work through all 218 trillion password combinations in under four minutes!

Computers manufactured within the last decade have advanced to the point where only two hours are necessary to crack an eight-character alphanumeric password. Many cyber attackers can decrypt a weak encryption hash in months by using an exhaustive key search brute force attack.

The example above applies to password combinations of 8 characters in length. The time it takes to crack a password varies depending on its length and overall complexity.
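The arithmetic behind these estimates is easy to reproduce (the guess rates are illustrative):

```python
# Search space for an 8-character password drawn from digits plus upper-
# and lowercase letters: 10 + 26 + 26 = 62 symbols per position.
combinations = 62 ** 8
print(f"{combinations:,}")  # 218,340,105,584,896 -- roughly 218 trillion

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Illustrative guess rates: one per second, a thousand per second,
# and a hypothetical trillion-guess-per-second machine.
for rate in (1, 1_000, 1_000_000_000_000):
    years = combinations / rate / SECONDS_PER_YEAR
    print(f"{rate:>16,} guesses/s -> {years:,.6f} years")
```

At one guess per second, exhausting the space takes close to seven million years; at a trillion guesses per second, the same space falls in under four minutes.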

Why Do Hackers Use Brute Force Attacks?

Hackers use brute force attacks during initial reconnaissance and infiltration. They can easily automate brute force attacks and even run them in parallel to maximize their chances of cracking credentials. However, that is not where their actions stop.

Once they gain access to a system, attackers will attempt to move laterally to other systems, gain advanced privileges, or run encryption downgrade attacks. Their end goal is often to cause a denial of service or exfiltrate data from the system.

cyber kill chain process diagram

Brute force attacks are also used to find hidden web pages that attackers can exploit. This attack can be programmed to test web addresses, find valid web pages, and identify code vulnerabilities. Once identified, attackers use that information to infiltrate the system and compromise data.

Brute force attack programs are also used to test systems and their vulnerability to such attacks. Furthermore, a targeted brute force attack is a last resort option for recovering lost passwords.

Types of Brute Force Attacks

Brute force cracking boils down to inputting every possible combination until access is gained. However, there are several variants of this kind of attack.

diagram of the different kinds of brute force attacks hackers use

Dictionary Attack

A dictionary attack uses a dictionary of possible passwords and tests them all.

Instead of using an exhaustive key search, where they try every possible combination, the hacker begins from an assumption of common passwords. They build a dictionary of passwords and iterate the inputs.

With this approach, hackers eliminate having to attack websites randomly. Instead, they can acquire a password list to improve their chances of success.

Dictionary attacks often need a large number of attempts against multiple targets.
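Seen from the defender's side, the same idea explains why common passwords fall instantly: cracking one is a single membership check against the attacker's list. A minimal sketch, using a tiny stand-in for a real leaked-password dictionary:

```python
# Tiny illustrative stand-in for a leaked-password dictionary; real lists
# hold millions of entries.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "iloveyou"}

def falls_to_dictionary_attack(password: str) -> bool:
    """A password on the attacker's word list is found on the first pass."""
    return password.lower() in COMMON_PASSWORDS

print(falls_to_dictionary_attack("LetMeIn"))     # True  -- in the dictionary
print(falls_to_dictionary_attack("x9#Tq!e2Lw"))  # False -- not in the dictionary
```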

Simple Brute Force Attack

A simple brute force attack is used to gain access to local files, as there is no limit to the number of access attempts. The higher the scale of the attack, the more successful the chances are of entry.

Simple brute force attacks cycle through all possible passwords one at a time.

Hybrid Brute Force Attack

The hybrid brute force attack combines aspects of both the dictionary and simple brute force attack. It begins with an external logic, such as the dictionary attack, and moves on to modify passwords akin to a simple brute force attack.

The hybrid attack uses a list of passwords, and instead of testing every password, it will create and try small variations of the words in the password list, such as changing cases and adding numbers.
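The variation step can be sketched as a small generator (the mutation rules below are simplified illustrations, not a real cracking tool's rule set):

```python
# Simplified illustration of hybrid mutation: take each dictionary word and
# generate a handful of common variants (case changes, appended digits).
def variants(word):
    for base in (word, word.capitalize(), word.upper()):
        yield base
        for suffix in ("1", "123", "2020"):
            yield base + suffix

print(list(variants("dragon"))[:6])
# ['dragon', 'dragon1', 'dragon123', 'dragon2020', 'Dragon', 'Dragon1']
```

Even these few rules multiply each dictionary word into a dozen candidates, which is why "password2020" is barely stronger than "password".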

Reverse Brute Force Attack

The reverse brute force attack flips the method of guessing passwords on its head. Rather than guessing the password, it will use a generic one and try to brute force a username.

Credential Recycling

As it sounds, credential recycling reuses passwords. Since many institutions don’t use password managers or have strict password policies, password reuse is an easy way to gain access to accounts.

Because these cyberattacks depend entirely on lists of second-hand credentials gained from data breaches, they have a low rate of success. It’s essential to update usernames and passwords promptly after a breach to limit the effectiveness of stolen credentials.

Rainbow Table Attacks

Rainbow table attacks are unique as they don’t target passwords; instead, they are used to target the hash function, which encrypts the credentials.

The table is a precomputed dictionary of plain text passwords and corresponding hash values. Hackers can then see which plain text passwords produce a specific hash and expose them.

When a user enters a password, it converts into a hash value. If the hash value of the inputted password matches the stored hash value, the user authenticates. Rainbow table attacks exploit this process.
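A real rainbow table compresses chains of hashes to save space, but the core idea — hash the candidates once, then reverse a leaked hash by lookup — can be sketched with a plain dictionary:

```python
import hashlib

# Precompute hashes for a small candidate list. A real table holds billions
# of entries and uses hash chains to save space; this dict is a simplification.
candidates = ["123456", "password", "qwerty"]
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in candidates}

# Given a leaked hash, recovery becomes a constant-time lookup, not a search.
leaked = hashlib.sha256(b"qwerty").hexdigest()
print(table.get(leaked))  # qwerty
```

This is also why salting is an effective defense: a unique salt per password pushes every stored hash outside any precomputed table.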

If you’re concerned about impending cyber threats, a phoenixNAP consultant can walk you through our Data Security Cloud, the world’s safest cloud with an in-built threat management system.

Examples of Brute Force Attacks

How common are brute force attacks?

Brute force attacks are so frequent that everyone, from individuals to enterprises operating in the online realm, has experienced such an attack. The organizations that have been hit the hardest in the last couple of years include:

  • In 2018, Firefox’s master password feature was proven to be easily cracked with a brute force attack. It is unknown how many users’ credentials were exposed. In 2019, Firefox deployed a fix to resolve this issue.
  • In March 2018, Magento was hit by a brute force attack. Up to 1,000 admin panels were compromised.
  • In March 2018, several accounts of members of the Northern Irish Parliament were compromised in a brute force attack.
  • In 2016, a brute force attack resulted in a massive data leak in the e-Commerce giant, Alibaba.
  • According to Kaspersky, RDP-related brute force attacks rose dramatically in 2020 due to the COVID-19 pandemic.

Every brute force attack’s end goal is to steal data and/or cause a disruption of service.

How to Detect Brute Force Attacks

The key to detecting a bad actor trying to brute force their way into your system is monitoring unsuccessful login attempts. If you see many repeated failed login attempts, be suspicious. Watch for signs such as multiple failed login attempts from the same IP address and the use of multiple usernames from the same IP address.

Other signs can include a variety of unrecognized IP addresses unsuccessfully attempting to login to a single account, an unusual numerical or alphabetical pattern of failed logins, and multiple login attempts in a short time period.
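A minimal sketch of this kind of detection, assuming hypothetical, already-parsed log entries (real monitoring would also apply a time window and feed a threat management system):

```python
from collections import Counter

# Hypothetical parsed auth-log entries: (source IP, username, success flag).
events = [
    ("203.0.113.7", "admin", False),
    ("203.0.113.7", "root", False),
    ("198.51.100.2", "alice", True),
    ("203.0.113.7", "admin", False),
    ("203.0.113.7", "administrator", False),
]

THRESHOLD = 3  # failed attempts before an IP is flagged; tune to your environment
failed_by_ip = Counter(ip for ip, _user, ok in events if not ok)
suspects = [ip for ip, count in failed_by_ip.items() if count >= THRESHOLD]
print(suspects)  # -> ['203.0.113.7']
```

Flagged addresses can then be rate-limited or blocked at the firewall.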

It's also possible for these cyberattacks to add you to a botnet that can perform denial-of-service attacks on your website. Aside from the above, spam, malware, and phishing attacks can all be precursors to a brute force attack.

If you receive an email from your network service provider notifying you that a user from an unrecognized location has logged into your system, immediately change all passwords and credentials.

In Conclusion, Stay Safe and Secure

The primitive nature of brute force attacks means there is an easy way to defend against them. The best defense against a brute force attack is to buy yourself as much time as you can, as these types of attacks usually take weeks or months to provide anything of substance to the hacker. The simplest precaution you can take to boost your accounts’ security is to use strong passwords.

It is also highly recommended to monitor servers and systems at all times. Utilizing a threat management system can significantly help as it detects and reports issues in real-time.

For more information, read our detailed knowledge base article on how to prevent brute force attacks.

7 Best Practices For Securing Remote Access for Employees https://devtest.phoenixnap.com/blog/secure-remote-access-best-practices Thu, 07 May 2020 14:23:31 +0000 https://devtest.phoenixnap.com/blog/?p=77216

How do you maintain security when employees work remotely, and your team is transitioning to a remote workforce?

As remote work is becoming a more prevalent trend in business and considering the recent COVID-19 outbreak, there’s no better time for employees and companies alike to make strides in securing remote work.

This guide aims to instruct employees and management of businesses, both small and large, in the tools and steps available to them.

[Image: working from home cybersecurity for employees]

No single one of the following security measures will thwart cyber threats on its own; used in tandem, however, these measures compound to strengthen your organization's cybersecurity.

1. Develop a Cybersecurity Policy For Remote Workers

If your business allows remote work, you must have a clear cybersecurity policy in place so that every employee’s access to company data is secure. Without a strategy in place, any employee can easily become an entry-point for a hacker to hijack your organization’s network.

To prevent this from happening, create a cybersecurity policy stipulating guidelines for complying with security protocols at home or while traveling. Policies may include the expected use of approved, encrypted messaging programs such as Signal or WhatsApp; schedules for updating and patching computer security software, such as antivirus or anti-malware tools; and protocols for remotely wiping lost devices.

Company-owned Devices

If your business has the means to give its employees laptops, you should consider it. This strategy is the best way to secure remote work because you can have your IT department manually configure firewall settings and install antivirus and anti-malware.

Conduct Regular Back-ups to Hard Drives

Any business is only as good as its data. Most companies nowadays store data on encrypted cloud storage services; however, regularly backing up to a physical drive is also encouraged, as offline drives cannot be hacked remotely.

Third-Party Vendors

Direct employees aren’t the only ones who risk compromising your company’s internal network. Third-party vendors are also responsible for creating entry-points into system infrastructure; therefore, your policy should extend to them as well.

Target’s data breach is an example of a breach caused by excessive privileges from third-party vendors. The Target example illustrates the need for organizations to reform their policy when issuing privileges to third-parties; otherwise, they may inadvertently create weak links in their security.

With third-party vendors in mind, you can gain a better understanding of your third-party environment by taking inventory of all vendor connections. Once you have an idea, it’s possible to increase your security by monitoring and investigating vendor activity through conducting session recordings and looking for any sort of malicious activity or policy violation.

Service-Level Agreements

Provide a third-party vendor with a service-level agreement (SLA). This option will force vendors to adhere to your organization’s security policies; otherwise, they face penalties.

Eliminate Shared Accounts

A simple yet effective approach is to eliminate shared accounts among vendors. Without shared accounts, you decrease the risk of unauthorized access; this is yet another reason to invest in a password management tool.

Mobile Security

As business and life become more intertwined, employees often use their phones for work purposes. However, working from a mobile device can pose a security risk to your business.

Inform your employees of the danger of unsecured Wi-Fi networks. When using unsecured Wi-Fi, your phone is exposed to potential hackers looking to compromise your device. To prevent any unwanted intrusions, only use encrypted software to communicate.

It’s also best to restrict the use of applications on your mobile device when working. You can do this by delving into your phone’s permission settings for applications (app permissions).

Finally, turning off Bluetooth when working can limit paths to intrusion.

Network Border Protection

For large businesses, network traffic can be filtered to allow the flow of legitimate traffic and block potential intruders looking to exploit your network. This filtering means you can analyze and block inbound requests that come from unauthorized IP addresses, as these are inherent risks to your system. Rules blocking incoming requests from unknown sources can be set in your firewall's inbound configuration.
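Conceptually, the filtering boils down to a source-address check like the following sketch (the address ranges are placeholders; in practice the check lives in your firewall's inbound rules, not in application code):

```python
import ipaddress

# Example approved source ranges -- substitute your own networks.
ALLOWED = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(source_ip: str) -> bool:
    """Return True if the source address falls inside an approved range."""
    address = ipaddress.ip_address(source_ip)
    return any(address in network for network in ALLOWED)

print(is_allowed("10.1.2.3"))      # True  -- inside 10.0.0.0/8
print(is_allowed("198.51.100.9"))  # False -- unknown source, would be dropped
```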

2. Choose a Remote Access Software

When telecommuting, there are three primary ways to secure your work online: remote computer access, virtual private networks, or direct application access. Each method has its benefits and drawbacks, so choose the one that works best for your organization.

[Image: an employee working from home on a laptop]

Desktop Sharing

Remote PC access methods, such as desktop sharing, connect a remote computer to the host computer from a secondary location outside of the office. This setup means the operator has the ability to access local files on the host computer as if they were physically present in the office.

By logging in to third-party applications, an employee can turn a portable device into a display to access data on their office computer.

Even though the benefit of direct access exists, this kind of software carries a high risk of exposing the company’s internal network to danger because it creates an additional end-point for external threats to access the business’ local area network.

To combat potential risk, not only does the organization have to encrypt its firewalls and communications, but the employee's computer also requires the same level of encryption. Depending on the size of your business, this option may be too costly.

Applications such as LogMeIn, TeamViewer, and GoToMyPC provide this type of service.

Virtual Private Network

A virtual private network (VPN) is software that creates a secure connection over the internet by encrypting data. Through the process of using tunneling protocols to encrypt and decrypt messages from sender to receiver, remote workers can protect their data transmissions from external parties.

Most commonly, remote workers will use a remote access VPN client to connect to their organization’s VPN gateway to gain access to its internal network, but not without authenticating first. Usually, there are two choices when using VPNs: IP Security (IPsec) or Secure Sockets Layer (SSL).

IPsec VPNs are manually installed and configured on the remote device. They will require the operator to input details such as the gateway IP address of the target network as well as the security key to gain access to the corporate network.

SSL VPNs are newer and easier to install. Instead of manually installing the VPN, the network administrator publishes the VPN client to the company firewall and provides it for public download. Afterward, the employee can download the VPN client from a target web page.

The drawback of a VPN connection is any remote device that uses a VPN has the possibility of bringing in malware to the network it connects to.

If organizations plan to use VPNs for remote work, it's in their best interest to ensure that employees' remote devices comply with their security policies.

VPN installation varies based on operating system and type, but it is quite simple to do.

Direct Application Access

The lowest risk option for remote work is directly accessing work applications. Instead of accessing an entire network, employees can remotely work within individual applications on the network.

Using this method to work, there's little risk of exposing a company's internal network to cyber predation. Because access is granular and limited to perimeter applications on the network's infrastructure, the attack surface for data breaches is small.

Direct application access highly limits the risk of bad actors; in the same vein, it constricts work to the confines of one application. With little connection to all the data on the company’s network, the amount of work an employee is capable of pales in comparison to the aforementioned remote access methods.

3. Use Encryption

As important as it is to choose an access method for your online workers, it’s equally important those methods use encryption to secure remote employees’ data and connections.

Simply put, encryption is the process of converting data into code or ciphertext. Only those who possess the key or cipher can decrypt and use the data.

Encryption software is an added layer of protection for businesses and remote workers. For instance, if a remote employee’s computer is lost or misplaced, and a malicious actor recovers it, encryption software is the first line of defense in deterring unauthorized access.

Advanced Encryption Standard

As it stands, most businesses have a security protocol to use the Advanced Encryption Standard (AES) to secure data due to its compatibility with a wide variety of applications. AES uses symmetric key encryption, meaning the sender and receiver use the same key to encrypt and decrypt data. Its benefit over asymmetric encryption is speed. Look for encryption software that uses AES to secure company data.
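To illustrate only the symmetric-key idea, here is a toy cipher in which the same key both encrypts and decrypts. It is deliberately trivial and NOT secure; real systems should use AES through a vetted cryptography library:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key. NOT secure --
    for illustrating the shared-key round trip only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret-key"
ciphertext = xor_cipher(b"quarterly report", key)
plaintext = xor_cipher(ciphertext, key)  # the same key reverses the operation
print(plaintext)  # b'quarterly report'
```

The round trip is the essence of symmetric encryption: whoever holds the shared key can both protect and recover the data, which is also why key distribution is the scheme's main operational challenge.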

End-to-end Encryption

When it comes to email and general communication software, look for applications that use end-to-end encryption; data protected this way cannot be read in transit as long as the two end-points remain secure.

[Image: diagram of end-to-end encryption for employees]

4. Implement a Password Management Software

Since most data breaches occur due to the use of illegally acquired credentials, password management software is an invaluable solution to remote work security.

Random Password Generation

Password management software does vastly more than just store passwords; it can also generate and retrieve complex, random password combinations that it stores in an encrypted database. With this capability, businesses can eliminate the reuse of the same or similar passwords.

Reusing similar passwords has far-reaching consequences. For example, if a bad actor obtains your username and password, they can try those credentials as logins for other applications or web properties. Suffice it to say, humans tend to reuse passwords, with or without small variations, due to our limited memory capacity. Unique, strong passwords prevent this from ever happening, along with the rabbit hole of consequences that follows.
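As a sketch, Python's standard secrets module can generate the kind of random passwords a manager stores; the character set and length here are arbitrary choices:

```python
import secrets
import string

# Full printable character set; adjust to any site-specific restrictions.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'r#Q9v...' -- a new value on every call
```

The secrets module draws from the operating system's secure random source, unlike the general-purpose random module, which is predictable and unsuitable for credentials.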

Automated Password Rotation

Additionally, password management software can entail automated password rotation. As the name suggests, passwords are constantly reset to limit the time of potential use. By decreasing the lifespan of a password, sensitive data becomes less vulnerable to attack.

One-time-use Credentials

Another strategy you can utilize to protect your data with passwords is to create one-time-use credentials. To enact one-time-use credentials, create a log of passwords in a spreadsheet acting as a “safe.” When you need a single-use password for business reasons, have the user mark the password in the spreadsheet as “checked out.” Upon completion of the task, have the user check the password back in and retire it.
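The check-out/retire flow above could be sketched like this, with a dictionary standing in for the spreadsheet "safe" (the credential labels are hypothetical):

```python
# Minimal model of one-time-use credential bookkeeping.
safe = {"db-maint-01": "available", "db-maint-02": "available"}

def check_out(label: str) -> None:
    """Mark a credential as in use; refuse if it isn't available."""
    if safe.get(label) != "available":
        raise ValueError(f"{label} is not available")
    safe[label] = "checked out"

def retire(label: str) -> None:
    """Single-use: once the task completes, the credential is never reused."""
    safe[label] = "retired"

check_out("db-maint-01")
retire("db-maint-01")
print(safe)  # 'db-maint-01' is now retired; 'db-maint-02' is still available
```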

5. Apply Two-factor Authentication

Authenticating the identity of a user is an essential aspect of access control. To gain access, typically, one would require a username and password. With two-factor authentication, you can increase remote work security by creating two requirements necessary for login instead of one. Essentially, it creates an added layer of login protection.

Two-factor authentication uses two pieces of information to grant access: credentials such as a username and password in conjunction with either a secret question or a PIN code sent to the user's phone or email. This method makes it hard for malicious actors to access systems, as it's unlikely they will have access to both pieces of information.
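One common second factor is the time-based one-time password (TOTP) used by authenticator apps. The following is a rough sketch of the RFC 6238 scheme, with a made-up enrollment secret; production systems should use a maintained TOTP library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Sketch of a time-based one-time password (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # current 30-second window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The secret is shared once at enrollment; afterwards, the user's device and
# the server independently derive the same short-lived code.
secret = base64.b32encode(b"enrollment-secret").decode()
print(totp(secret))  # a 6-digit code that changes every 30 seconds
```

Because the code depends on both the shared secret and the current time window, a stolen password alone is not enough to log in.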

It is recommended businesses adopt this security measure for system log-ins.

6. Employ the Principle of Least Privilege

An effective method to mitigate security risk is to limit the privileges of your workers.

Network security privileges come in three flavors: super users, standard users, and guest users, with diminishing privileges in that order. Guest users have no bearing in this discussion, however.

Superusers have full access to system privileges. They can issue changes across a network, such as installing or modifying software, settings, and user data. It is when superuser accounts fall into the wrong hands that calamity occurs on the largest scale. Depending on the operating system, superusers go by different names: administrator accounts on Windows systems and root accounts on Linux or Unix systems.

The second user account of note is the standard user, also known as the least privileged user, and it has a limited set of privileges. This restricted account is the one you want your workers to use most of the time, especially if they don’t belong in your IT department.

As a precaution, we recommend having all employees use standard user accounts for routine tasks. Only give superuser privileges to trusted members of your IT team, and have them use those accounts only to perform administrative duties when absolutely necessary. This approach, known as the principle of least privilege, dramatically reduces the risk of a severe data breach by limiting excess access.

Remove Orphaned Accounts

Orphaned accounts are problematic because they are old user accounts containing data such as usernames, passwords, and emails. They generally belong to former employees who no longer have any connection to the company; those employees may have moved on, but their accounts might still be on your network and remain accessible.

The problem is that orphaned accounts are easy to overlook if your organization doesn't know they exist. If external or internal threats find orphaned accounts on your network, they can use them to escalate privileges in what are known as pass-the-hash (PtH) attacks. These insidious attacks leverage low-level credentials to gain entry into your network and aim to steal the password hash of an admin account; once stolen, hackers can reuse the hash to unlock administrative access rights.

The best way to find and remove orphaned accounts, and any potential threats, is to use a privileged access management solution. These tools help to locate and remove lingering accounts.

7. Create Employee Cybersecurity Training

Internal personnel represent a large share of the danger facing a company's network security. In fact, just over one-third of all data breaches in 2019 occurred due to a malicious or negligent employee.

That doesn’t have to be the case. Instead, businesses can alleviate the danger of insider threats by cultivating a security culture through training employees on cybersecurity best practices.

[Image: multiple employees working online]

Physical Security of Devices

To begin, secure remote employees by encouraging them to physically lock their computers when traveling. If there's no physical access to their device, the chances of foul play remain low. Secondly, when employees work in public locations, instruct them to be aware of onlookers when typing in sensitive information, such as logins or passwords. This practice, called “shoulder surfing,” is more effective than it seems.

Instruct employees to always log-off or shut down their computers when not in use. Leaving a computer on that is not password-protected is as effective for system entry as any malware attack.

Lastly, if passwords get written down on paper, have your workers rip-up these papers instead of merely throwing them in the trash.

Safe Internet Protocols

If your business is unable to provide laptops or computers with internet restriction applications to remote staff, you can set guidelines for best practices in safe browsing, installing pop-up blockers, and downloading of trusted applications for work.

Social Engineering Attacks

Malicious actors that use human psychology to trick people into giving sensitive information are called social engineers. These social engineering attacks come in multiple forms; however, the most common are called phishing attacks.

Hackers design these attacks to mislead employees to a fake landing page to steal information or to install malware that compromises network security. Most commonly, phishing attacks arrive as unsolicited emails. Therefore, train staff never to open unsolicited emails or click unknown links in messages, and to beware of attachments.

Secure Your Remote Workforce

In a globally decentralized business landscape, malicious actors will continually present a risk to business network security. With this danger in mind, businesses must take preventative measures to secure remote work for their employees or suffer the consequences. For more in-depth instruction, watch our expert present more on Infrastructure Security for Remote Offices:

No matter the size of your business, there are affordable solutions you can exercise to protect your livelihood. If you need help in determining which option is best for your business, enlist the help of our experts today for a consultation. Hear one of our experts speak about the importance of Keeping a Tight Grip on Office 365 Security While Working Remotely.

Furthermore, learn about vulnerability assessment to complete the process of securing your network.

5 Automated Security Testing Best Practices https://devtest.phoenixnap.com/blog/automated-security-testing-best-practices Mon, 06 Apr 2020 23:53:14 +0000 https://devtest.phoenixnap.com/blog/?p=76442

Tech companies suffered countless cyberattacks and data breaches in 2019 due to compromised applications. Security defects in code are now common occurrences because of rapid software development; conducting traditional security tests alone does not provide foolproof protection against such attacks.

In the software world, there has never been a better time to integrate application security tools into the Software Development Life Cycle (SDLC), mainly to support development teams with regular, continuous security testing.

What is Automated Security Testing?

Automated security testing is the practice of using tools to reveal potential flaws or weaknesses during software development. It occurs throughout the software development process and does not negatively affect development time. The automated security testing process ensures that the applications you are developing deliver the expected results and reveals programming errors early on.

[Image: automated testing stacks]

Before we go further, do you know that almost 40% of all significant software testing is now automated?

Despite this, a significant amount of testing nowadays is conducted manually and at the development cycle’s final stages. Why? Because a large number of developers at many companies are not well-equipped to develop automated test strategies. The advantage of automated testing, whether for internal development or production, is that it reveals potential weaknesses and flaws without slowing development.

DevSecOps

DevSecOps refers to an emerging discipline in this field. As software companies branch into new sectors such as wearables and IoT, there is a need for a thorough audit of all the current tools to combat security issues that may arise during the development process.

In this article, we are listing the general process and best practices of automated security testing.

  • Conducting a Software Audit:

    The first step in automated security testing should begin with a complete audit of the software. During the audit, companies can quickly discover any significant risks emerging from the product. It is also the best way to integrate automation seamlessly into a client’s current workflow.

  • Seeking out Opportunities for Automation:

    Over the past few years, companies have faced a strong push toward the automation of routine, repetitive, and mundane tasks, and this wave of automation has reached the software testing world as well. In general practice, a few primary factors determine whether a company should automate a specific task:

If the tasks are straightforward: The most basic factor is the simplicity of the task. The automation process should start with the simplest tasks available and slowly move toward covering more complex tasks. In companies, all complex tasks, at some point, still need human interaction. Some of the simplest tasks include file and database system interactions.

If the tasks are repetitive and mundane: Automation is also ideal for those frequent tasks that are mundane and repetitive. With automated testing processes, you can also repeat a multitude of programmed actions to ensure the program’s consistency.

If the process is data-intensive: Automation is also helpful for combing through large volumes of data at once in an efficient and timely manner, making it ideal for data-intensive processes. To ensure the correct use of data, testers can also use special automation tools to perform tests with even overwhelming sets of data.

Companies usually perform automated testing on some specific areas of software testing. Those areas include:

  • Tools for code analysis: Code analysis tools can secure DevOps efforts by automatically scanning code and identifying any vulnerabilities within it. As a result, software teams receive invaluable information while they work and identify problems before the quality assurance team does.
  • Scanning for appropriate configurations: Certain software tools can ensure the correct configuration of applications for specific environments, such as mobile or web-based environments.
  • Application-level testing: During application-level testing, scanners such as OWASP ZAP and Burp Intruder can also ensure that applications are not carrying out any malicious actions.

[Image: automated security testing vs manual penetration testing]

Bringing the Team on Board

Software teams are traditionally reluctant to integrate automation into their testing process. Why? Apart from the fear of change, the biggest reason is a mistaken perception of the results’ accuracy. Many developers also consider automated testing more costly and time-consuming.

Automated security testing is NOT a replacement for manual testing in terms of accuracy. It is only a practice to automate the most mundane, tedious, and repetitive tasks in the testing processes.

Some issues that automation cannot catch do exist. These are risks where a human must supply the logic a computer would need to see the flaw; for example, a system that gives every user permission to freely modify and edit all files.

An automated system would have no way of knowing what the intended behavior is, nor would it have any idea of understanding the risk that this implies. This is where humans are introduced to the process.

It’s also why automated security testing should not replace manual testing, which is the only way to ensure thoroughness and accuracy.

Instead, it’s intended to automate the most tedious, mundane, and repetitive tasks associated with testing. This frees the programming team to spend more time testing the areas of the solution that require manual testing, such as the program’s internal logic.

Another common issue with software teams is overestimating the time required to develop an automated process. Modern software testing systems are not overly expensive or time-consuming, owing to the number of frameworks and APIs available. The key is to find out what works for your organization, which will ultimately save time, money, and resources.

Selecting the Right Automation Tools

When choosing to automate the software testing process, developers have a myriad of options, both commercial and open-source. While open-source solutions are robust and well-maintained, they sometimes lack the advanced technology or customer service that comes with a commercial solution. Some of those tools are:

  • Contrast Security: Contrast Security is a runtime application security tool that runs inside applications to identify any potential faults.
  • Burp Intruder: Burp Intruder is an infrastructure scanner, used to ensure whether applications are interacting correctly with the environment.
  • OWASP ZAP: OWASP ZAP is an infrastructure scanner which is open-source in nature. It functions similarly to Burp Intruder.
  • Veracode: Veracode refers to a code analysis tool to find vulnerabilities within an application structure.
  • BDD Security: BDD Security is a test automation framework where users can employ natural language syntax to describe security functions as features.
  • Mittn: Mittn is an open-source test automation framework that uses the Python programming language.
  • Microsoft Azure Advisor: Microsoft Azure advisor is a cloud-based consultant service that provides recommendations according to an individual’s requirements.
  • Gauntlt: Gauntlt is a test automation framework, ideal for those accustomed to Ruby development.

Depending on its automation strategy, a company may have to create custom scripting for its automation processes. Custom scripts can make testing of the company’s network more lightweight, customized, and optimized.

Custom scripting has the benefit of being tailored to your network security threats. However, it can be a hefty-cost solution, also requiring an internal development team. To make sure you choose the right solution for your needs, consider following the process in the image below:

[Image: automated security and testing process]

Integrating Automated Testing Processes

The integration of automated testing processes into a company’s product pipeline is an iterative process. During the software development phase, continuous testing uncovers potential risks and flaws. Processes like these ensure that potential vulnerabilities do not remain unaddressed.

A significant chunk of security-related testing occurs in the later stages of the production cycle, causing issues and delays for the product and the company. If companies test consistently instead, the result is a more thoroughly secured product and fewer last-minute delays before release.

Breaking Large Projects into Smaller Steps

When working with large intensive projects, DevSecOps works well if the project consists of smaller, manageable steps. Instead of automating the entire solution at once, the formation of smaller automated processes within the larger production cycle leads to a better result.

Following this process not only avoids hiccups within the development cycle but also gives developers the time they need to adjust to newer automation standards. To acclimatize developers to the latest standards and to ensure training is in-depth and non-disruptive, introducing new tools one by one is also a good practice.

Checking for Code Dependencies

The days of purely in-house coding are gone; most organizations no longer develop all of their code themselves. They use third-party open-source code in each application, and that code can carry significant vulnerabilities. Organizations should therefore identify their code dependencies and automate checks to ensure that third-party code has no known vulnerabilities.
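As a sketch of such an automated check, the snippet below matches pinned dependencies against a known-vulnerable list. The advisory entries and version pins are hypothetical; in practice you would pull advisories from a real feed or run a dedicated scanning tool such as pip-audit:

```python
# Hypothetical advisory data: (package name, vulnerable version) pairs.
VULNERABLE = {("requests", "2.19.0"), ("urllib3", "1.24.1")}

def parse_pins(lines):
    """Parse simple 'name==version' pins from requirements-style lines."""
    pins = []
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.lower(), version))
    return pins

requirements = ["requests==2.19.0", "flask==2.0.1", "# pinned for CI", "urllib3==1.26.5"]
flagged = [pin for pin in parse_pins(requirements) if pin in VULNERABLE]
print(flagged)  # -> [('requests', '2.19.0')]
```

Wired into a CI pipeline, a check like this fails the build whenever a dependency with a known vulnerability is introduced.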

Testing against Malicious Attacks

Due to the rising rate of cybercrime, applications should go through rigorous testing to withstand denial-of-service (DoS and DDoS) and other malicious attacks. Broken solutions reveal particular vulnerabilities, which is why it is essential to conduct stringent tests on the application under challenging circumstances.

Organizations are seeing an increasing number of malicious attacks. These attacks may focus on any aspect of a client’s organization that is accessible from outside of the network. By regularly testing your application under particularly strenuous circumstances, you can secure it through various scenarios.

Training Development Team in Best Practices

In-depth training of programmers is also vital to prevent already-identified vulnerabilities and flaws from recurring in later production cycles. It is a proactive approach to making applications more inherently secure. This simple approach not only improves the consistency of the product but also avoids costly modifications if flaws are discovered at a later stage.

As you identify vulnerabilities and flaws within your software solutions, programmers will need the training to avoid these issues in further production cycles.

Though the process of identifying issues is automated, the problems that are found should still be logged for the benefit of upcoming projects and future versions of the product. By training programmers proactively, an organization can, over time, make their applications more inherently secure.

Not only does this improve the consistency of the end product, but it also avoids costly modifications when flaws are discovered and require mitigation. Through training and company-wide messaging, developers can learn to code more securely.

If developers are not apprised of issues, the same mistakes will continue to happen, and automated testing will not be as effective as it could be. Automated testing isn’t just cheaper and faster than manual testing; it’s also more consistent. Every test runs identically on each application and in each environment.

By automatically testing applications and identifying lax policies, the software life cycle for both on-premise and cloud-based web applications becomes shorter.

For years, organizations have tested their software security manually, either in-house or through outside professionals. By implementing automated testing as a standard practice, however, they can streamline their product deployment process to a high degree and reduce the associated overhead. Regular training ensures that software teams incorporate automation best practices into their processes.

Choosing Automated over Manual Testing

Automated testing is not only cheaper and faster than manual testing, but it is also much more consistent. It doesn’t make mistakes as each test runs identically on different applications and environments, and that can save you both time and money. Keeping manual tests in place only where human assessment is needed also conserves your company’s human resources.

To implement automated testing, organizations will require a large-scale effort to promote and apply best practices throughout their projects, including training their software teams so they can incorporate it into their respective processes seamlessly. Need more detailed advice on how to automate security testing? Reach out to one of our experts today.

]]>
What is DevSecOps? Best Practices for Adoption https://devtest.phoenixnap.com/blog/what-is-devsecops-best-practices Tue, 31 Mar 2020 23:53:50 +0000 https://devtest.phoenixnap.com/blog/?p=77062

Software applications are the backbone of many industries. They power many businesses and essential services. A security lapse or failure in such an application can result in financial loss, as well as a tarnished reputation. In some extreme cases, it can even result in loss of life.

What is DevSecOps?

DevSecOps is a methodology that integrates security practices within the DevOps process. It creates and promotes a collaborative relationship between security teams and release engineers based on a ‘Security as Code’ philosophy. DevSecOps has gained popularity and importance, given the ever-increasing security risks to software applications.

DevSecOps integrates security within your product pipeline in an iterative process. It thoroughly incorporates security with the rest of the DevOps approach.

devsecops vs devops comparison diagram

As teams develop software, testing for potential security risks and flaws is critical. Security teams must address issues before the solution can move ahead. This iterative process will ensure that vulnerabilities do not go unaddressed.

As DevSecOps is still a new and emerging discipline, it may require some time to gain mainstream acceptance and integration. A significant share of security tests still take place late in the production cycle, because security is usually one of the last features considered in the development process. This delay can cause major issues for companies and their products: if you keep security at the end of the development pipeline, security issues that surface near launch send you back to the start of a long development cycle.

When security concerns are raised late in the production cycle, teams will have to make significant changes to the solution before rolling it out. An interruption in production will ultimately lead to a delay in deliverables. Thus, ignoring security issues can lead to security debt later in the lifecycle of the product. This is an outdated security practice and can undo the best DevOps initiatives. So the DevSecOps goal is to begin the security team’s involvement as early as possible in the development lifecycle.

DevSecOps implementation in Cloud

The DevSecOps method requires development and operations teams to do more than just collaborate. Security teams also need to join in at an early stage of each iteration to ensure overall software security, from start to finish. You need to think about infrastructure and application security from the start.

Not only does consistent testing lead to secure code, but it also avoids last-minute delays by spreading the work predictably and consistently throughout the project. Through this process, organizations can better achieve their deadlines and ensure that their customers and end-users are satisfied.

IT security needs to play an integrated role in your applications’ full life cycle. You can take advantage of the responsiveness and agility of a DevOps approach by incorporating security into your processes.

These are the primary areas in which software security testing is being adopted:

Application security testing

As software applications run, scanning solutions can examine them to ensure that malicious actions are not being taken. Scanners such as Burp Intruder and automated OWASP ZAP scans will test and examine applications to ensure they aren’t taking steps that could be perceived as malicious by end-users.

Scanning for the appropriate configurations

Software tools can be designed to ensure that the application is configured correctly and secured for use in specific environments; the Microsoft Azure Advisor tool, for example, covers cloud-based infrastructure. Many automated testing tools are designed to operate in a particular environment, such as a mobile or web-based environment. During development, these tools can verify that the software is being built to the appropriate standards.

Code analysis tools

Code analysis tools can strengthen DevOps security efforts by automatically scanning the code and identifying potential and known vulnerabilities within the code itself. This can be invaluable information as the software teams work, as they will be able to identify problems before they are caught in quality assurance. This can also help them in developing better coding habits.
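To make the idea concrete, here is a toy version of such a scan in Python. The two rules and their names are invented for illustration and are far simpler than the rule sets real code analysis tools apply.

```python
import re

# Illustrative rules only: flag a couple of call patterns that static
# analyzers commonly treat as risky. Rule names are made-up examples.
RULES = {
    "use of eval": re.compile(r"\beval\s*\("),
    "shell=True subprocess": re.compile(r"shell\s*=\s*True"),
}

def scan_source(source):
    """Return (line_number, rule_name) pairs for every rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

sample = "x = eval(user_input)\nprint('ok')\n"
print(scan_source(sample))  # [(1, 'use of eval')]
```

Running a pass like this on every commit surfaces problems before quality assurance sees the code, which is exactly the feedback loop described above.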

DevSecOps Best Practices

DevSecOps integrates security into the development lifecycle, but it is not possible to do so hastily and without planning. Include it in the design and development stages. Companies can work to change their workflows by following some of the best practices of the industry.

Get your teams on board

It may seem trivial, but getting all the required teams working together can make a huge difference in your DevSecOps initiative. Development teams are familiar with the typical process of handing off newly released iterations to Quality Assurance teams. This isolated behavior is the norm in companies that have each team in a silo.

Companies should eliminate silos and bring development, operations, and security teams together. Unity across teams will enable the experts in these groups to work together from the beginning of the development process and foresee any challenges.

Threat modeling is one way to plan for and identify possible security threats to your assets. You examine the types and sensitivities of your assets and analyze the existing controls in place to protect them. By identifying the gaps, you can address them before they become an active problem.

These types of assessments can help identify flaws in the architecture and design of your applications that other security approaches might have missed.
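The gap analysis described above can be sketched in a few lines. The asset names, sensitivity levels, and required controls here are all invented examples; a real threat model would draw these from your own inventory and policy.

```python
# Controls a policy might require at each sensitivity level (invented).
REQUIRED_CONTROLS = {
    "high": {"encryption", "access-logging"},
    "medium": {"access-logging"},
    "low": set(),
}

def find_gaps(assets):
    """Map each asset to the sorted list of controls it is missing.

    `assets` maps a name to {"sensitivity": level, "controls": set(...)}.
    """
    gaps = {}
    for name, info in assets.items():
        missing = REQUIRED_CONTROLS[info["sensitivity"]] - info["controls"]
        if missing:
            gaps[name] = sorted(missing)
    return gaps

assets = {
    "customer-db": {"sensitivity": "high", "controls": {"encryption"}},
    "static-site": {"sensitivity": "low", "controls": set()},
}
print(find_gaps(assets))  # {'customer-db': ['access-logging']}
```

Even a simple table like this turns the threat-model conversation into a concrete checklist the three teams can work through together.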

The first step in implementing a DevSecOps culture is to educate your teams that security is a shared responsibility of teams from all three disciplines. Once development and operations teams take on the shared responsibility of securing code and infrastructure, DevSecOps becomes a natural part of the development cycle.

trainings

Many DevOps teams still have the misconception that security assessment causes delays in software development and that there should be a trade-off between security and speed. DevSecOps events and training are excellent opportunities to rid teams of these misconceptions. Real-life examples and case studies can help to get buy-in from teams and management alike.

Educate your developers

Developers are almost single-handedly responsible for the quality of the code they develop. Coding errors are the cause of many security vulnerabilities and issues. But companies pay little attention to their developers’ training and skill enhancement when it comes to producing secure code.

Educating them in the best practices of coding can directly contribute to improved code quality. Better code quality leaves less room for security vulnerabilities. Security teams will also find it easier to assess and remedy any vulnerabilities in high-quality code.

‘Common software weaknesses’ is another area with which most developers are unfamiliar. Teams can use online tools like the Common Weakness Enumeration list as a reference. Such listings are useful to developers who are less familiar with security practices.

Security teams, as part of their commitment to DevSecOps, must undertake to train development and operations teams regarding security practices. Such training will enable developers to integrate security controls into the code.

Compliance (HIPAA, GDPR, PCI) is vital for applications in industries such as finance and medicine. Development teams must be familiar with these standards and keep in mind the requirements to ensure compliance.

Verify code dependencies

Very few organizations today develop their code all in-house. It is more likely that each application will be built on a large amount of third-party, open-source code.

Despite the risk, many companies use third-party software components and open-source software in applications instead of developing from scratch, yet they lack automatic identification and remediation tracking for the bugs and flaws that may exist in that software. Due to the pressure of meeting customer demands, developers rarely have the opportunity to review such code or its documentation.

This is where automated testing plays a significant role: regularly testing open-source and third-party components is a core requirement of the DevSecOps methodology. It’s critical to find out whether open-source usage is causing any weaknesses or vulnerabilities in your code, and how it impacts dependent code. Doing so helps you identify issues and reduce the mean time to resolution.

Third-party code can represent some significant vulnerabilities. Organizations will need to identify their code dependencies and automate the process of ensuring that their third-party code has no known vulnerabilities and is being updated as it should be throughout the process of creation.

There are utilities available that can continuously check a database of known vulnerabilities to quickly identify any issues with existing code dependencies. This software can be used to swiftly mitigate third-party threats before they are incorporated into the application.

devsecops model

Enhance Continuous Integration with DevOps Security

DevOps teams typically use Continuous Integration (CI) tools to automate parts of the development cycle, such as testing and building. These are routine tasks that teams need to repeat with each release.

Enhancing Continuous Integration processes and tools with security controls ensures that security practitioners identify issues before validating builds for Continuous Delivery (CD). CI also reduces the time spent on each iteration.

For example, using static application security testing (SAST) on daily builds lets you scan only the instances or items of interest in the code changes committed that day.
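One way to sketch that change-focused approach: extract only the lines added in a unified diff and run checks against those, rather than the whole code base. The diff text and the hard-coded-secret check below are invented examples.

```python
def added_lines(diff_text):
    """Yield lines added in a unified diff ('+' lines, minus the '+++' header)."""
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            yield line[1:]

def scan_changes(diff_text):
    """Flag newly added lines that look like hard-coded credentials."""
    return [line for line in added_lines(diff_text) if "password" in line.lower()]

diff = """\
--- a/app.py
+++ b/app.py
@@ -1,2 +1,3 @@
 import os
+PASSWORD = "hunter2"
 main()
"""
print(scan_changes(diff))  # ['PASSWORD = "hunter2"']
```

In practice the diff would come from the version-control system (for example, the day’s merged commits), and the checks would be a full SAST rule set rather than a single pattern.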

DevSecOps teams need to use vulnerability assessment scanning tools to ensure that they identify security issues early in the development cycle. They can use pre-production systems for this type of testing.

Simplify your code

Simpler code is easier to analyze and fix. Developers will find debugging their code much easier when it is simple and easier to read. Simple and clean code will also lead to reduced security issues. Developers will be able to quickly review and work on each other’s code if it is simple.

More significantly, security teams will be able to analyze simple code more efficiently, so releasing code in smaller chunks allows them to identify issues sooner and with less effort. Choosing one section to analyze and proving it works before moving on to the next streamlines the process, reduces the probability of security vulnerabilities, and leads to more robust applications.

Security as code

‘Security as Code’ is the concept of including security best practices into the existing DevOps pipeline. One of the most critical processes that this concept entails is the static analysis of code. Security practitioners can focus testing on code that has changed, in contrast to analyzing the entire code base.

Implementing a good change management process will allow members of all teams to submit changes and improvements. This type of process will enable security teams to remedy security issues directly without disrupting the development cycle.

Automation is another essential aspect of ‘security as code.’ Teams can automate security tasks to ensure that all iterations are verified consistently. This uniformity will help to reduce or eliminate the presence of known security issues. Automation can significantly reduce the time spent on troubleshooting and fixing security issues later in the development cycle.

Put your application through security checks

Your application should be subject to regular testing. It should also undergo more rigorous testing, such as resilience testing against denial-of-service attacks.

There may be vulnerabilities in a solution that only become evident when that solution breaks. These are still genuine problems that the product owner may face.

Organizations are seeing an increasing number of malicious attacks. These attacks may focus on any aspect of a client’s organization that is accessible from outside of the network.

By testing your application under particularly strenuous circumstances, you can secure it through various scenarios.

How to Implement DevSecOps?

Each of the teams involved in DevSecOps needs to contribute towards its success.

devsecops-implementation

Development

Developers perform an essential role in the DevSecOps process. Developers must be open to the involvement of operations and security teams. The participation of these teams from an early stage of the design and development process will facilitate a secure DevOps transformation and make applications overall more secure.

Training developers in security best practices is essential to success. Companies can supplement this training by hiring developers who have experience in DevSecOps, so that they can guide the rest of the team.

Companies must build a culture where developers are aware that developing security is a shared responsibility between them and security teams. Security practitioners can only recommend security practices. It is the responsibility of developers to implement them.

Operations

The contribution of the operations team is similar to that of the development team. Operations teams must collaborate with security practitioners. They are responsible for subjecting infrastructure and network configurations to security tests.

Security teams will also need to train operations teams regarding security practices to make DevSecOps successful. Operations and security teams, in collaboration, will then set up both manual and automated security tests to ensure that network configurations remain compliant.

Security

DevSecOps is as much of an adjustment for security teams as it is for development and operations teams. Security teams have to gradually increase their involvement while cooperating with development and operations teams.

Security practitioners should start with the concept of ‘shifting left.’ That is, collaborating with development and operations teams to move security reviews and automated tests towards the beginning of the software development lifecycle. This process of shifting left is essential to reduce the chances of unforeseen security issues popping up later.

Development and operations teams usually see security tests as a tedious and complicated task. So the duty of security teams does not stop at developing security tests but extends to involving and training the other teams.

DevSecOps is the Future

The DevSecOps methodology has gained momentum due to the high cost of correcting security issues and of accumulated security debt. As Agile teams release applications more frequently, security testing becomes more crucial. We hope some of the best practices mentioned in this article will help your company transition from a DevOps to a DevSecOps approach.

If you wish to learn more about how to adopt DevSecOps, contact our experts today.

]]>
17 Best Vulnerability Assessment Scanning Tools https://devtest.phoenixnap.com/blog/vulnerability-assessment-scanning-tools Mon, 23 Mar 2020 14:53:48 +0000 https://devtest.phoenixnap.com/blog/?p=76439

Vulnerability scanning, or vulnerability assessment, is a systematic process of finding security loopholes in a system and addressing the potential vulnerabilities.

The purpose of vulnerability assessment is to prevent unauthorized access to systems. Vulnerability testing preserves the confidentiality, integrity, and availability of the system. Here, the system can be any computer, network, network device, piece of software, web application, cloud service, and so on.

vulnerability assessment process flowchart

Types of Vulnerability Scanners

Vulnerability scanners go about their jobs in different ways. We can classify them into four types based on how they operate.

Cloud-Based Vulnerability Scanners

Used to find vulnerabilities within cloud-based systems such as web applications, WordPress, and Joomla.

Host-Based Vulnerability Scanners

Used to find vulnerabilities on a single host or system such as an individual computer or a network device like a switch or core-router.

Network-Based Vulnerability Scanners

Used to find vulnerabilities in an internal network by scanning for open ports. The tool then examines the services running on those open ports to determine whether vulnerabilities exist.
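A bare-bones sketch of the open-port check that such scanners automate might look like this; real network scanners go much further, fingerprinting the service behind each open port to decide whether a vulnerability exists.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Results depend on what is actually listening on the machine, e.g.:
# scan_ports("127.0.0.1", [22, 80, 443])
```

Note that `connect_ex` performs a full TCP connect, which is slow and noisy compared with the SYN-scan techniques dedicated tools use, but it illustrates the principle.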

Database-Based Vulnerability Scanners

Used to find vulnerabilities in database management systems. Databases are the backbone of any system storing sensitive information. Vulnerability scanning is performed on database systems to prevent attacks like SQL Injection.
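The sketch below shows why database scanners flag string-built SQL: the same attacker-controlled input behaves very differently when spliced into the query text versus bound as a parameter. The table and data are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL text, so the quote
# trick turns the WHERE clause into a tautology that matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: the driver binds the value, so it is compared as a literal string.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection matched every row
print(safe)    # []           -- treated as a (nonexistent) literal name
```

Parameterized queries are the standard remediation a scanner report will point you toward when it detects injectable inputs.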

man using vulnerability assessment methodology

Vulnerability Scanning Tools

Vulnerability scanning tools detect vulnerabilities in applications in many ways. Code analysis tools look for coding bugs; audit tools can find well-known rootkits, backdoors, and trojans.

There are many vulnerability scanners available on the market. They can be free, paid, or open-source. Most of the free and open-source tools are available on GitHub. Deciding which tool to use depends on a few factors, such as vulnerability type, budget, and how often the tool is updated.

1. Nikto2

Nikto2 is an open-source vulnerability scanning tool that focuses on web application security. Nikto2 can check for around 6,700 dangerous files that cause issues for web servers, and it reports outdated server versions. On top of that, Nikto2 can alert on server configuration issues and performs web server scans in minimal time.
Nikto2 doesn’t offer countermeasures for the vulnerabilities it finds, nor does it provide risk assessment features. However, it is a frequently updated tool, which enables broader coverage of vulnerabilities.

2. Netsparker

Netsparker is another web application vulnerability tool with an automation feature available to find vulnerabilities. This tool is also capable of finding vulnerabilities in thousands of web applications within a few hours.
Although it is a paid enterprise-level vulnerability tool, it has many advanced features. It uses crawling technology that finds vulnerabilities by crawling through the application. Netsparker can describe the vulnerabilities it finds and suggest mitigation techniques. Security solutions for advanced vulnerability assessment are also available.

3. OpenVAS

OpenVAS is a powerful vulnerability scanning tool that supports large-scale scans, making it suitable for organizations. You can use this tool to find vulnerabilities not only in web applications and web servers but also in databases, operating systems, networks, and virtual machines.
OpenVAS receives updates daily, which broadens the vulnerability detection coverage. It also helps in risk assessment and suggests countermeasures for the vulnerabilities detected.

4. W3AF

W3AF, the Web Application Attack and Audit Framework, is a free and open-source vulnerability scanning tool for web applications. It provides a framework that helps secure web applications by finding and exploiting their vulnerabilities, and it is known for its user-friendliness. Along with vulnerability scanning, W3AF has exploitation facilities that can be used for penetration testing work as well.
Moreover, W3AF covers a broad collection of vulnerabilities. Teams whose domains are attacked frequently, especially with newly identified vulnerabilities, may find this tool a good choice.

5. Arachni

Arachni is also a dedicated vulnerability tool for web applications. This tool covers a variety of vulnerabilities and is updated regularly. Arachni provides facilities for risk assessment as well as suggests tips and countermeasures for vulnerabilities found.
Arachni is a free and open-source vulnerability tool that supports Linux, Windows, and macOS. Arachni also assists in penetration testing through its ability to cope with newly identified vulnerabilities.

6. Acunetix

Acunetix is a paid web application security scanner (an open-source version is also available) with many functionalities. It can scan for around 6,500 vulnerabilities, and in addition to web applications, it can also find vulnerabilities in networks.
Acunetix provides the ability to automate your scans. It is suitable for large-scale organizations as it can handle many devices. HSBC, NASA, and the US Air Force are a few of the industrial giants that use Acunetix for vulnerability tests.

7. Nmap

Nmap is one of the best-known free and open-source network scanning tools among security professionals. Nmap uses probing techniques to discover hosts in a network and to identify operating systems.
These features help in detecting vulnerabilities in single or multiple networks. If you are new to vulnerability scanning, Nmap is a good place to start.

8. OpenSCAP

OpenSCAP is a framework of tools that assist in vulnerability scanning, vulnerability assessment, vulnerability measurement, and the creation of security measures. OpenSCAP is a free, community-developed, open-source tool. It only supports Linux platforms.
The OpenSCAP framework supports vulnerability scanning on web applications, web servers, databases, operating systems, networks, and virtual machines. Moreover, it provides a facility for risk assessment and support for counteracting threats.

9. GoLismero

GoLismero is a free and open-source tool used for vulnerability scanning. It focuses on finding vulnerabilities in web applications but can scan for vulnerabilities in networks as well. GoLismero is a convenient tool that can take the results provided by other vulnerability tools, such as OpenVAS, combine them, and provide feedback.
GoLismero covers a wide range of vulnerabilities, including database and network vulnerabilities, and it also suggests countermeasures for the vulnerabilities it finds.

10. Intruder

Intruder is a paid vulnerability scanner specifically designed to scan cloud-based storage. Intruder starts scanning as soon as a new vulnerability is released, and its scanning mechanism is automated and constantly monitors for vulnerabilities.
Intruder is suitable for enterprise-level vulnerability scanning as it can manage many devices. In addition to monitoring cloud-storage, Intruder can help identify network vulnerabilities as well as provide quality reporting and suggestions.

11. Comodo HackerProof

With Comodo HackerProof you can reduce cart abandonment, perform daily vulnerability scanning, and use the included PCI scanning tools. You can also utilize the drive-by attack prevention feature and build valuable trust with your visitors. Thanks to Comodo HackerProof, many businesses convert more visitors into buyers.

Buyers tend to feel safer when making a transaction with your business, and you should find that this drives your revenue up. With the patent-pending scanning technology, SiteInspector, you will enjoy a new level of security.

12. Aircrack

Aircrack, also known as Aircrack-ng, is a set of tools used for assessing WiFi network security. These tools can also be utilized in network auditing and support multiple operating systems, such as Linux, OS X, Solaris, NetBSD, Windows, and more.

The tools focus on different areas of WiFi security, such as monitoring packets and data, testing drivers and cards, cracking, and replay attacks. They also allow you to retrieve lost keys by capturing data packets.

13. Retina CS Community

Retina CS Community is an open-source web-based console that will enable you to make a more centralized and straightforward vulnerability management system. Retina CS Community has features like compliance reporting, patching, and configuration compliance, and because of this, you can perform an assessment of cross-platform vulnerability.

The tool is excellent for saving time, cost, and effort when it comes to managing your network security. It features an automated vulnerability assessment for DBs, web applications, workstations, and servers. Businesses and organizations will get complete support for virtual environments with things like virtual app scanning and vCenter integration.

14. Microsoft Baseline Security Analyzer (MBSA)

An entirely free vulnerability scanner created by Microsoft, MBSA is used to test your Windows server or Windows computer for vulnerabilities. The Microsoft Baseline Security Analyzer has several vital features, including checking for missing service packs, security updates, and other Windows updates. It is the ideal tool for Windows users.

It’s excellent for helping you to identify missing updates or security patches. Use the tool to install new security updates on your computer. Small to medium-sized businesses find the tool most useful, and its features help save the security department money. You won’t need to consult a security expert to resolve the vulnerabilities that the tool finds.

15. Nexpose

Nexpose is an open-source tool that you can use at no cost. Security experts regularly use it for vulnerability scanning. All new vulnerabilities are included in the Nexpose database thanks to the GitHub community. You can use this tool with the Metasploit Framework and rely on it to provide detailed scanning of your web application. Before generating the report, it takes various elements into account.

Vulnerabilities are categorized by the tool according to their risk level and ranked from low to high. It’s capable of scanning new devices, so your network remains secure. Nexpose is updated each week, so you know it will find the latest hazards.

16. Nessus Professional

Nessus is a branded and patented vulnerability scanner created by Tenable Network Security. Nessus helps prevent network intrusions by hackers, and it can scan for the vulnerabilities that permit remote hacking of sensitive data.

The tool supports an extensive range of operating systems, databases, applications, and other devices across cloud infrastructure and virtual and physical networks. Millions of users trust Nessus for their vulnerability assessment and configuration issues.

17. SolarWinds Network Configuration Manager

SolarWinds Network Configuration Manager has consistently received high praise from users. It addresses a specific type of vulnerability that many other options do not: misconfigured networking equipment. This sets it apart from the rest. Its primary utility as a vulnerability scanning tool is in validating network equipment configurations for errors and omissions. It can also periodically check device configurations for changes.

It integrates with the National Vulnerability Database and has access to the most current CVEs to identify vulnerabilities in your Cisco devices. It works with any Cisco device running ASA, IOS, or Nexus OS.

Vulnerability Assessment Secures Your Network

If an attack starts by modifying a device’s network configuration, these tools can identify it and put a stop to it. They also assist with regulatory compliance through their ability to detect out-of-process changes, audit configurations, and even correct violations.

To implement a vulnerability assessment, you should follow a systematic process such as the one outlined below.

Step 1 – Begin the process by documenting the assessment, deciding which tool or tools to use, and obtaining the necessary permission from stakeholders.

Step 2 – Perform vulnerability scanning using the relevant tools. Make sure to save all the outputs from those vulnerability tools.

Step 3 – Analyze the output and decide which of the identified vulnerabilities could be a possible threat. You can also prioritize the threats and find a strategy to mitigate them.

Step 4 – Make sure you document all the outcomes and prepare reports for stakeholders.

Step 5 – Fix the vulnerabilities identified.
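The analysis and prioritization in Step 3 can be sketched as follows. The finding IDs, scores, and threshold are invented; a real assessment would typically use the CVSS scores from the scanner’s output.

```python
def prioritize(findings, threshold=7.0):
    """Sort findings by score (highest first) and split out those at or
    above the remediation threshold for immediate attention."""
    ranked = sorted(findings, key=lambda f: f["score"], reverse=True)
    urgent = [f for f in ranked if f["score"] >= threshold]
    return ranked, urgent

findings = [
    {"id": "VULN-2", "score": 9.8},
    {"id": "VULN-1", "score": 4.3},
    {"id": "VULN-3", "score": 7.5},
]
ranked, urgent = prioritize(findings)
print([f["id"] for f in urgent])  # ['VULN-2', 'VULN-3']
```

The ranked list feeds the stakeholder report in Step 4, and the urgent subset drives the fixes applied in Step 5.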

Vulnerability identification and risk assessment

Advantages of Scanning for Vulnerabilities

Vulnerability scanning keeps systems secure from external threats. Other benefits include:

  • Affordable – Many vulnerability scanners are available free of charge.
  • Quick – An assessment takes only a few hours to complete.
  • Automated – Scans can run regularly via the tools’ automation features, without manual involvement.
  • Performance – Vulnerability scanners can perform almost all well-known vulnerability scans.
  • Cost/Benefit – Scanning reduces cost and increases benefit by optimizing how security threats are handled.

Vulnerability Testing Decreases Risk

Whichever vulnerability tool you decide to use, choosing the ideal one will depend on your security requirements and your ability to analyze your systems. Identify and deal with security vulnerabilities before it’s too late.

Take this opportunity now to look into the features provided by each of the tools mentioned, and select one that’s suitable for you. If you need help, reach out to one of our experts today for a consultation.

Learn about more of the best networking tools to improve your overall security.

]]>