Vulnerability Scanning

Tony Mason

API Security, Data Protection, Penetration Testing, Vulnerability Management & SIEM, Vulnerability Scanning

Why scanning more often could deliver surprising benefits you may not have considered.

Can I just scan once per year, like with a penetration test?

Penetration tests are uniquely effective at uncovering highly complex vulnerabilities in web applications: those which may require detailed human awareness and context to detect. However, whilst irreplaceable, penetration tests can also be relatively expensive to deliver, because they require significant time investment by highly skilled human penetration testers. Because of the costs, many organisations may, understandably, conduct them only on an annual or bi-annual basis. However, automated vulnerability scanning (also known as “DAST” or “Dynamic Application Security Testing”) operates on a very different paradigm. A common mistake by those establishing a vulnerability scanning programme for the first time is to apply existing penetration testing schedules to vulnerability scanning without alteration. It is certainly possible to run a vulnerability scan only once per annum, but in doing so many of the benefits that the vulnerability scanning paradigm makes possible are left on the table.

How often should I run vulnerability scans?

When a vulnerability is introduced to a website or service, the clock starts ticking on a window of opportunity for attackers to exploit it before the organisation operating the service notices the vulnerability and remediates it. In cybersecurity this is known as the “attack window” for a vulnerability. The longer the attack window is open, the greater the opportunity for attackers and hence the greater the risk to the organisation.

The key advantage of vulnerability scanning is that it can be executed as often as required. This means it can be leveraged to detect vulnerabilities much sooner, allowing them to be quickly remediated and reducing the time available for attackers to exploit them. For all the strengths of penetration testing, it is not feasible to perform it weekly. A vulnerability introduced one week after a penetration test may go unnoticed for up to a year, until the next penetration test is performed. Vulnerability scanning can help plug this gap: it “fills in” between scheduled penetration tests to uncover many common vulnerabilities almost as soon as they are introduced, reducing the risks to you, your business, and your customers.

Doesn’t running scans more often increase workload and require more resources?

It seems intuitive to assume that running vulnerability scans more often must surely require more, and potentially unmanageable, amounts of resources, including time commitments from already overburdened security teams.

However, this is typically not the case. Vulnerability scans operate very differently to penetration tests: they are “configure once, run forever”. That is, the scanning itself is completely automated. Once a scan profile has been created to define how a scan should be performed, there is no additional burden between executing the scan once and having it automatically re-execute on a repeating schedule as often as required. This can all be delivered at no additional cost, without requiring any manual action or intervention for subsequent scans.
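As a minimal sketch of the “configure once, run forever” idea, the example below models a hypothetical scan profile (the `ScanProfile` class and its fields are invented for illustration, not any particular product’s API) and derives the recurring schedule from it, with no per-run effort:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScanProfile:
    """Hypothetical scan profile: configured once, then reused for every run."""
    name: str
    targets: list
    interval: timedelta  # how often the scan should automatically re-execute

def next_runs(profile: ScanProfile, start: datetime, count: int) -> list:
    """Return the next `count` scheduled start times for a profile."""
    return [start + profile.interval * i for i in range(count)]

profile = ScanProfile(
    name="weekly-webapp-scan",
    targets=["https://app.example.com"],
    interval=timedelta(weeks=1),
)
runs = next_runs(profile, datetime(2024, 1, 1), 4)
```

The point is that changing the cadence from annual to weekly is a one-line change to the profile, not extra ongoing work.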

To understand why this is the case, we will look at some of the benefits most commonly cited by customers who have adopted more frequent vulnerability scanning, and how those benefits are made possible.

Reduction in Workload Volatility

Managing workload for a security team becomes more challenging the greater the volatility in that workload. Preventing those days where everything lands all at once is key to establishing a manageable cadence and rhythm for a team, ensuring consistent performance.

For vulnerability remediation, ask yourself whether you would prefer to receive:

  1. A vulnerability scan performed once a week, with each scan finding two new vulnerabilities, giving you two vulnerabilities to remediate each week; or
  2. A single vulnerability scan performed in the second week of March, delivering a flood of over one hundred vulnerabilities in one overwhelming batch.

Hopefully you’re thinking “Option 1”. Performing vulnerability scanning at a higher cadence means that vulnerabilities are discovered more often but in far smaller numbers: the “delta”, or difference from one scan to the next, is lower the more often scans are run.
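As an illustrative sketch, the scan-to-scan “delta” is just a set difference between the findings of consecutive scans (the finding identifiers below are invented for illustration; real scanners use their own identifiers):

```python
# Findings identified here by (vulnerability name, affected path) pairs.
previous_scan = {("SQL injection", "/login"), ("XSS", "/search")}
current_scan  = {("SQL injection", "/login"), ("XSS", "/search"),
                 ("Open redirect", "/logout"), ("CSRF", "/profile")}

new_findings      = current_scan - previous_scan   # introduced since last scan
resolved_findings = previous_scan - current_scan   # fixed since last scan
```

With weekly scans each delta is a handful of new findings; with one annual scan the “delta” is effectively the whole year’s accumulation in a single batch.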

Business as Usual

Tasks become easier through repetition and familiarity. Performing processes more often not only makes them standard practice but improves performance of the tasks themselves. Where vulnerabilities need to be remediated by other teams, making remediation a standard “business as usual” activity ensures that managers can budget for it: for example, assigning say 5% of a team’s time to vulnerability remediation on an ongoing basis. This is far more palatable for those managing technical teams than having a huge set of vulnerabilities “drop” on their team in one unmanageable package, which would completely derail other delivery commitments. Ensuring that vulnerabilities don’t land in one unmanageable “lump” of work helps keep delivery timescales for other work intact, prevents frustration, and fosters co-operation between teams.

“Little and often” makes the vulnerability management process business as usual, rather than an extraordinary demand for resources on an irregular basis. Vulnerability management becomes part of the status quo, and providing regular vulnerability reports from frequent scans, each with a small delta to the last, helps everyone.

Reducing the Attack Window Reduces Risk

When a new vulnerability is reported, it triggers a race against the clock between the various actors involved. From an organisation’s point of view, teams need to roll out the necessary security patches to rectify the flaw as soon as the vendor supplies them. At the same time, attackers will start developing exploits with malicious code that can take advantage of the identified weaknesses. The race is on, and the period until you patch is known as the “attack window”, during which an attacker can take advantage of the vulnerability on your systems. If you only perform vulnerability scanning at a long interval, it may be months before you are even aware that one of your systems is un-patched and vulnerable, giving attackers greater opportunity to target you.

Scanning on a more regular basis doesn’t find more vulnerabilities or present a greater burden. What it does do is reduce the “attack window” between a vulnerability being exposed on your system and you becoming aware of it and patching it. It tips the scales in your favour, and against the attacker.
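The effect of scan frequency on the attack window can be sketched with simple arithmetic. Assuming a vulnerability is introduced at a uniformly random point between scans, it waits on average half a scan interval before it is even detected; the remediation time (14 days here, an assumed figure for illustration) is then the same in both cases:

```python
def average_attack_window(scan_interval_days: float, remediation_days: float) -> float:
    """Expected attack window: average wait until the next scan detects the
    vulnerability (half the scan interval) plus the time taken to fix it."""
    return scan_interval_days / 2 + remediation_days

annual = average_attack_window(365, 14)  # yearly scans: ~196 days exposed
weekly = average_attack_window(7, 14)    # weekly scans: ~17 days exposed
```

Under these assumptions, moving from annual to weekly scans shrinks the average attack window by more than a factor of ten, without the scanner finding a single extra vulnerability.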

Alignment with Agile Development Processes

Systems development used to be a slow process with long development cycles. However, the advent and adoption of approaches such as DevOps and Agile practices within organisations often means that development teams are using Continuous Deployment and other mechanisms to deliver multiple code deployments per day.

A key advantage of a vulnerability scanner is that since it is an automated tool it can be trivially integrated into DevOps and CI/CD pipelines. It can then execute scans of test and staging environments as frequently as on every code deploy, allowing vulnerabilities to be detected and remediated before they even make it to production.
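As a hedged sketch of what such a pipeline step might look like, the snippet below builds (but does not send) a request that a CI/CD job could use to trigger a scan of a freshly deployed staging environment. The endpoint URL, token, and payload fields are all invented for illustration; a real integration would use your scanner’s documented API:

```python
import json
import urllib.request

# Hypothetical endpoint and token: substitute your scanner's real API details.
API_URL = "https://scanner.example.com/api/v1/scans"
API_TOKEN = "example-token"  # placeholder; in CI this would come from a secret

def build_scan_request(profile_id: str, target: str) -> urllib.request.Request:
    """Build a POST request that a CI/CD pipeline step could send to kick off
    a scan of the environment that was just deployed."""
    payload = json.dumps({"profile": profile_id, "target": target}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_scan_request("staging-webapp", "https://staging.example.com")
```

Wired into a deploy stage, a step like this makes “scan on every deploy” as routine as running the test suite.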

Configuration Regressions

In contrast to approaches based on “static analysis” of source code (known as SAST), vulnerability scanning is conducted by performing active scans of live copies of running applications, interacting with them directly in exactly the same way as a customer (or attacker) does.

One of the key advantages of this approach is that the scanner doesn’t care *which* part of the application it is scanning is vulnerable, only that a vulnerability exists. Whereas SAST can only detect vulnerabilities in source code, a vulnerability scanner such as AppCheck can detect vulnerabilities wherever they exist, including in configuration errors on the underlying host that the application is running on, as well as in server software and network components such as web application firewalls, routers, and load balancers.

Scanning your entire web infrastructure regularly ensures that any misconfigurations introduced in systems or services are detected swiftly, just as vulnerabilities in code are.

Expiring Resources

It’s possible for new vulnerabilities to appear even when nothing has been deliberately changed and no new code has been deployed. This can occur either when a new vulnerability is discovered or published in existing software that is in service, or when the behaviour of a given resource changes even though no explicit change action has been performed. For example, SSL certificates can expire or be revoked, domain registrations can expire (permitting domain takeover), and products can go End of Life (EOL) and cease to receive critical security updates or advisories.
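Certificate expiry is a concrete example of a vulnerability appearing with no change action at all. As an illustrative sketch, the helper below computes the days remaining before a certificate’s expiry date, using the `notAfter` string format that Python’s `ssl.getpeercert()` returns (the dates are invented for the example):

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a certificate's notAfter date, in the string
    format used by ssl.getpeercert() (e.g. 'Jun  1 12:00:00 2025 GMT')."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - now).days

remaining = days_until_expiry(
    "Jun  1 12:00:00 2025 GMT",
    datetime(2025, 5, 2, 12, 0, tzinfo=timezone.utc),
)
```

A weekly scan running a check like this gives ample warning; an annual scan could easily miss the entire window between issue and expiry.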

Performing scanning regularly gives a greater chance that these issues are detected early, even when an organisation believes that no new vulnerabilities can have been introduced because no new code has been deployed.

Forensics & Exploit Detection

Vulnerability scanning typically performs a vulnerability identification function: it aims to detect vulnerabilities in exposed systems and services that are present due to weaknesses in software code or configuration, so that they can be remediated before an attacker can exploit them. However, whilst not their primary function, vulnerability scans can also provide a useful secondary control by detecting what are known as “indicators of compromise”: signs that an exploit by an attacker has already occurred. This might present as new and unexpectedly open ports or services on your systems, or potential malware presence that may represent a breach in progress.

Cybercriminals spend an average of 191 days inside a corporate network before they are detected, according to a 2018 IBM research report. During that time they can attempt to compromise an increasing number of systems and exfiltrate large amounts of data. Faster reaction to breaches limits the potential harm to your organisation and its customers. Scanning more frequently can let you spot signs of potential exploit earlier.

Included Dynamic Third-Party Code

It is increasingly common for web applications to include third-party client-side JavaScript libraries. The use of third-party JavaScript can deliver time savings for developers, allowing them to leverage common functionality in an easy and standardised manner without having to devote time to developing the functionality themselves.

Many of these libraries are dynamically loaded by websites from remote servers or cloud platforms. Whilst generally safe, any compromise of these third-party libraries means that any and all websites making use of them may be open to compromise. Because JavaScript or other libraries loaded in this manner are typically called directly from a CDN server, they will often have received no review by the organisation: the organisation controls only the call to load the library from a given URL, but has no control over the content returned, which can change immediately and without the organisation having any visibility. Frequent vulnerability scanning reduces this risk by detecting and flagging dangerous third-party JavaScript or other libraries as early as possible.
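One complementary defence worth knowing about here is Subresource Integrity (SRI): pinning a hash of the expected library in the `<script>` tag so the browser refuses to run the file if the CDN’s copy ever changes. As a sketch, the SRI value is just a base64-encoded SHA-384 digest (the script content and CDN URL below are invented for illustration):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for a script's content,
    suitable for a <script> tag's integrity attribute."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

integrity = sri_hash(b"console.log('hello');")
# Used as: <script src="https://cdn.example.com/lib.js"
#                  integrity="sha384-..." crossorigin="anonymous"></script>
```

SRI stops a silently changed CDN file from executing, while frequent scanning helps flag the libraries that are being loaded in the first place.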

Retesting After Fixing

An advantage of vulnerability scanning is that it allows easy rescanning after a vulnerability has been fixed, verifying and providing assurance that the claimed fix has in fact been effective in remediating the vulnerability. In 2017 Equifax experienced a major data breach involving the theft of sensitive data relating to 145 million customers. Subsequent investigations uncovered indications that Equifax staff were aware of the requirement to patch their systems against a known and published vulnerability, but failed to adequately retest systems after applying fixes to ensure that all affected systems had been remediated fully and effectively.
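Fix verification can be sketched as a simple subset check: any finding that was claimed fixed but still appears in the rescan has not actually been remediated. The finding identifiers below are invented for illustration:

```python
# Hypothetical finding IDs from the scan in which the issues were first reported
# and which the responsible team has since claimed to have fixed.
claimed_fixed = {"sql-injection-login", "weak-tls-config"}

# Findings present in the follow-up rescan after the fixes were applied.
rescan_findings = {"weak-tls-config", "missing-security-header"}

still_present = claimed_fixed & rescan_findings  # fixes that did not take
verified = not still_present
```

Because a rescan is automated and effectively free, there is no reason for a “fixed” vulnerability ever to go unverified.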

If an organisation only performs a scan annually or quarterly, it may fail to verify that vulnerabilities discovered in previous scans, and believed to have been remediated, are genuinely resolved and present no further risk.


The bottom line is that performing regular vulnerability scans, perhaps more often than you might previously have considered appropriate, provides consistent visibility into your vulnerability landscape. It can provide the basis for a consistent and manageable workload and rhythm for your team, while at the same time reducing risk for your customers by minimising the duration of attack windows and the chance of potential exploit.

Check out AppCheck for more information.