As part of any security awareness training we cover passwords. We teach users how to choose secure passwords: the right length, a mix of characters, passphrases and so on. However, the average person has to log on to over 170 sites and services, yet typically has only 3 to 19 passwords. That means a lot of weak or shared passwords are in use, and some of those will belong to your staff.
That is why both our partner KnowBe4 and the National Cyber Security Centre strongly recommend using a password manager: it effectively reduces password reuse and improves complexity. But you may be wondering whether it’s really worth the risk.
Is it safe to store all of your passwords in one place? Can cybercriminals hack them? Are password managers a single point of failure? Take a look at this on-demand webinar by Roger A. Grimes, KnowBe4’s Data-Driven Defence Evangelist, where he walks you through these questions and more. He also shares a new password manager hacking demo from Kevin Mitnick, KnowBe4’s Chief Hacking Officer, that will reveal the real risks of weak passwords.
Password hygiene should be part of your security culture, from the onboarding process right up to the board.
Check out KnowBe4 for more information about effective, new school, security awareness training that successfully changes users’ behaviour.
Factors to Consider When Selecting a Reliable Password Manager
With many password managers available, finding the right solution can be quite challenging. Look out for some of these password manager features to know you’ve selected the right one.
1. Zero-Trust Security – enforces strict user authentication and least-privilege access, restricting each user to the resources necessary for their role. This ensures that only legitimate users can reach your systems at every step, greatly reducing your organisational risk.
2. Regulation Compliance – Here are some standards your password manager should comply with:
Federal Risk and Authorization Management Program (FedRAMP). Although FedRAMP mainly applies to US government systems, a password manager that complies with it has demonstrably stronger security controls.
General Data Protection Regulation (GDPR). A password manager in compliance with GDPR is likely handling your data appropriately.
Payment Card Industry Data Security Standard (PCI DSS). This standard sets requirements for any organisation that stores, processes, or transmits debit or credit card data.
3. Compatibility with Your Systems and Software
4. Encryption – A password vault is the part of a password manager that actually stores your credentials for multiple applications. The vault must be encrypted, scrambling credentials so they are unreadable to attackers. The provider should also only ever store your vault in its encrypted form, so that even they cannot access your credentials.
5. Automation (Browser Extensions Should Work Automatically)
6. Password Generators
7. Multi-Factor Authentication (MFA) – According to research by Microsoft, MFA can prevent 99.9% of account compromise attacks. A reliable password manager should require 2FA or MFA in addition to your master password before providing access to your account.
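To illustrate what a built-in password generator (feature 6 above) does under the hood, here is a minimal sketch using Python's `secrets` module. The length and the character-class policy are arbitrary choices for illustration, not any product's actual behaviour:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits, and punctuation.

    Uses the secrets module, which draws from the OS CSPRNG and is
    suitable for security-sensitive randomness (unlike the random module).
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Re-draw until the password contains at least one lowercase letter,
    # one uppercase letter, and one digit, so it satisfies common
    # complexity policies.
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)):
            return password
```

A real password manager generates and stores such a value for every site, so the user never needs to remember (or reuse) it.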
Need a Password Manager? Consider our partner Keeper. Keeper is an easy-to-use password manager that is built with a proprietary zero-trust architecture and end-to-end encryption to secure your credentials.
Get in touch with us for a free trial: email@example.com, 01628 362 784.
The newly published Email Security Risk Report reveals that 99% of cybersecurity leaders are stressed about email security, and that 93% of organisations experienced security incidents in the last 12 months. It is easy to see why.
For housing associations, the risk email poses to sensitive data is pervasive. They operate complex infrastructure environments and need to ensure employees are appropriately and effectively protected, whether they are working in the office, in the field, or at home.
Housing associations are a prime target for phishing attacks, with this risk continuing to grow year on year. Cybercriminals not only want access to housing associations’ systems and data. They also want to use compromised mailboxes to launch further phishing attacks that target the wider supply chain.
Additionally, high volumes of emails are sent and received by a central core of employees who need to communicate with tenants and suppliers who are spread out across the region. This significantly increases the surface area for human error and accidental data loss.
In this blog, we look at five ways housing associations can reduce their email security risks. This includes detecting and preventing inbound threats and outbound data loss in Microsoft 365, while improving employees’ security behaviour.
1. Treat inbound and outbound email security as two parts of a single problem
71% of cybersecurity leaders consider inbound and outbound email security to be a single problem they need to solve.
Credential harvesting is one of the primary motivations behind phishing attacks. Analysis of platform data provided by our partner Egress reveals that for a housing association of approximately 1,200 employees:
18% of phishing emails contained malicious hyperlinks as their payloads, a common tactic to harvest credentials
29% of attacks targeting the organisation were sent from compromised legitimate email addresses, including supply chain accounts
Research also shows that 85% of account takeover attacks start with a phishing email. Consequently, there is a perpetual cycle where an inbound phishing attack leads to compromised accounts used in outbound attacks.
Additionally, treating email security holistically, preventing inbound and outbound threats together, enables housing associations to provide a streamlined experience for both employees and administrators. End-users benefit from one consistent experience, plus ongoing education across a broad spectrum of threats (see below for more on real-time teachable moments). At the same time, administrators benefit from analytics insights across their whole environment, with all threats gathered into a single console where they can prioritise and manage their responses.
2. Understand the risks in your environment
As the Email Security Risk Report shows, despite implementing native security controls in Microsoft 365 and secure email gateways (SEGs), threats continue to get through.
Cybercriminals are aware that the signature-based detection used in these technologies is effective at identifying known threats, so they continue to innovate. This includes zero-day and emerging attacks for which no signature yet exists, and an increasing use of social engineering so there is no payload to detect at all. Attacks sent from legitimate but compromised accounts can also bypass this detection.
Platform data from Egress reveals that for the housing association of approximately 1,200 employees, 38% of the phishing attacks targeting the organisation in a 40-day period got through its existing Microsoft 365 defences.
On the outbound side, static rules cannot scale to accommodate the flexible way housing associations need to use email. They are therefore limited in their ability to prevent security incidents caused by human error (such as adding the wrong recipient, attaching the wrong file, or forgetting to use Bcc), as well as intentional data exfiltration, which can happen both maliciously and with the best of intentions, for example sending data to a personal device to work on or print at home.
Platform data from Egress analysing the emails sent by a housing association of approximately 600 employees highlighted that 94% of incidents detected over a 60-day period were caused by human error and data loss prevention (DLP) policy violations, while only 2% involved malicious exfiltration.
As a result of these risks, housing associations need to examine and invest in technology that can fill the gaps in email security.
3. Understand that intelligent email security can detect advanced threats
The email security market has innovated to fill these gaps. New integrated cloud email security (ICES) solutions have come to market that use AI technologies to detect advanced email security threats.
These solutions combine techniques such as natural language processing (NLP) and natural language understanding (NLU) with other detection methodologies to identify and neutralise advanced phishing attacks. These include business email compromise and impersonation attacks, invoice and payment fraud, attacks that rely on social engineering, and those sent from compromised supply chain accounts.
For outbound detection, solutions combine machine learning with social graph technology to deeply understand how each individual employee uses email and to identify abnormal behaviour. As a result, these solutions are highly scalable and reduce end-user friction by only prompting when a genuine risk is detected, such as an incorrect recipient being added to an email or the wrong document being attached.
4. Use real-time teachable moments to ‘nudge’ employees away from risk and change behaviour for the long term
Newer, intelligent solutions can provide real-time warnings to end-users at the moment they need them most – as a risk is detected.
Phishing emails can be neutralised by the software and delivered with warning banners into the inbox. The employee cannot interact with dangerous content but is provided with clear explanations of the risk that has been detected using real phishing attacks as examples.
On the outbound, only prompting end-users when genuine risk is detected and with a clear explanation of why they have made a mistake (rather than a static prompt that is triggered for every email without changing its message) adds value to their day-to-day lives, without creating friction.
This approach for both inbound and outbound real-time teachable moments is proven to be more effective than static, unchanging warnings. Plus it augments security awareness and training (SA&T) programmes through ongoing education.
5. Reduce the burden on administrators
As mentioned above, a holistic approach to inbound and outbound email security gives administrators a single view of the risks in their environment, so they can act more effectively. Additionally, when introducing technologies to your organisation, ensure they do not create layers of administrative complexity. That means reducing the number of solutions that quarantine emails for administrators to review, and using self-learning technology to prevent data loss rather than maintaining sprawling libraries of static rules.
Ready to level up your email security?
The S3 team would be delighted to meet with you to discuss your current email environment and technology stack, and the risks to your organisation. Get in touch today to book your no-strings-attached discussion: 01628 362 784, firstname.lastname@example.org.
In their 2021 Market Guide for Email Security, industry analyst Gartner introduced the acronym ‘ICES’, which stands for integrated cloud email security. They also predicted that these platforms would make up 20% of anti-phishing solutions by 2025, up from 5% in 2021. You might also see the acronym ‘CAPES’, coined by industry analyst Forrester, used to describe these platforms; it stands for ‘cloud-native API-enabled email security’. Since Forrester’s definition mostly agrees with Gartner’s, we’ll use ICES throughout this article to describe these solutions, explaining their origin, capabilities, and the reasons you need one.
The history of ICES
Earlier Gartner Market Guides referred to cloud email security supplements (CESS) and integrated email security services (IESS). In the 2021 guide, they merged these categories for three reasons:
Proliferation of advanced phishing attacks. – Historically, phishing emails concealed malware in attachments or linked to servers from which it was downloaded. Today, however, cybercriminals have evolved their attacks: they increasingly send payload-less phishing emails, and attacks containing URLs that link to seemingly innocent material but are tailored to harvest credentials for future attacks. These emails are getting through existing email security, so a new class of solution was required.
Intelligent detection capabilities were developed. – Intelligent detection was brought to market as a result of advancements in machine learning, social graphs, and linguistic analysis, making it easier to identify advanced phishing attacks.
Adoption of Microsoft 365. – Cloud email platforms make it possible to deploy email security solutions that conduct post-delivery inspection of emails and threat remediation.
Hence adoption of ICES solutions is accelerating.
Easy deployment for an additional layer of security
ICES systems are not intended to replace current email security; rather, they supplement it and address the use cases it cannot. As a result, they coexist with existing secure email gateways (SEGs) and with the built-in security offered by Microsoft 365.
ICES security can also be set up in a matter of minutes. There’s also no need to change the domain name services mail exchanger (DNS MX) record.
There are two popular deployment techniques for ICES solutions, and both can be used with just a few clicks:
Utilise the Microsoft Graph API to retrieve emails from the inbox post-delivery and examine them. If a phishing email is discovered, either quarantine it or add a warning banner before returning it to the inbox. If no threat is detected, the email is returned to the inbox in its original format.
Use mail flow rules in Microsoft 365 to divert emails to the ICES platform for inspection. If a phishing email is detected, either quarantine it or add a warning banner before sending it to the inbox. Again, if no threat is detected, the email is sent to the inbox.
Regardless of how the solutions are deployed, both approaches allow the Graph API to be used to remediate emails that were delivered as legitimate but later discovered to be malicious.
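A highly simplified sketch of the post-delivery inspection logic both deployment routes share might look like the following. The `Email` class, the phrase list, and the two-hit threshold are illustrative assumptions, not any vendor's real API or detection method (real ICES platforms use machine learning rather than keyword matching):

```python
from dataclasses import dataclass, field

# Hypothetical indicators; real platforms score many richer signals.
SUSPICIOUS_PHRASES = {"verify your account", "urgent payment", "password expired"}

@dataclass
class Email:
    sender: str
    subject: str
    body: str
    banners: list = field(default_factory=list)

def inspect(email, quarantine):
    """Post-delivery inspection: quarantine a likely phish, add a warning
    banner to a suspicious one, or return the email to the inbox untouched."""
    text = (email.subject + " " + email.body).lower()
    hits = [p for p in SUSPICIOUS_PHRASES if p in text]
    if len(hits) >= 2:                # strong signal: remove from the inbox
        quarantine.append(email)
        return None
    if hits:                          # weak signal: warn the user instead
        email.banners.append("Caution: this email may be a phishing attempt.")
    return email
```

Whether the email arrives via a Graph API pull or a mail flow rule redirect, the decision at the end is the same: quarantine, banner, or deliver unchanged.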
It’s worth noting that some have criticised the first method for placing too much reliance on the Microsoft Graph API, which can throttle connections during periods of high volume; the effects of this are well-documented on Microsoft’s website. Throttling can leave potentially harmful emails in users’ inboxes for tens of seconds, if not minutes, during which time a user may fall victim to a phishing attack. A second limitation of this method is the ICES platform’s inability to recover emails that have already been synced to devices using their default email clients rather than the Outlook app, again leaving the user with access to potentially harmful emails on that device.
Consolidating around Microsoft
Gartner states that 75% of enterprises are adopting a ‘vendor consolidation’ strategy. Organisations are realising that they are underutilising a large portion of the capabilities they have already paid for, in particular with their Microsoft E3 or E5 licences.
ICES solutions enable organisations to achieve these consolidation goals: by enhancing Microsoft’s native email security, they open up the possibility of removing the SEG altogether.
ICES provides different functionality to a SEG
As they use self-learning technologies, ICES vendors frequently describe their products as ‘intelligent’. This contrasts with SEGs’ use of rules and signature-based policies, which require ongoing upkeep and upgrading by IT and security personnel.
ICES platforms offer three crucial capabilities:
Intelligent detection. – Three key detection technologies are used by the top ICES platforms:
Machine learning for behaviour-based security (understanding typical email behaviours and highlighting anomalies).
Social graph technology to learn the normal sender/recipient trust relationships and flag anomalies.
Linguistic analysis to detect social engineering attacks.
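A toy sketch of the social-graph idea: learn each sender's usual recipients, then flag any recipient the sender has never emailed before. The class and method names are hypothetical, and real platforms weigh far richer behavioural signals than simple set membership:

```python
from collections import defaultdict

class SocialGraph:
    """Toy social-graph model: record who each employee emails,
    then flag recipients that fall outside that learned pattern."""

    def __init__(self):
        self.seen = defaultdict(set)

    def record(self, sender, recipient):
        """Learn from normal email traffic."""
        self.seen[sender].add(recipient)

    def is_anomalous(self, sender, recipient):
        # A recipient the sender has never emailed before is worth
        # prompting about, e.g. a lookalike domain or a misdirected email.
        return recipient not in self.seen[sender]
```

The same check works in both directions: inbound, a never-before-seen sender/recipient pair raises suspicion; outbound, it catches a mistyped or lookalike address before the email leaves.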
User engagement. – ICES platforms are designed to handle the grunt work: they must identify advanced and complex threats that have eluded other security measures, and they are the final line of defence before a recipient faces a phishing email. Platforms do not necessarily quarantine questionable emails. Instead, they add a warning banner, often colour-coded to indicate the level of suspicion. Many banners also include contextual information about the nature of the threat, and some even give users the option to click through for more details or to mark an email as malicious or safe. These real-time teachable moments reduce risk for the long term and augment an organisation’s security awareness and training (SA&T) programme.
M-SOAR capabilities. – When a user reports an email as malicious, or a suspicious email is found through other channels, a security analyst must act swiftly to analyse, contain, and eliminate the threat. Leading ICES platforms achieve this through search-and-destroy capabilities, surfacing all emails matching potential hazards or indicators of compromise (IOCs). They frequently provide a visual of the original email, and they enable one-click remediation of all matching emails.
Going beyond ICES
Many organisations are looking to remediate risks beyond the threats that ICES can identify. Intelligent detection technologies are revolutionising outbound threat protection in a similar way to how they have changed incoming threat protection. When Gartner established ICES in the 2021 Market Guide, they also coined the phrase ‘email data protection’ (EDP).
EDP increases security against data breaches caused by human error, which can result in emails being sent to the wrong recipients, carrying the wrong attachments, exposing too many people in the ‘To’ field, or containing critical information sent without encryption. It uses the same intelligent technologies mentioned above: it learns typical sender and recipient behaviour and alerts the sender when an abnormality is detected. The intention is to nudge the user at the point of risk by interrupting their regular process, similar to the way ICES adds warning banners.
ICES providers are striving to add EDP to their portfolios as organisations start to quantify the human-activated risk that leads to data breaches. Few, though, can offer both inbound and outbound protection in their entirety.
Get in touch to learn more about Integrated Cloud Email Security (ICES) and EDP platforms, and about selecting and justifying the best solution for your needs. Email: email@example.com. Tel: 01628 362 784.
All You Need To Know About Patch Management And Why Automated Patch Management Will Simplify Your Sysadmin’s Life – by ANDRA ANDRIOAIE.
What is Patch Management?
Patch management is the process of distributing and applying updates to software. These patches are frequently required to fix flaws in the software known as vulnerabilities. It entails the acquisition, review, and deployment of patches across an IT infrastructure. A patch is a piece of software code that enhances a programme that has already been installed; you may think of it as a ‘bandage’ applied to software. Software developers produce a patch each time a security flaw or bug is found, or when the program’s functionality has to be improved. Patches can be applied across your whole infrastructure, including servers, routers, IoT devices, software, operating systems, and more.
Why Is Patch Management Important?
Regular software patches improve the functionality, stability, and security of your systems. Not to mention that recent years have seen a rise in system vulnerabilities: take a look at PrintNightmare, which targeted the Windows Print Spooler, or the 16-year-old flaw in print drivers from HP, Samsung, and Xerox. And recall the infamous WannaCry ransomware attack, which succeeded because of unpatched systems abused by malicious hackers. Microsoft had released a security patch addressing the Windows vulnerability two months before the ransomware attack began, but many organisations did not update their systems in time, leaving them vulnerable and open to attack.
Security: patch management fixes vulnerabilities in your software and applications that are susceptible to cyber-attacks
System uptime: patch management ensures your software & applications are kept up to date & run smoothly
Compliance: Cyber insurance, cyber essentials and other regulatory bodies often demand that companies maintain a certain level of compliance and patch management is a necessary piece of adhering to these standards
Feature upgrades: patch management can go beyond software bug fixes, guaranteeing you have the most recent and efficient product with the latest features and functionality.
Benefits of Patch Management
Reducing the attack surface: applications & software may have various vulnerabilities a hacker could exploit. By patching them, a company is less exposed to cyberattacks or security breaches. Patching works as a prevention measure against many types of malware. All of which can spread fast throughout a network.
Enhanced functionality: as well as removing software flaws, patching can also improve features, and therefore enhance functionality.
Achieving compliance: the required level of conformity with different regulations is accomplished with satisfactory audit results. This also saves you from receiving unnecessary fines for not meeting compliance standards.
Increased productivity: patches fix errors and bugs and therefore increase system stability. Users won’t lose time to recurring system bugs or downtime, so they will be more productive.
Spotting old software: if your software vendor is out of business or has another problem, a patch management solution can help you identify the software that has not received updates in a long time. You will then have more visibility and can replace it in a timely manner.
Risks of Not Patching Your Software
Your business is more exposed to cyberattacks – hackers can exploit any found vulnerability.
The financial impact of a cyber attack – Successful cyber attacks can cost a company millions, from downtime and recovery to reputational damage. The cost of recovery will certainly exceed the cost of implementing an automated patch management solution.
Potential loss in productivity – you will be left behind with an outdated system. You can waste hours as you are left struggling to solve issues caused by not patching in due time.
You can be fined because of a lack of compliance.
Cyber attacks are on the rise and you have no control over that. However, you can have full control over the vulnerabilities within your organisation. Poor patch management has been one of the causes of the biggest cyberattacks to date, so patch management plays a significant role in effective organisational cyber security.
The Patch Management Process
Here are the key steps that must be followed to ensure a seamless & effective patch management process:
Make an IT asset inventory of all your current software solutions and devices. Check who has what on their computers, and which software is out of date.
Categorise assets and patches according to priority and risk. An inventory will help you determine which applications are vulnerable, their level of importance and urgency, and whether the available patches meet your software needs. Since patch management is part of vulnerability management, choose a patch management solution that fits your needs and targets the most vulnerable parts of your system: a tool that searches for available patches, analyses the results to identify what needs to be patched, applies the patches, and monitors the process.
Test the patches – Before putting patches into production, test them in a lab environment. Before deploying live, check that your software supports the vendor’s patches you want to use; a smart way to find out whether your network will actually support a patch is to use a testbed that replicates your production network. This step has drawbacks: it takes time, consumes resources from the business, and postpones the actual patching process.
Plan the release into production and put a patch management policy in place. A patch management policy should lay out specific guidelines to ensure your patching process operates correctly. It should schedule patches to be applied on time, and the patching results should be documented appropriately.
Deploy the patches into production. Even if your lab tests went without a hitch, there’s always a chance you could encounter issues in a production environment. When going live, it is recommended to deploy in small batches.
Assess and document the results. You should document and analyse every patching cycle. This will help you to continually improve and optimise the patching process.
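The categorise-and-prioritise step in the process above can be sketched as a simple sort by severity, with ties broken by how long a patch has been waiting. The severity scale and field names here are illustrative assumptions, not any product's schema:

```python
# Lower number = patch sooner. Critical security fixes always come first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def prioritise(patches):
    """Order pending patches so critical fixes are applied first,
    breaking ties by patch age (oldest available first)."""
    return sorted(
        patches,
        key=lambda p: (SEVERITY_ORDER[p["severity"]], -p["days_available"]),
    )
```

Real patch management tools feed this kind of ordering from vulnerability scanner output (e.g. CVSS scores) rather than a hand-written severity table.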
Patch Management Best Practices
Creating the optimal patch management strategy starts with evaluating all the necessary steps involved.
1. Create an asset inventory – Keep track of your systems’ configuration: know which hardware and software your organisation has, and which operating system versions are currently in use.
2. Analyse the risk levels and assign priorities – Undertake a risk assessment to investigate which of your systems are non-compliant or vulnerable and therefore need to be patched urgently. Some software will need patches sooner than others, so you need correct identification and prioritisation in place.
Setting priorities and identifying goals for patching are crucial steps in the patch management process: identify the software that has to be patched and establish a plan, in order to avoid confusion and enable auditing procedures.
Patch management is part of the wider process of vulnerability management, which detects flaws such as system misconfigurations, open ports, or insecure registry settings. Vulnerability management finds the vulnerabilities before patching fixes them, so the two processes should complement each other and be used together; a tool that combines both will deliver better mitigation.
Critical vulnerabilities should be patched first.
3. Consolidate software versioning – Your software and OS versions should be standardised to improve patching speed, effectiveness, and stability.
4. Create a patch management policy – A structured and well-defined process will constantly protect your system against threats. Your patch management process should be done frequently rather than occasionally. Therefore, a clear schedule must be followed when applying patches in order to prevent mistakes.
For instance, if patches are being deployed across multiple policy groups, stagger the deployments: wait a set amount of time before patching the next policy group, so that the first group’s patches have time to take effect.
5. Do not delay important security patches – The process of patch management should not be postponed for too long or too often. Certain patches with a high security risk should always be applied as soon as possible to avoid being left open to attack.
6. Test on a small sample before wide deployment – Although software patches should be implemented in a timely manner, it’s also wise not to rush patching without making sure the patches suit your system, as doing so could cause issues. An important patch management best practice, therefore, is to test.
Patch quality can vary by vendor, so test your patches on a small number of devices first to evaluate how they perform; new patches may have bugs that haven’t yet been found. If everything goes smoothly, you can then roll them out to the rest of your estate, avoiding widespread damage.
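The small-sample-first approach can be sketched as a staged rollout that halts at the first failing batch, so only that batch ever needs rolling back. The batch size and the `apply_patch` callback are illustrative assumptions:

```python
def batches(devices, batch_size):
    """Yield devices in small batches so a bad patch is caught early
    rather than hitting the whole estate at once."""
    for i in range(0, len(devices), batch_size):
        yield devices[i:i + batch_size]

def staged_rollout(devices, apply_patch, batch_size=5):
    """Apply a patch batch by batch, stopping at the first failure so
    the remaining devices are spared and the failed batch can be
    rolled back. Returns (devices attempted, whether all succeeded)."""
    patched = []
    for batch in batches(devices, batch_size):
        results = [apply_patch(d) for d in batch]
        patched.extend(batch)
        if not all(results):
            return patched, False   # halt: investigate and roll back
    return patched, True
```

In practice `apply_patch` would wrap your deployment tooling and report back per-device success, and a failed batch would trigger the rollback plan described in point 7.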
7. Have a rollback plan – You should be able to restore your software to its previous state quickly, so that in case of errors or conflicts you can recover with minimal downtime.
8. Automate the patching process – Patching manually can be extremely time-consuming. An automated process is better in terms of speed and accuracy, and helps avoid human error. Not only does it help minimise the risk of malware infections, it also frees up your sysadmins to focus on other security-related tasks, and it gives you full visibility of your IT environment so you can diligently track vulnerabilities and patches.
In order to effectively manage software updates, a good automated patch management solution is essential.
Maintaining the security, integrity, and accessibility of data and systems is crucial for every organisation, and your patching process should be as thorough as possible while remaining simple and fast. The more you keep on top of your patching requirements and keep all your systems updated, the less likely your company is to be compromised.
Patch management is crucial to ensuring strong organisational protection. However, it should not be viewed as the answer to all security issues, but as an essential layer of protection for your business, alongside, for example, DNS filtering, endpoint antivirus and firewall, and Privileged Access Management (PAM).
Zero-Trust is a security framework of products or services that removes inherent trust from your organisation. Instead, it requires strong, regular authentication and authorisation of all devices and users, together with context and policy adherence. Zero-Trust Network Access (ZTNA), a term coined by Gartner, applies the concept of Zero Trust to controlling access to the company’s resources at the network level. With the new model of remote working and instant access to applications, services, and data at any location or time, ZTNA is the potential future of network security.
Cyberattacks aren’t just a direct threat to an organisation’s income and reputation. In fact, the threat to business continuity is just as concerning as the spectre of data loss.
Research points to the scale of the risk. In 2021, one in five mid-market businesses (21%) suffered a ransomware attack and subsequently paid the ransom. Each successful ransomware attack brings very real disruption, with essential files, systems, or devices locked away, stopping workers from fixing the problem and continuing with business as usual while they are blocked from accessing essential information or even the entire network.
No matter the sector, the impact of this kind of disruption can be serious. Whether it’s knowledge-based companies unable to access their email servers and interact with clients, or utilities providers unable to log jobs and request parts, continuity breaches are no joke.
The Remote Risk
This isn’t a static problem: the scale and complexity of the threats involved are growing exponentially. The corporate boundaries that used to mark the line between ‘safe’ and ‘unsafe’ have dissolved. Work is no longer a place. It is an activity, and the pandemic has only accelerated that with the move to remote work in many industries.
That means defining what’s a safe network, device, or login and what isn’t is now much more complex. Keeping on top of security for hundreds or even thousands of individual users, all connecting via a whole range of setups, seriously increases the risk of a continuity-breaking attack.
Yet research indicates that over half (51%) of mid-market firms admit they have not purchased cybersecurity products that protect against threats for hybrid and remote workers. And 41% of organisations admit that future-proofing their cyber defences ‘needs development’. Therefore, security needs a fundamental rethink to deliver rapid and secure access across business ecosystems.
Much has been said about the end of the traditional perimeter and the need for organisations to adapt and develop a Zero Trust security stance in response. But what does this mean in practice?
In short, when it comes to providing secure access to network resources, a Zero Trust security model turns the old idea of ‘connect then authenticate’ on its head. Instead, it establishes a paradigm in which trust is consistently re-evaluated based on real-time behavioural data, not a single successful login. Think of it like those scenes in blockbuster movies where the heroes infiltrate the villain’s lair – one mistake and all the alarms in the building are blaring. Zero Trust is more nuanced than that, but the basic principle is the same. If something looks suspicious, stop it first and ask questions later. Don’t just let it keep walking around because it flashed the right badge on the way in.
‘Trust no one’ may seem like an extreme mantra, but in today’s cybersecurity landscape, it’s essential. Here are four key steps to guide you along your way in understanding and implementing a Zero Trust position.
1) Trust no-one
This is the cardinal rule for perimeter-less security. A Zero Trust position ensures that users, devices, and logins are continually assessed and re-evaluated before access is granted to corporate resources. Rather than operating a ‘one and done’ policy, a Zero Trust approach dictates that every attempt to access potentially confidential information or systems should be met with checks and balances.
2) Follow the user
For seamless Zero Trust, security needs to go where people go. It needs to flawlessly adapt to whatever device, network or location they are using. Rather than denying access to unrecognised devices or simply requesting a password, businesses need systems that can draw on more complex datasets to make context-aware decisions. In other words, they need…
3) Smarter Security
…which can fuse context and identity to understand what ‘normal’ looks like and autonomously respond to suspicious behaviour. Truly smart security systems can analyse data about geolocation, time of day, speed of movement (i.e. logging in from two locations without the time required to physically travel between them), speed of access (i.e. clicking through files faster than humanly possible), and more to correctly identify risky behaviour – and shut it down.
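As an illustration, the ‘impossible travel’ check described above can be sketched in a few lines. This is a minimal, hypothetical example (the coordinates, field names, and the 900 km/h airliner-speed threshold are assumptions for illustration), not how any particular product implements it:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(prev_login, curr_login, max_speed_kmh=900):
    """Flag a login pair whose implied travel speed exceeds a commercial flight."""
    distance = haversine_km(prev_login["lat"], prev_login["lon"],
                            curr_login["lat"], curr_login["lon"])
    hours = (curr_login["ts"] - prev_login["ts"]) / 3600
    if hours <= 0:
        # Simultaneous logins from two different places are inherently suspicious
        return distance > 0
    return distance / hours > max_speed_kmh

# London, then New York two hours later: ~5,570 km implies ~2,785 km/h
london = {"lat": 51.5074, "lon": -0.1278, "ts": 0}
new_york = {"lat": 40.7128, "lon": -74.0060, "ts": 2 * 3600}
print(is_impossible_travel(london, new_york))  # True
```

Real systems combine many such signals (time of day, device, access speed) rather than relying on any single rule, but the principle is the same: compare observed behaviour against what is physically and historically plausible.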
4) Future-proof your investments
Finally, it’s worth bearing in mind that an effective security system is never static. The demands you face will change, as will the needs of your workforce. Given the need to continuously iterate, it’s advisable to consider combining your network and security services in one place. This will provide rapid, secure business access right across an environment, and enable upgrades without having to laboriously integrate new point products with old ones.
A single platform that provides all your core security requirements in one place, is a key consideration for maintaining continuity. It gives you the intelligence and automation to protect an increasingly mobile workforce whatever the future holds.
Why scanning more often could deliver surprising benefits you may not have considered.
Can I just scan once per year, like with a penetration test?
Penetration tests are uniquely effective in uncovering highly complex vulnerabilities in web applications: those which may require detailed human awareness and context in order to detect. However, whilst irreplaceable, penetration tests can also be relatively expensive to deliver, because they require significant time investment by highly skilled human penetration testers. Because of the costs, many organisations may, understandably, conduct them only on an annual or biennial basis. However, automated vulnerability scanning (also known as “DAST” or “Dynamic Application Security Testing”) operates on a very different paradigm. A common mistake by those establishing a vulnerability scanning programme for the first time is to apply existing schedules for penetration testing to vulnerability scanning without alteration. It is certainly possible to run a vulnerability scan only once per annum. However, in doing so, many of the benefits that the vulnerability scanning paradigm makes possible are left on the table.
How often should I run vulnerability scans?
When a vulnerability is introduced to a website or service, the clock starts ticking on a window of opportunity for attackers to exploit it before the organisation operating the service notices the vulnerability and remediates it. In cybersecurity this is known as the “attack window” for a vulnerability. The longer the attack window is open, the greater the opportunity for attackers and hence the greater the risk to the organisation.
The key advantage of vulnerability scanning is that it can be executed as often as required. This means it can be leveraged to detect vulnerabilities much sooner, allowing them to be quickly remediated and reducing the time available for attackers to exploit them. For all the strengths of penetration testing, it is not feasible to perform it weekly. A vulnerability introduced one week after a penetration test may go unnoticed for up to a year until the next penetration test is performed. Vulnerability scanning can help plug this gap: it “fills in” between scheduled penetration tests to uncover many common vulnerabilities almost as soon as they are introduced, reducing the risks to you, your business, and your customers.
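A rough back-of-the-envelope calculation makes the point. If vulnerabilities are introduced at random moments, a vulnerability waits on average half a scan interval before it is detected. The sketch below (the seven-day remediation time is an assumed figure for illustration) compares annual and weekly scanning:

```python
def mean_attack_window_days(scan_interval_days, remediation_days):
    """Expected exposure time: a vulnerability introduced at a random moment
    waits, on average, half a scan interval before detection, then sits open
    for however long remediation takes after that."""
    return scan_interval_days / 2 + remediation_days

# Annual vs weekly scanning, assuming 7 days to remediate after detection
print(mean_attack_window_days(365, 7))  # 189.5 days exposed on average
print(mean_attack_window_days(7, 7))    # 10.5 days exposed on average
```

Under these illustrative assumptions, moving from annual to weekly scans shrinks the average attack window by roughly a factor of eighteen, even though the remediation effort per vulnerability is unchanged.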
Doesn’t running scans more often increase workload and require more resources?
It seems intuitive to assume that running vulnerability scanning more often must surely require more – and potentially unmanageable – amounts of resources, including time commitments from already overburdened security teams.
However, this is typically not the case. Vulnerability scans operate very differently to penetration tests: they are “configure once, run forever”. That is, the scanning itself is completely automated. Once a scan profile is created to define how a scan should be performed, there is no additional burden in having it automatically re-execute on a repeating schedule as often as required, compared with executing it once. This can all be delivered at no additional cost, and without any manual action or intervention for subsequent scans.
To understand why this is the case, we will take a look at some of the benefits most commonly cited by customers who have adopted more frequent vulnerability scanning, and how this is made possible.
Reduction in Workload Volatility
Managing a security team’s workload becomes harder the greater the volatility in that workload. Preventing those days where everything lands all at once is key to establishing a manageable cadence and rhythm for a team, ensuring the delivery of consistent performance.
For vulnerability remediation, ask yourself whether you would prefer to receive:
A vulnerability scan performed once a week, with each scan finding two new vulnerabilities, giving you two vulnerabilities to remediate each week; or
A single vulnerability scan performed in the second week of March, delivering a flood of over one hundred vulnerabilities in one overwhelming mammoth batch.
Hopefully you’re thinking “Option 1”. Performing regular vulnerability scanning at a higher cadence means that vulnerabilities are discovered more often but in far smaller numbers: the “delta”, or difference from one scan to the next, is smaller the more often scans are run.
Business as Usual
Tasks become easier through repetition and familiarity. Performing processes more often not only makes them standard practice, but improves performance at the tasks themselves. Where vulnerabilities need to be remediated by other teams, making remediation a standard “business as usual” activity ensures that managers can budget for it, for example by assigning 5% of a team’s time to vulnerability remediation on an ongoing basis. This is far more palatable for those managing technical teams than having a huge set of vulnerabilities “drop” on their team in one unmanageable package, derailing other delivery commitments. Keeping remediation out of one big “lump” of work helps ensure that delivery timescales for other work are not impacted, prevents frustration, and fosters co-operation between teams.
“Little and often” makes the vulnerability management process business as usual, rather than an extraordinary demand for resources on an irregular basis. Vulnerability management becomes part of the status quo, and providing regular vulnerability reports from frequent scans, each with a small delta to the last, helps everyone.
Reducing the Attack Window Reduces Risk
When a new vulnerability is reported, it triggers a race against the clock between the various actors involved. From an organisation’s point of view, teams need to roll-out the necessary security patches to rectify the flaw as soon as the vendor supplies them. However, at the same time, attackers will start developing exploits with malicious code that can take advantage of the identified weaknesses. The race is on, and the period until you patch is known as the “attack window” during which an attacker can take advantage of the vulnerability on your systems. If you are only performing vulnerability scanning on a long interval between scans, it may be months before you are even aware that one of your systems is un-patched and vulnerable. This gives attackers greater opportunity to target you for attack.
Scanning on a more regular basis doesn’t find more vulnerabilities or present a greater burden. What it does do is reduce the timescale, or “attack window”, between a vulnerability being exposed on your system and you becoming aware of it and patching it. It tips the scales in your favour, and against the attacker.
Alignment with Agile Development Processes
Systems development used to be a slow process with long development cycles. However, the advent and adoption of approaches such as DevOps and Agile practices within organisations often means that development teams are using Continuous Deployment and other mechanisms to deliver multiple code deployments per day.
A key advantage of a vulnerability scanner is that since it is an automated tool it can be trivially integrated into DevOps and CI/CD pipelines. It can then execute scans of test and staging environments as frequently as on every code deploy, allowing vulnerabilities to be detected and remediated before they even make it to production.
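As a sketch of how such a pipeline gate might look: the scanner runs against a test or staging environment, and a small wrapper fails the build if any high-severity findings are reported. The findings structure and severity labels below are hypothetical, for illustration only, and not the output format of any specific scanner:

```python
def gate_deployment(findings, fail_on=frozenset({"critical", "high"})):
    """Return (ok, blockers): fail the pipeline stage if the scan reported
    any finding at or above the configured severity threshold."""
    blockers = [f for f in findings if f["severity"] in fail_on]
    return (len(blockers) == 0, blockers)

# Illustrative findings, shaped as a scanner *might* report them
findings = [
    {"id": "CVE-2021-44228", "severity": "critical", "component": "log4j-core"},
    {"id": "SSL-WEAK-CIPHER", "severity": "low", "component": "load balancer"},
]
ok, blockers = gate_deployment(findings)
if not ok:
    # In a real pipeline this would exit non-zero to block the deploy
    print(f"Deployment blocked: {len(blockers)} unresolved finding(s)")
```

The design point is that the gate is automated and cheap to run on every deploy, so vulnerable code never progresses to production without a deliberate, visible decision.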
In contrast to approaches that are based on “static analysis” of source code (known as SAST), vulnerability scanning is conducted by performing active scans of live copies of running applications, interacting with them directly in exactly the same way as a customer (or attacker) does.
One of the key advantages of this approach is that the scanner doesn’t care *which* part of the application it’s scanning is vulnerable, only that a vulnerability exists. Whereas SAST can only detect vulnerabilities in source code, a vulnerability scanner such as AppCheck can detect vulnerabilities wherever they exist. This includes underlying configuration errors on the host that the application is running on, as well as in server software and network components such as web application firewalls, routers, and load balancers.
Scanning your entire web infrastructure regularly ensures that any misconfigurations introduced in systems or services are detected swiftly, just as vulnerabilities in code are.
It’s possible for new vulnerabilities to appear even when nothing has been deliberately changed and no new code has been deployed. This can occur when a new vulnerability is discovered or published in existing software that is in service, or when the behaviour of a given resource changes even though no explicit change action has been performed. (SSL certificates can expire or be revoked; domain registrations can expire, permitting domain takeover; and products can go End of Life (EOL) and cease to receive ongoing critical security updates or advisories.)
Performing scanning regularly ensures a greater chance that these issues are detected early. Even when an organisation may believe that no new vulnerabilities can have been introduced because no new code has been deployed.
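Certificate expiry is a concrete example of this kind of drift, and one that is straightforward to check. The sketch below uses Python’s standard ssl and socket modules to fetch a host’s live certificate and compute the days remaining; the 30-day warning threshold is an arbitrary choice for illustration:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Parse an X.509 'notAfter' timestamp (as returned by ssl.getpeercert())
    and return the number of whole days until it lapses."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def check_host(hostname, port=443, warn_days=30):
    """Fetch the live certificate for a host and flag imminent expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    remaining = days_until_expiry(cert["notAfter"])
    return remaining, remaining < warn_days
```

Run on a schedule, a check like this surfaces an expiring certificate weeks before it becomes an outage, even though no code or configuration has changed.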
Forensics & Exploit Detection
Vulnerability scanning primarily performs a vulnerability identification function: it aims to detect vulnerabilities in exposed systems and services that are present due to weaknesses in software code or configuration, so that they can be remediated before an attacker can exploit them. However, whilst not their primary function, vulnerability scans can provide a useful secondary control to detect what are known as “indicators of compromise”: evidence of exploits by attackers that have already occurred. This might present as new and unexpectedly open ports or services on your systems, or potential malware presence that may represent a breach in progress.
Cybercriminals spend an average of 191 days inside a corporate network before they are detected, according to a 2018 IBM research report. During that time they can attempt to compromise an increasing number of systems and exfiltrate large amounts of data. Faster reaction to breaches limits the potential harm to your organisation and its customers. Scanning more frequently can let you spot signs of potential exploit earlier.
Included Dynamic Third-Party Code
Retesting After Fixing
An advantage of vulnerability scanning is that it allows easy rescanning after a vulnerability has been fixed. This verifies, and provides assurance, that the claimed fix has in fact been effective in remediating the vulnerability. In 2017, Equifax experienced a major data breach involving the theft of sensitive data relating to 145 million customers. Subsequent investigations uncovered indications that Equifax staff were aware of the requirement to patch their systems against a known and published vulnerability. However, they failed to adequately retest systems after applying fixes to ensure that all affected systems had been remediated fully and effectively.
If an organisation only performs a scan annually or quarterly, it may be failing to verify that vulnerabilities discovered in previous scans, and believed to have been remediated, are genuinely resolved and present no further risk.
The bottom line is that performing regular vulnerability scans – perhaps more often than you might have previously considered appropriate – provides a consistent visibility into your vulnerability landscape. It can provide the basis for a consistent and manageable workload and rhythm for your team. At the same time as reducing risk for your customers by minimising the duration of attack windows and reducing the chance of potential exploit.
One of the key challenges organisations are currently struggling with is an increase in Evasive Phishing. Impersonation Attacks and Business Email Compromise are also a problem. All of these are getting past traditional gateway and perimeter security solutions.
The sophistication of these attacks makes them increasingly successful in avoiding detection and fooling your employees. This includes those who’ve been through Security Awareness and Training (SAT) programs. Obviously this puts companies at significant financial risk from imposter attacks.
Therefore to secure against phishing attacks, consider another critical layer of security, where it’s needed – right in the user mailbox.
Cyren Inbox Security is software that connects into Office 365. It continuously monitors the inbox for phishing attacks that have been missed by the Secure Email Gateway (SEG).
It is an Inbox Detection & Response (IDR) solution that allows organisations to establish a critical layer of email security at the inbox, strengthening your overall security posture.
It’s not a competitor to Secure Email Gateways but a complementary solution. It helps to significantly improve the detection of phishing/malicious emails that evade perimeter security and reside in the inbox.
Therefore, protecting your Office 365 mailboxes has never been this easy.
The solution takes less than 10 minutes to integrate and deploy. Cyren then automatically remediates malicious mail, reducing the time burden on internal teams.
One key feature is that during the POC stage Cyren will produce a free delta report detailing exactly what is being missed by the perimeter security.
When evasive phishing and other threats get past traditional security barriers, Cyren detects them and remediates automatically through:
Continuous monitoring of all emails in all folders in user mailboxes
Continuous scanning and real-time analysis of URLs and web pages
Ongoing analysis of email sender and recipient behaviour to detect anomalies and threat patterns
Front-line detection and reporting of new, emerging threats — powered by users
Cyren Inbox Security leverages the native API integration of Office 365, meaning there is no requirement to change existing security gateways or appliances. It continuously detects email threats that are delivered to user mailboxes, and its powerful set of automated remediation tools identifies and mitigates a wide range of malicious attacks that avoid detection by perimeter defences, including:
Evasive Phishing attacks using techniques such as delayed URL activation, URLs hidden in attachments, HTML obfuscation, sophisticated encryption, real and valid SSL certificates, etc.
Spear phishing and spoofed messages that carry no payload to detect
BEC, CEO fraud, and other targeted social engineering attacks
New zero-day phishing campaigns
Account takeovers (credential theft) and monitoring of internal email
Quick Two-Step Deployment
Cyren Inbox Security is a non-intrusive security solution-as-a-service. It complements your existing secure email gateway without the need for MX record changes or any changes to current infrastructure. Get up and running in just a few clicks — simply:
1) Authorise Cyren to access your email flow, and then
2) Configure your preferred filtering and remediation policies, including flexibly applying different rules-based policies to different users and groups.
More than 1.3 billion users around the world rely on Cyren’s cloud security solutions to protect them against cyber-attacks every day. Powered by the world’s largest security cloud, Cyren delivers fast time-to-protection with embedded threat detection, threat intelligence and email security solutions.
With the increased adoption of Microsoft 365, many organisations assume that data backup is included. As a platform, Microsoft 365 is secure; however, your data isn’t backed up in the way you would require. Microsoft will not cover any data loss caused by your own internal errors, malicious actions, ransomware, or any other cybercrime event.
Microsoft (and other global SaaS vendors like Salesforce) don’t take any responsibility for your data, nor for how you use their application. They only take responsibility for their own infrastructure and operation. In other words, the main reasons for losing data in the cloud are not covered by Microsoft. Plus, it can take days to recover from a ransomware attack. Don’t take chances with the data your business relies on: you need to secure and back up your data.
Microsoft themselves even recommend that you seek third-party backup and recovery for Office 365 data. A good backup with quick recovery is critical after a ransomware attack or accidental deletion. Restoring sites, entire folders, or mailboxes can be a tedious, manual process. If the data you need is mission-critical, that means costly downtime you can’t afford.
Compliance is also one of the key reasons behind the adoption of Microsoft 365 backup solutions. Typically, there’s only a 30-day retention period built into Microsoft 365. Microsoft SharePoint Online is only backed up every 12 hours, and then only within a 14-day retention period. Many Microsoft 365 backup solutions are incredibly flexible, meaning you can keep your email data for as long as you need and tailor retention to meet your business’s compliance needs. One of the key compliance requirements, including under GDPR, is to ensure constant availability of your data.
Your Microsoft 365 Data is At Risk Without Backup.
A recent Gartner report sums it up: ‘By 2022, 70% of organisations will have suffered a business disruption due to unrecoverable data loss in a SaaS application’. In a survey of 1,000 IT pros, 81% said they had experienced data loss in Office 365, from causes ranging from simple user error to major data security threats.
To avoid losing data, your Microsoft 365 needs a third-party backup: one that stores the data at an independent location. It’s a data protection best practice.
Complete Microsoft 365 Coverage
Barracuda and Keepit both offer easy-to-use solutions. They both give you complete coverage across all workloads: Exchange, OneDrive, Teams, SharePoint, Groups and Public Folders.
No On-Premise Installation
Your Microsoft 365 data is already in the cloud, so it makes sense to avoid an unnecessary on-premise installation for its backup. Keeping secure, encrypted backups in the same network means better performance and instant scalability, and cloud backup means easy deployment with no on-premise installation. These two solutions are designed to integrate seamlessly with Microsoft cloud services, so you can run backups in the background without impacting how you use the platform. They offer fully automated backup with several backups per day, ensuring your backed-up data is always up to date.
Easy Find & Recover
It’s not just important to have data backup: it’s also vitally important to be able to recover that data easily, and in a format that is easy to use and understand.
In any case of data loss, finding and recovering is as easy as it gets with Barracuda and Keepit. They have unique find & recover features. You may find and recover anything from a single email to a full user account recursively. Finding and recovering has never been quicker whilst helping you meet compliance demands.
Secured and Encrypted
As for security, Keepit will back up your data to the global data centres of your choice. Your data is stored in two separate physical locations with the latest encryption technology, ensuring your data is backed up and secure. Barracuda retains three external copies of backed-up data.
October is Cybersecurity Awareness Month, now in its 18th year. Its primary focus continues to be raising awareness about the importance of cybersecurity and ensuring everyone has the resources they need to be safer and more secure online.
The Themes this year are:
Be Cyber Smart
Fight The Phish
Explore, Experience, Share (Cybersecurity Career Awareness Week)
KnowBe4 Resource Kit
To help you make the most of Cybersecurity Awareness Month, KnowBe4 have created a new free Resource Kit for this year, providing the resources you need to help your users defend against cybercrime from anywhere.
In today’s hybrid work environment, your users are more susceptible than ever to attacks like phishing & social engineering. Cybercriminals know this and are constantly changing tactics to exploit new vulnerabilities. Therefore, KnowBe4 have put together some resources so you can keep your users safe with security top of mind.
The kit includes:
Free resources, including their most popular on-demand webinar & whitepaper as well as Kevin Mitnick cybersecurity demo videos, infographics, tip sheets, awareness posters, and wallpapers
Cybersecurity Awareness Month Guide and Cybersecurity Awareness Weekly Planner to help you plan your activities
Two free training modules: “Your Role: Internet Security and You” and “2021 Social Engineering Red Flags”
Everything is printable and available digitally, so they can be delivered to your users no matter where they are working from
What is the difference between UBA vs UEBA and how does it fit in with SIEM?
User and Entity Behaviour Analytics (UEBA) focuses on analysing activity: specifically user behaviour, device usage, and security events within your network environment. It helps companies detect potential insider threats and compromised accounts. The concept has been around for some time; it was first defined in detail by Gartner in 2015 in its Market Guide for User and Entity Behavior Analytics.
How Does UEBA Work?
In essence, UEBA solutions create a baseline of standard behaviour for users and entities within a corporate network, then look for deviations from that baseline. They alert network admins or security teams to anything that could indicate a potential security threat.
To do this, UEBA solutions collect live data that includes:
User actions, such as applications used, interactions with data, keystrokes, mouse movement, and screenshots.
Activity on devices attached to the network, such as servers, routers, and data repositories.
Security events from supported devices and platforms.
Advanced analytical methods are then applied to this data to model the baseline of activity. Once this baseline of behaviour has been established, the UEBA solution continuously monitors behaviour on the network and compares it to the baseline, looking for behaviour that extends beyond an established activity threshold and alerting the appropriate teams to the detected anomaly.
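In its simplest form, this baseline-and-threshold logic can be sketched statistically. The example below is an illustrative assumption throughout (the download-volume metric, the sample figures, and the three-standard-deviation threshold are all made up for demonstration; real UEBA products use far richer models):

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Model 'normal' for one metric (e.g. MB downloaded per day) as mean and
    standard deviation over the observed history."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_anomalous(baseline, value, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from baseline."""
    if baseline["stdev"] == 0:
        return value != baseline["mean"]
    z = abs(value - baseline["mean"]) / baseline["stdev"]
    return z > threshold

# 30 days of a user's typical daily download volume, in MB
history = [48, 52, 50, 47, 53, 49, 51] * 4 + [50, 48]
baseline = build_baseline(history)
print(is_anomalous(baseline, 51))   # False: within the normal range
print(is_anomalous(baseline, 900))  # True: a sharp deviation worth alerting on
```

The real value of UEBA comes from doing this across many metrics and entities at once, so that an alert reflects combined context rather than a single outlying number.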
UBA vs UEBA and SIEM
Initially this technology was referred to simply as User Behaviour Analytics (UBA). As the name implies, the concept focused exclusively on activity at the user level to indicate potential threats. However, Gartner later added the “entity” to reflect the fact that “other entities besides users are often profiled in order to more accurately pinpoint threats”. Gartner defined these other entities as including managed and unmanaged endpoints, servers, and applications, whether cloud-based, mobile-based, or on-premise.
This expanded scope includes looking for any “suspicious” or anomalous activity in network traffic, or in requests sent from a specific endpoint to unusual ports or external IP addresses. It also looks at operating system process behaviour, privileged account activity on specific devices, the volume of information being accessed or altered, and the type of systems being accessed.
By broadening the scope of its focus to cover non-human processes and machine entities, Gartner’s UEBA definition means UEBA can analyse both sources of data. This helps to gain greater context and insight around activity. As a result it can produce a more accurate profile of the baseline of activity within an IT network.
Therefore, the solution is able to more accurately pinpoint anomalies and potential threats. This even includes things that would often have gone unnoticed by “traditional” security monitoring processes such as SIEM or DLP.
Do SIEM And UEBA Offer The Same Protection?
With many corporate security teams having already implemented security information and event management (SIEM) solutions, a common question is whether UEBA and SIEM offer the same protection. After all, they both collect security-related information that can indicate a potential or active threat.
UEBA solutions typically include the following benefits:
The ability to use behavioural baselining to accurately detect compromised user accounts.
Automation to create improved security efficiency.
The use of advanced behavioural analytics to help reduce the attack surface by frequently updating IT security staff and network admins about any potential weak points within the network.
The key difference is that SIEM solutions are traditionally more focused on log and event data, which wouldn’t allow you to create a standard baseline of overall user and network environment behaviour in the way that a UEBA-focused solution would. However, it’s important to note that, similar to UEBA solutions, the information gathered by SIEM solutions comes from a wide range of different IT network endpoints and is then collated and analysed within a central system.
Sound familiar? It should; the line between UEBA and SIEM can be rather thin, depending on the collection and analysis capabilities of a given SIEM solution.
With the right input data, the SIEM solution can process the collected data and combine it with real-time event analysis. It can then present it in a format that helps provide security analysts and system administrators with actionable insights into anomalies that may indicate a threat.
The use of SIEM solutions is becoming increasingly widespread within the corporate landscape because they offer organisations a number of important benefits, including:
Improved handling of cybersecurity incident and response.
Improved security defences.
The ability to automate compliance reporting to help organisations achieve compliance with the relevant regulations for their industry, e.g. GDPR, HIPAA, and PCI DSS.
To be able to more accurately predict potential threats through user and entity activity, SIEM solutions need to both:
a) Be able to collect the needed and relevant activity and behavioural data; and
b) Have the ability to accurately analyse that data, in the context of finding anomalous threat-related activity, to produce more targeted and actionable alerting.
As you can see, there are some differences between the two solutions. However, SIEM solutions become a viable option in an organisation’s journey to implement UEBA as long as SIEM solutions can:
Be set up to comprehensively collect enough similar data to provide the same value as a traditional UEBA solution; and
Provide the conclusive analysis needed to identify leading and active indicators of threat activity.
By Nick Cavalancia, Microsoft Cloud and Datacenter MVP, for AT&T.
AlienVault USM – UBA vs UEBA and SIEM
Traditional SIEM software solutions promise to provide what you need, but the path to get there is one that most of us can’t afford. Traditional SIEM solutions collect and analyse the data produced by other security tools and log sources, which can be expensive and complex to deploy and integrate. Plus, they require constant fine-tuning and rule writing.
AlienVault USM provides a different path. In addition to all the functionality of a world-class SIEM, AlienVault USM unifies the essential security capabilities needed for complete and effective threat detection, incident response, and compliance management—all in a single platform with no additional feature charges. Their focus on ease of use and rapid time to benefit makes the USM platform the perfect fit for organisations of all shapes and sizes. See here for more information.