Forget shadowy attackers deploying bespoke zero-day exploits from afar. For organizations embarking on ambitious digital transformation projects, a far more immediate risk is human error. In fact, “miscellaneous errors” accounted for 17% of data breaches last year, according to Verizon. When it comes to the cloud, one trend stands out above all others: misconfiguration. It’s responsible for the leaks of billions of records every year and remains a major threat to corporate security, reputation and the bottom line.
Mitigating this persistent human-shaped threat will require organizations to focus on gaining better visibility and control of their cloud environments – using automated tooling where possible.
How bad are cloud data leaks?
Digital transformation saved many organizations during the pandemic. And now it’s seen as the key to driving success as they exit the global economic crisis. Cloud investments sit at the heart of these projects – supporting applications and business processes designed to power new customer experiences and operational efficiencies. According to Gartner, global spending on public cloud services is forecast to grow 18.4% in 2021 to total nearly $305 billion, and then increase by a further 19% next year.
However, this opens the door to human error, as misconfigurations expose sensitive data to malicious actors. Sometimes those records contain personally identifiable information (PII), as in the leak affecting millions at a Spanish developer of hotel reservation software last year. Other times the data is arguably even more sensitive: just last month it emerged that a classified US terrorist watchlist had been exposed to the public internet.
The bad news for organizations is that threat actors are increasingly scanning for these exposed databases. In the past, they’ve been wiped and held to ransom, and even targeted with digital web skimming code.
The scale of these leaks is astonishing: an IBM study from last year found that over 85% of the 8.5 billion breached records reported in 2019 were due to misconfigured cloud servers and other improperly configured systems. That’s up from less than half in 2018. The figure is likely to keep on rising until organizations take action.
What’s the problem?
Gartner predicted that by 2020, 95% of cloud security incidents would be the customer’s fault. So where are customers going wrong? It boils down to a number of factors, including a lack of oversight, poor awareness of policies, an absence of continuous monitoring, and too many cloud APIs and systems to manage. The last is particularly acute as organizations invest in multiple hybrid cloud environments: an estimated 92% of enterprises today have a multi-cloud strategy, while 82% have a hybrid cloud strategy, ramping up complexity.
Cloud misconfigurations can take many forms, including:
- A lack of access restrictions. This includes the common issue of public access to AWS S3 storage buckets, which could allow remote attackers to access data and write to cloud accounts.
- Overly permissive security group policies. This could include making AWS EC2 servers accessible from the internet via SSH port 22, enabling remote attacks.
- A lack of permissions controls. Failure to limit users and accounts to least privilege can expose the organization to greater risk.
- Misunderstood internet connectivity paths.
- Misconfigured virtualized network functions.
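Misconfigurations like these can often be caught programmatically before attackers find them. As a minimal sketch, here is a check for security-group rules that leave SSH open to the whole internet. The rule dictionaries loosely mirror the shape of AWS EC2 ingress rules, but the field and function names are illustrative, not an exact AWS API schema:

```python
# Flag security-group ingress rules that leave SSH (port 22) open to the
# internet. The rule dicts loosely mirror AWS EC2 security-group rules;
# field names here are illustrative, not an exact AWS API schema.

def is_world_open_ssh(rule):
    """Return True if a rule allows 0.0.0.0/0 to reach port 22."""
    covers_ssh = rule["from_port"] <= 22 <= rule["to_port"]
    world_open = "0.0.0.0/0" in rule["cidr_blocks"]
    return covers_ssh and world_open

def audit_security_group(rules):
    """Return the subset of ingress rules that expose SSH publicly."""
    return [r for r in rules if is_world_open_ssh(r)]

ingress_rules = [
    {"from_port": 443, "to_port": 443, "cidr_blocks": ["0.0.0.0/0"]},   # fine: public HTTPS
    {"from_port": 22,  "to_port": 22,  "cidr_blocks": ["0.0.0.0/0"]},   # risky: SSH to world
    {"from_port": 22,  "to_port": 22,  "cidr_blocks": ["10.0.0.0/8"]},  # fine: internal only
]

for rule in audit_security_group(ingress_rules):
    print(f"RISK: ports {rule['from_port']}-{rule['to_port']} open to {rule['cidr_blocks']}")
```

In a real environment the same logic would run against rules pulled from the cloud provider’s API rather than a hard-coded list.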
Shadow IT can also increase the chances of the above happening, as IT will not know whether cloud systems have been configured correctly or not.
How to fix cloud misconfiguration
The key for organizations is to automatically find and fix any issues as quickly as possible. Yet they’re failing. According to one report, an attacker can detect misconfigurations within 10 minutes, but only 10% of organizations remediate these issues within that time. In fact, almost half (45%) of organizations are fixing misconfigurations anywhere between one hour and one week later.
So what can be done to improve things? The first step is understanding the shared responsibility model for cloud security. This denotes which tasks the cloud service provider (CSP) will take care of and what falls under the remit of the customer. While CSPs are responsible for security of the cloud (hardware, software, networking and other infrastructure), customers must take on security in the cloud, which includes configuration of their assets.
Once this is established, here are a few best practice tips:
- Limit permissions: Apply the principle of least privilege to users and cloud accounts, thereby minimizing risk exposure.
- Encrypt data: Apply strong encryption to business-critical or highly regulated data to mitigate the impact of a leak.
- Check for compliance before provisioning: Prioritize infrastructure-as-code and automate policy configuration checks as early as possible in the development lifecycle.
- Continuously audit: Cloud resources are notoriously ephemeral and changeable, while compliance requirements evolve over time. That makes continuous configuration checks against policy essential. Consider Cloud Security Posture Management (CSPM) tools to automate and simplify this process.
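The least-privilege tip above can also be enforced mechanically. As an illustrative sketch, the check below flags policy statements that grant wildcard permissions. The policy shape mirrors an AWS IAM policy document, but this is a simplified detector, not a full IAM validator:

```python
# Flag IAM-style policy statements that grant wildcard permissions,
# violating least privilege. The policy shape mirrors an AWS IAM policy
# document, but this is an illustrative sketch, not a full validator.

def overly_permissive(policy):
    """Return Allow statements whose Action or Resource is a bare '*'."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM permits a bare string as well as a list; normalize to lists
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            findings.append(stmt)
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::reports/*"},          # scoped: acceptable
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # admin-level grant
    ],
}

for stmt in overly_permissive(policy):
    print(f"FLAGGED: {stmt}")
```

Running a check like this before a policy is attached to a user or role is far cheaper than discovering the over-grant after a breach.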
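Continuous auditing, the last tip above, amounts to repeatedly comparing the live state of cloud resources against a policy baseline and reporting any drift. The sketch below shows the core comparison; the baseline keys and the snapshot are hypothetical stand-ins for what a CSPM tool or cloud API would actually return:

```python
# Sketch of the core of a continuous audit: compare a snapshot of cloud
# resource settings against a policy baseline and report drift.
# Baseline keys and the snapshot are hypothetical stand-ins for data a
# CSPM tool or cloud provider API would supply.

POLICY_BASELINE = {
    "s3_public_access_blocked": True,
    "encryption_at_rest": True,
    "ssh_open_to_world": False,
}

def find_drift(snapshot, baseline=POLICY_BASELINE):
    """Return {setting: (expected, actual)} for every deviation."""
    return {
        key: (expected, snapshot.get(key))
        for key, expected in baseline.items()
        if snapshot.get(key) != expected
    }

# A snapshot as a (hypothetical) collector might return it:
snapshot = {
    "s3_public_access_blocked": True,
    "encryption_at_rest": False,   # drifted from the baseline
    "ssh_open_to_world": False,
}

for setting, (expected, actual) in find_drift(snapshot).items():
    print(f"DRIFT: {setting}: expected {expected}, found {actual}")
```

A production CSPM tool runs this kind of comparison on a schedule or on every configuration change, and pairs each finding with automated or guided remediation.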
With the right strategy in place, you’ll be able to manage cloud security risk more effectively and free up staff to be more productive elsewhere. As threat actors get better at finding exposed cloud data, there’s no time to waste.