- August 30, 2023
- 6 min read
Threat Models: Accidental Cloud Misconfiguration
In the third part of the threat model series, we will cover threats arising from accidental cloud misconfiguration. If you’re just joining us, we strongly recommend you read the first part of the series, which introduces the concept of threat modeling and defines the relevant key terms and tech stack for the series.
Case Study Context:
Our case study considers an internet-facing application handling sensitive health and credit card information. The application is a typical three-layer architecture with web, application, and data storage layers and is hosted in a public cloud.
Understanding the Cloud Threat:
Accidental misconfiguration is one of the most common causes of data breaches; the 2023 Verizon DBIR and IBM Cost of a Data Breach reports attribute roughly 7–15% of data breaches to this threat.
Root Causes of Misconfigurations:
Breaches in this category are often attributed to:
- Lack of Familiarity: Inadequate familiarity with a particular technology leads to poorly executed configurations.
- Click-Ops: Inconsistent, manual changes made without template-based processes pose risks, particularly when adopting new cloud services.
- Complexity: The intricacies of cloud implementations can lead to security settings being inadvertently overlooked.
- Misconfiguration Visibility: A lack of visibility into misconfigurations can lead to gaps in best practices and security protocols.
- Misinterpretation of Settings: Misunderstanding configuration settings can create a false sense of security.
It's important to note that this threat model focuses on accidental misconfigurations rather than malicious intent.
Threat Actors and Vulnerabilities:
In the context of accidental cloud misconfigurations, the primary actors involve System/Cloud Admins and Developers/Engineers. The vulnerabilities that open the door to breaches include:
- Excessive IAM Privileges: IAM policies are complex and can easily be written with overly permissive privileges (a minimal detection sketch follows this list).
- Exposed Data Storage: Data storage repositories and databases can inadvertently be made accessible over the internet.
- Insecure Data Storage: Sensitive data is stored in plaintext.
- Limited Encryption: Data storage relies only on default encryption, such as S3 Server-Side Encryption and default RDS encryption, which mitigates theft of the physical disk but not application-layer access.
- Suboptimal Authentication: Poorly configured authentication methods are susceptible to phishing, social engineering, and brute-force attacks.
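To make the excessive-privileges point concrete, here is a minimal sketch (in Python with boto3, an assumption about tooling rather than anything prescribed above) that flags customer-managed IAM policies allowing any action on any resource. The `flag_wildcard_policies` helper is hypothetical, and a real review would also cover inline policies, roles, and condition keys.

```python
# Minimal sketch: flag customer-managed IAM policies that allow "*" actions
# on "*" resources. Assumes boto3 credentials with iam:ListPolicies and
# iam:GetPolicyVersion. Helper name is illustrative, not a standard API.
import boto3


def flag_wildcard_policies():
    iam = boto3.client("iam")
    flagged = []
    paginator = iam.get_paginator("list_policies")
    for page in paginator.paginate(Scope="Local"):  # customer-managed only
        for policy in page["Policies"]:
            version = iam.get_policy_version(
                PolicyArn=policy["Arn"],
                VersionId=policy["DefaultVersionId"],
            )
            statements = version["PolicyVersion"]["Document"].get("Statement", [])
            if isinstance(statements, dict):  # single-statement policies
                statements = [statements]
            for stmt in statements:
                actions = stmt.get("Action", [])
                resources = stmt.get("Resource", [])
                actions = [actions] if isinstance(actions, str) else actions
                resources = [resources] if isinstance(resources, str) else resources
                if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
                    flagged.append(policy["PolicyName"])
                    break
    return flagged


if __name__ == "__main__":
    for name in flag_wildcard_policies():
        print(f"Overly permissive policy: {name}")
```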
Attack Vectors:
- Internet-accessible interfaces provide direct access to data and functionality; publicly exposed storage is a common example (see the sketch below).
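As a concrete check against the exposed-storage vector, the sketch below lists S3 buckets and reports any whose bucket policy is public or whose Block Public Access settings are not fully enabled. It assumes standard boto3 credentials and calls; it is illustrative only and does not cover ACLs, access points, or non-S3 data stores.

```python
# Minimal sketch: report S3 buckets that look publicly accessible.
# Assumes boto3 credentials with s3:ListAllMyBuckets, s3:GetBucketPolicyStatus,
# and s3:GetBucketPublicAccessBlock. Illustrative only; not a full audit.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # A bucket policy flagged as public is a strong signal of exposure.
    try:
        status = s3.get_bucket_policy_status(Bucket=name)
        is_public_policy = status["PolicyStatus"]["IsPublic"]
    except ClientError:
        is_public_policy = False  # no bucket policy attached

    # Missing or partial Block Public Access settings are worth reviewing.
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no Block Public Access configuration

    if is_public_policy or not fully_blocked:
        print(f"Review bucket: {name} (public policy: {is_public_policy}, "
              f"block public access fully enabled: {fully_blocked})")
```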
Effective Mitigations:
To effectively counter the accidental cloud misconfiguration threat, consider implementing the following robust mitigation strategies:
- Data Encryption: Encrypt all sensitive data at rest and in transit at the field level, using tools from companies like Evervault. The data should only be decrypted and available to specific applications and authorized roles (a simple field-level encryption sketch follows this list).
- Understand Encryption at Rest: Be diligent about understanding what encryption at rest really provides “out of the box”; it is typically a tick-box exercise that offers little more than protection of the physical disk. If you have the technical sophistication, process diligence, and risk appetite, you can implement your own encryption program and associated key-management processes.
- Cloud Security Posture Management (CSPM): Implement a CSPM solution and monitoring/response procedures to ensure complete cloud inventory and configuration visibility. The latter is critical; otherwise, expensive tooling and high-fidelity information will be wasted.
It could be argued that native tooling, such as Amazon GuardDuty or Microsoft Defender for Cloud, has the best visibility and the fastest adaptation to new CSP configurations and services. However, external third parties like Wiz and Qualys also bring value to the space with multi-cloud support.
Many of these tools provide basic alerting that should be monitored and acted on:
- Data stores directly exposed to the internet
- Instances / Compute directly exposed to the internet
- High Risk / Commonly Attacked ports directly exposed to the internet (RDP, SSH, FTP, HTTP)
- Systems with known exploitable vulnerabilities
- Overly Permissive IAM Permissions
- Deprecated / overly permissive Network Security Rules
- Evidence of a compromised system
- Be Consistent: Implementing consistent ways of working allows organizations to recover from an incident more easily, because a change can be fully understood quickly and rolled back. This is achieved by standardizing the CI/CD process and having consistent approval and rollback processes for production changes to product software or infrastructure as code. Allowing individual users to make unmanaged changes to production environments in isolation invites disaster.
- Logging: Implement sufficiently detailed logging to allow for the reconstruction of the events that led to a breach. Native tooling from AWS, like CloudWatch and CloudTrail, is an excellent place to start ingesting infrastructure and application logs.
- Monitoring: Implement monitoring to identify unusual patterns of behavior and notify security teams. There are many SIEMs on the market that enable deep visibility into user behavior, complemented by native threat-detection services like Amazon GuardDuty.
- Data Loss Prevention (DLP): Network and endpoint DLP solutions can help organizations identify and block sensitive data leaving the organization. Endpoint DLP has become even more important given the shift to remote work.
- Policies: Document policies and procedures that cover best practices, such as those in PCI DSS, ISO 27001, or NIST, and publish them to staff.
- Incident Response: Develop an incident response plan and practice it to ensure that relevant roles understand their responsibilities in the event of an incident. AWS publishes excellent playbooks for AWS environments that can be used as a starting point for specific IR scenarios.
- Access Control / Authorization: Implement strict access control policies. Using a least privilege policy will limit users to only the resources necessary for their job role; this should include data and functionality. Implementation will depend on the way the application handles authentication and authorization.
- Two-Factor Authentication (2FA): Implement 2FA to validate the identity of staff with privileged access, ideally using hardware keys that support FIDO U2F, such as those available from Yubico.
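To complement the Data Encryption mitigation above, the sketch below shows the general shape of field-level encryption in Python using the cryptography library's Fernet primitive. It is a generic illustration, not Evervault's API: the record and `SENSITIVE_FIELDS` names are hypothetical, and in production the key would come from a KMS or dedicated key-management service rather than application code.

```python
# Minimal sketch of field-level encryption: only designated sensitive fields
# are encrypted, so the rest of the record stays usable by the application.
# Generic illustration using the cryptography library; not Evervault's API.
from cryptography.fernet import Fernet

# In production this key would come from a KMS / key-management service,
# never be hard-coded, and would be rotated on a schedule.
key = Fernet.generate_key()
fernet = Fernet(key)

SENSITIVE_FIELDS = {"card_number", "diagnosis"}  # hypothetical field names


def encrypt_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields encrypted."""
    return {
        field: fernet.encrypt(value.encode()).decode() if field in SENSITIVE_FIELDS else value
        for field, value in record.items()
    }


def decrypt_field(record: dict, field: str) -> str:
    """Decrypt a single field for an authorized caller."""
    return fernet.decrypt(record[field].encode()).decode()


patient = {"name": "A. Example", "card_number": "4242424242424242", "diagnosis": "..."}
stored = encrypt_record(patient)             # what lands in the database
print(decrypt_field(stored, "card_number"))  # only authorized code paths do this
```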
Cloud security is one of the fastest-growing areas of the security domain. The speed at which internal development and engineering teams iterate and release makes staying on top of cloud configuration a tough challenge. At Evervault, we rely on a mindset of asking ourselves: “Is the data secure?”
We know that’s what the attackers are after, and if we can encrypt data and prevent both the data and keys from falling into criminal hands, we know this solves a big part of the problem. Mastery over cloud security is an ongoing journey that requires dedication, adaptability, and a keen understanding of the evolving threat landscape.
Head of Compliance