```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case "Ready" and require_secure_transport.value equal ignore case "ON" and (tls_version.value does not equal ignore case "TLSV1.2" and tls_version.value does not equal ignore case "TLSV1.3" and tls_version.value does not equal ignore case "TLSV1.2,TLSV1.3" and tls_version.value does not equal ignore case "TLSV1.3,TLSV1.2")```
Azure MySQL database flexible server using insecure TLS version This policy identifies Azure MySQL database flexible servers that use an insecure TLS version. Enforcing TLS connections between the database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. As a security best practice, it is recommended to use the latest TLS version for the Azure MySQL database flexible server. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to the Azure Database for MySQL flexible servers dashboard\n3. Click on the reported MySQL flexible server\n4. Click on 'Server parameters' under 'Settings'\n5. In the search box, type in 'require_secure_transport' and make sure VALUE is set to 'ON' if it is not already set.\n6. In the search box, type in 'tls_version' and set VALUE to TLSV1.2 or above.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-flexible-server' AND json.rule = properties.state equal ignore case Ready and properties.network.publicNetworkAccess equal ignore case Enabled and firewallRules[?any(properties.startIpAddress equals 0.0.0.0 and properties.endIpAddress equals 255.255.255.255)] exists```
Azure PostgreSQL database flexible server configured with overly permissive network access This policy identifies Azure PostgreSQL database flexible servers that are configured with overly permissive network access. It is highly recommended to create the PostgreSQL database flexible server with private access (VNet Integration) to help secure access to the server; if a firewall rule is used instead, restrict it to only a known set of IPv4 addresses or IPv4 address ranges. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the PostgreSQL database flexible server is intended to be accessed by authorized public IPs, restrict the firewall rules to a known list of IP addresses and make sure the IP range '+ Add 0.0.0.0 - 255.255.255.255' is not in the Firewall rules. \nTo add or remove IPs, refer to the below URL:\nhttps://docs.microsoft.com/en-gb/azure/postgresql/flexible-server/how-to-manage-firewall-portal#manage-existing-firewall-rules-through-the-azure-portal\n\nTo create a new PostgreSQL database flexible server with Private access (VNet Integration), refer to the below URL:\nhttps://docs.microsoft.com/en-gb/azure/postgresql/flexible-server/quickstart-create-server-portal\n\nNote: Once the PostgreSQL database flexible server is created, you can't change the connectivity method. For example, if you select Public access (allowed IP addresses) when you create the server, you can't change to Private access (VNet Integration) after the server is created.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals Udp or protocol equals Icmp or protocol equals *) and ((destinationPortRange exists and destinationPortRange is not member of (20, 21, 22, 23, 25, 53, 80, 135, 137, 138, 443, 445, 1433, 1434, 3306, 3389, 4333, 5432, 5500, 5900, *)) or (destinationPortRanges is not empty and destinationPortRanges[*] is not member of (20, 21, 22, 23, 25, 53, 80, 135, 137, 138, 443, 445, 1433, 1434, 3306, 3389, 4333, 5432, 5500, 5900, *))) )] exists```
Azure Network Security Group allows all traffic on ports which are not commonly used This policy identifies Azure Network Security Group which allow all traffic on ports which are not commonly used. Ports excluded from this policy are 20, 21, 22, 23, 25, 53, 80, 135, 137, 138, 443, 445, 1433, 1434, 3306, 3389, 4333, 5432, 5500 and 5900. As a best practice, restrict ports solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = encryptionConfig does not exist or (encryptionConfig exists and encryptionConfig[*].provider.keyArn does not exist and encryptionConfig[*].resources[*] does not contain secrets)```
AWS EKS cluster does not have secrets encryption enabled This policy identifies AWS EKS clusters that do not have secrets encryption enabled. AWS EKS cluster secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with direct access to etcd or with API access can retrieve or modify the secrets. Using secrets encryption for your Amazon EKS cluster allows you to protect sensitive information such as passwords and API keys using Kubernetes-native APIs. It is recommended to enable secrets encryption to reduce the risk of unauthorized access or data breaches. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable secrets encryption on existing AWS EKS clusters, follow the below URL:\nhttps://docs.aws.amazon.com/eks/latest/userguide/enable-kms.html.
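The remediation can also be scripted. Below is a minimal, hypothetical boto3 sketch of associating a KMS-backed encryption config with an existing cluster; the cluster name and KMS key ARN are placeholders, not values from this dataset.

```python
# Hypothetical sketch: enable envelope encryption of Kubernetes secrets on an
# existing EKS cluster with boto3. Cluster name and KMS key ARN are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

response = eks.associate_encryption_config(
    clusterName="example-cluster",  # placeholder cluster name
    encryptionConfig=[
        {
            "resources": ["secrets"],  # encrypt Kubernetes secrets in etcd
            "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"},
        }
    ],
)
print(response["update"]["status"])  # the association is applied asynchronously
```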
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'monitoringService does not exist or monitoringService equals none'```
GCP Kubernetes Engine Clusters have Cloud Monitoring disabled This policy identifies Kubernetes Engine Clusters that have Cloud Monitoring disabled. Enabling Cloud Monitoring will let Kubernetes Engine monitor signals and build operations in the clusters. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to 'Kubernetes Engine' (Left Panel)\n3. Select 'Clusters'\n4. From the list of clusters, click on the reported cluster\n5. Under 'Features', click on the edit button (pencil icon) in front of 'Cloud Monitoring'\n6. In the 'Edit Cloud Monitoring' dialog, enable the 'Enable Cloud Monitoring' checkbox\n7. Click on 'Save Changes'.
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = instanceId contains "[RantiAWS" ```
Chaitu EC2 instance policy This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains "[email protected]" and roles[*] contains "roles/editor" as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = (status equals RUNNING and name does not start with "gke-") and serviceAccounts[?any( email contains "[email protected]")] exists as Y; filter '$.Y.serviceAccounts[*].email contains $.X.user'; show Y;```
GCP VM instance configured with default service account This policy identifies GCP VM instances configured with the default service account. To defend against privilege escalation if your VM is compromised, and to prevent an attacker from gaining access to your entire project, it is recommended not to use the default Compute Engine service account because it has the Editor role on the project. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP Console\n2. Navigate to 'Compute Engine' and click on 'VM instances'\n3. Search for the alerted instance and click on the instance name\n4. To make a change, first stop the instance by clicking on 'STOP' from the top menu\n5. Click on 'EDIT' and go to the 'Service account' section\n6. From the dropdown, select a non-default service account\n7. Click on 'Save'\n8. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = 'type equals data and deleteWithInstance is true'```
Alibaba Cloud data disk is configured with release disk with instance feature This policy identifies data disks which are configured with release disk with instance feature. As a best practice, disable release disk with instance feature to prevent irreversible data loss from accidental or malicious operations. Note: This attribute applies to data disks only. However, it can only restrict the manual release operation, not the release operation by Alibaba Cloud. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'\n4. Select the reported data disk\n5. Select More and click on Modify Disk Property\n6. On Modify Disk Property popup window, Uncheck 'Release Disk with Instance' checkbox\n7. Click on 'OK'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```
Info of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace; if ACLs and the bucket policy are not handled properly, you may be at risk of compromising critical data by leaving the S3 bucket public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If the Access Control List is set to 'Public', follow the below steps\na. Under 'Access Control List', click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If the 'Bucket Policy' is set to public, follow the below steps\na. Under 'Bucket Policy', select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wildcard.\nIf the 'Bucket Policy' is not required, delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating the 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.
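An equivalent programmatic fix is to turn on the bucket-level public access block, which the query above also inspects. A minimal, hypothetical boto3 sketch follows; the bucket name is a placeholder, and legitimate public access (e.g. static websites) should be verified first.

```python
# Hypothetical sketch: block public ACLs and public bucket policies on one bucket.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-bucket",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,        # existing public ACL grants are ignored
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,   # public bucket policies stop granting access
    },
)
```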
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.encryption.status equal ignore case disabled```
Azure Container Registry not encrypted with Customer Managed Key (CMK) This policy identifies Azure Container Registries that are not encrypted with Customer-Managed Keys (CMK). By default, Azure Container Registry encrypts data at rest with Microsoft-managed keys. However, for enhanced control, regulatory compliance, and improved security, customer-managed keys enable organizations to encrypt Azure Container Registry data using Azure Key Vault keys that they create, own, and manage. Using CMK ensures that the encryption process aligns with organizational policies, allowing complete control over key lifecycle management, including rotation, access management, and retirement. As a security best practice, it is recommended to encrypt Azure Container Registries with Customer-Managed Keys (CMK). This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: CMK can only be enabled during the creation of a new Container Registry. Ensure the registry is on the Premium service tier, as CMK is only supported at this level.\n\n1. Create a new Container Registry\n2. Navigate to the Encryption tab during the creation process\n3. Select the option to enable Customer-Managed Key\n4. Fill in all other required details to complete the registry setup.
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy akceq This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-services-list' AND json.rule = services[?any(name contains containerscanning.googleapis.com and state contains ENABLED)] does not exist```
GCP GCR Container Vulnerability Scanning is disabled This policy identifies GCP accounts where GCR Container Vulnerability Scanning is not enabled. GCR Container Analysis and other third party products allow images stored in GCR to be scanned for known vulnerabilities. Vulnerabilities in software packages can be exploited by hackers or malicious users to obtain unauthorized access to local cloud resources. It is recommended to enable vulnerability scanning for images stored in Google Container Registry. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. For the reported account, navigate to the GCP service 'Container Registry'(Left Panel)\n3. Select the tab 'Settings'\n4. To enable the vulnerability scanning, click on the 'TURN ON' button..
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = state_description equal ignore case active and secret_type equal ignore case username_password and ( rotation.auto_rotate is false or (rotation.unit equal ignore case month and rotation.interval > 3) or (rotation.unit equal ignore case day and rotation.interval > 90))```
IBM Cloud Secrets Manager user credentials with rotation policy more than 90 days This policy identifies IBM Cloud Secrets Manager user credentials with a rotation policy of more than 90 days. IBM Cloud Secrets Manager allows you to securely store and manage user credentials (username and password) for accessing external services or applications. It provides a centralised way to store secrets, control their lifecycle, set expiration dates, and implement rotation policies. User credentials should be rotated to ensure that data cannot be accessed with an old password, which might have been lost, cracked, or stolen. It is recommended to establish a rotation policy for user credentials, ensuring that they are regularly rotated within a period of less than 90 days. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set a rotation policy for user credentials, follow the below steps:\n\n1. Log in to the IBM Cloud Console\n2. Click on the menu icon and navigate to 'Resource list'. From the list of resources, select the secret manager instance in which the reported secret resides under the security section.\n3. Select the secret.\n4. Under the 'Rotation' tab, enable 'Automatic secret rotation'.\n5. Set 'Rotation Interval' to less than 90 days.\n6. Set 'General password settings' according to the requirements.\n7. Click on 'Update'.
```config from cloud.resource where api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or maxPasswordAge !isType Integer or $.maxPasswordAge > 90 or maxPasswordAge equals 0'```
AWS IAM password policy does not expire in 90 days This policy identifies the IAM policies which does not have password expiration set to 90 days. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. check 'Enable password expiration' and enter a password expiration period.\n4. Click on 'Apply password policy'.
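For automation, the same setting can be applied with the IAM API. Below is a minimal, hypothetical boto3 sketch; the values other than the 90-day expiration are examples and should be aligned with your own password standard.

```python
# Hypothetical sketch: enforce a 90-day password expiration on the account
# password policy with boto3. Other values are illustrative examples.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MaxPasswordAge=90,            # expire passwords after 90 days
    MinimumPasswordLength=14,     # example value
    RequireNumbers=True,
    RequireSymbols=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
)
```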
```config from cloud.resource where api.name = 'azure-machine-learning-workspace' AND json.rule = 'properties.provisioningState equal ignore case Succeeded and properties.hbiWorkspace is true and properties.storageAccount exists' as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = 'totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist)' as Y; filter '$.X.properties.storageAccount contains $.Y.id'; show Y;```
Azure Storage Account storing Machine Learning workspace high business impact data is publicly accessible This policy identifies Azure Storage Accounts storing Machine Learning workspace high business impact data that are publicly accessible. The Azure Storage account stores machine learning artifacts such as job logs. By default, this storage account is used when you upload data to the workspace. An attacker could exploit a publicly accessible storage account to obtain the Machine Learning workspace's high business impact data and logs, and could breach the system by leveraging the exposed data. It is recommended to restrict storage account access to only the machine learning services, as per business requirements. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict Storage account access, refer to the below URL:\nhttps://learn.microsoft.com/en-gb/azure/storage/blobs/anonymous-read-access-configure?tabs=portal.
```config from cloud.resource where api.name = 'aws-iam-list-groups' as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any(Effect equals Allow and Action equals * and Resource equals * )] exists as Y; filter "($.X.inlinePolicies[*].policyDocument.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action any equal * ) or ($.X.attachedPolicies[*].policyArn intersects $.Y.policyArn)"; show X;```
AWS IAM Groups with administrator access permissions This policy identifies AWS IAM groups that have administrator access permissions assigned. This would allow all users under such a group to have administrative privileges. As a security best practice, it is recommended to grant least-privilege access, i.e. granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Navigate to IAM service\n3. Click on Groups\n4. Click on the reported IAM group\n5. Under 'Managed Policies', click on 'Detach Policy' for the policy that has excessive permissions and assign a limited-permission policy as required for the particular group\nOR\n6. Under 'Inline Policies', click on 'Edit Policy' or 'Remove Policy' and assign limited permissions as required for the particular group.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-certificate' AND json.rule = '_DateTime.ageInDays(validToDate) > -1'```
AWS Database Migration Service (DMS) has expired certificates This policy identifies expired certificates that are in AWS Database Migration Service (DMS). AWS Database Migration Service (DMS) Certificate service is the preferred tool to provision, manage, and deploy your DMS endpoint certificates. As a best practice, it is recommended to delete expired certificates. For more details: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL.ManagingCerts This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to 'AWS DMS' service\n4. Click on 'Certificates', Choose the reported certificate\n5. Make sure the reported certificate is already out of date from the 'Valid to' field\n6. Click on 'Delete', to delete the expired certificate..
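Step 5 above (checking the 'Valid to' date) can be done across all certificates in a region with a short script. A hypothetical boto3 sketch is below; the delete call is left commented so each ARN can be reviewed first.

```python
# Hypothetical sketch: list DMS certificates whose ValidToDate is in the past.
from datetime import datetime, timezone

import boto3

dms = boto3.client("dms", region_name="us-east-1")

for cert in dms.describe_certificates()["Certificates"]:
    if cert["ValidToDate"] < datetime.now(timezone.utc):
        print("expired:", cert["CertificateArn"])
        # dms.delete_certificate(CertificateArn=cert["CertificateArn"])  # uncomment to delete
```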
```config from cloud.resource where api.name = 'gcloud-container-describe-clusters' as X; config from cloud.resource where api.name = 'gcloud-compute-firewall-rules-list' as Y; filter '$.Y.network contains $.X.network and $.Y.sourceRanges contains 0.0.0.0/0 and $.Y.direction contains INGRESS and $.Y.allowed exists'; show Y;```
GCP Kubernetes Engine Clusters network firewall inbound rule overly permissive to all traffic This policy identifies Firewall rules attached to the cluster network which allows inbound traffic on all protocols from the public internet. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire cluster network. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to VPC network (Left Panel)\n3. Select Firewall rules\n4. Click on the reported firewall rule\n5. Click on the 'EDIT' button\n6. Change the 'Source IP ranges' other than '0.0.0.0/0'\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway' AND json.rule = (['properties.webApplicationFirewallConfiguration'] does not exist and ['properties.firewallPolicy'] does not exist) or (['properties.webApplicationFirewallConfiguration'].enabled is false and ['properties.firewallPolicy'] does not exist)```
Azure Application Gateway does not have the Web application firewall (WAF) enabled This policy identifies Azure Application Gateways that do not have Web application firewall (WAF) enabled. As a best practice, enable WAF to manage and protect your web applications behind the Application Gateway from common exploits and vulnerabilities. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'Application gateways', and select the application gateway you need to modify\n3. Select 'Web Application Firewall' under 'Settings'\n4. Change the 'Tier' to 'WAF' or 'WAF V2' and 'Firewall status' to 'Enabled'\n5. 'Save' your changes.
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains CreateVpc and $.X.filterPattern contains DeleteVpc and $.X.filterPattern contains ModifyVpcAttribute and $.X.filterPattern contains AcceptVpcPeeringConnection and $.X.filterPattern contains CreateVpcPeeringConnection and $.X.filterPattern contains DeleteVpcPeeringConnection and $.X.filterPattern contains RejectVpcPeeringConnection and $.X.filterPattern contains AttachClassicLinkVpc and $.X.filterPattern contains DetachClassicLinkVpc and $.X.filterPattern contains DisableVpcClassicLink and $.X.filterPattern contains EnableVpcClassicLink) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for VPC changes This policy identifies the AWS regions that do not have a log metric filter and alarm for VPC changes. Monitoring changes to VPCs will help ensure that resources and services are not unintentionally exposed. It is recommended that a metric filter and alarm be established for changes made to VPCs. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled that logs all management events in your account and is not set with the specific log metric filter and alarm. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi-region enabled with all management events captured) and click the 'Create Metric Filter' button.\n5. In the 'Define Logs Metric Filter' page, add the 'Filter pattern' value as \n{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\nand click on 'Assign Metric'\n6. In the 'Create Metric Filter and Assign a Metric' page, choose the Filter Name and Metric Details parameters according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1, specify metric details and condition details as required and click on 'Next'\n - In Step 2, select an SNS topic either by creating a new topic or using an existing SNS topic/ARN and click on 'Next'\n - In Step 3, select a name and description for the alarm and click on 'Next'\n - In Step 4, preview your data entered and click on 'Create Alarm'.
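The metric filter from step 5 can also be created with the CloudWatch Logs API. A hypothetical boto3 sketch is below; the log group, namespace and metric names are placeholders.

```python
# Hypothetical sketch: create the VPC-changes metric filter on a CloudTrail log group.
import boto3

logs = boto3.client("logs")

vpc_pattern = (
    "{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || "
    "($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || "
    "($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || "
    "($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || "
    "($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || "
    "($.eventName = EnableVpcClassicLink) }"
)

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder log group name
    filterName="VpcChanges",
    filterPattern=vpc_pattern,
    metricTransformations=[
        {"metricName": "VpcChangeCount", "metricNamespace": "CISBenchmark", "metricValue": "1"}
    ],
)
```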
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and listeners.* is not empty and listeners.*.protocol equal ignore case HTTP and ruleSets.*.items[?any(redirectUri.protocol equal ignore case https)] does not exist```
OCI Load balancer listener allows connection requests over HTTP This policy identifies Oracle Cloud Infrastructure (OCI) Load Balancer listeners that accept connection requests over HTTP instead of HTTPS, HTTP/2, or TCP protocols. Accepting connections over HTTP can expose data to potential interception and unauthorized access, as HTTP traffic is transmitted in plaintext. The OCI Load Balancer allows all traffic to be submitted over HTTPS, HTTP/2, or TCP, ensuring all communications are encrypted. These protocols provide encrypted communication channels, safeguarding sensitive information from eavesdropping, tampering, and man-in-the-middle attacks. As a security best practice, it is recommended to configure the listeners to accept connections through HTTPS, HTTP/2, or TCP, thereby enhancing the protection of data in-transit. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remediate, there are 2 options:\n- Update the existing Load balancer listener to redirect HTTP traffic to HTTPS by creating a Rule set.\n- Delete the existing associated listener and create a new listener with a protocol other than HTTP.\n\nTo redirect Load balancer HTTP traffic to HTTPS, follow:\n1. Log in to OCI console\n2. Open Networking -> Load Balancers\n3. Click on the reported load balancer to open the details page\n4. From the Resources pane, select 'Rule Sets' and then click on the 'Create Rule Set' button\n5. Choose a name for the Rule set and select 'Specify URL Redirect Rules'\n6. In the Redirect to section: set 'Protocol' to HTTPS and 'Port' to 443; choose other parameters as per your requirement.\n7. Click on 'Create'\n\nTo create a new listener with a protocol other than HTTP, follow:\n1. Log in to OCI console\n2. Open Networking -> Load Balancers\n3. Click on the reported load balancer to open the details page\n4. From the Resources pane, select 'Listeners' and then click on the 'Create Listener' button\n5. In the Create Listener dialog, select a 'Protocol' other than HTTP and other parameters as per your requirement.\n6. Click on 'Create'\n\nTo delete the existing listener, follow:\nhttps://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managinglisteners_topic-Deleting_Listeners.htm.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.adaptiveApplicationControlsMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with "ASC Default"))'```
Azure Microsoft Defender for Cloud adaptive application controls monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have adaptive application controls monitoring set to disabled. Adaptive Application Controls will make sure that only certain applications can run on your VMs in Microsoft Azure. This will prevent any malicious, unwanted, or unsupported software on the VMs. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Adaptive application controls for defining safe applications should be enabled on your machines' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals "ACTIVE" and shieldedInstanceConfig.enableVtpm is false```
GCP Vertex AI Workbench user-managed notebook has vTPM disabled This policy identifies GCP Vertex AI Workbench user-managed notebooks that have the Virtual Trusted Platform Module (vTPM) feature disabled. Virtual Trusted Platform Module (vTPM) validates guest VM pre-boot and boot integrity and offers key generation and protection. The vTPM’s root keys and the keys it generates can’t leave the vTPM, thus gaining enhanced protection from compromised operating systems or highly privileged project admins. It is recommended to enable the virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service (Left Panel)\n3. Under 'Notebooks', go to 'Workbench'\n4. Open the 'USER-MANAGED NOTEBOOKS' tab\n5. Click on the alerting notebook\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-vpc-peering-connections' AND json.rule = $.accepterVpcInfo.ownerId does not equal $.requesterVpcInfo.ownerId and $.status.code equals active```
AWS VPC allows unauthorized peering This policy identifies the VPCs which have unauthorized peering. The recommended best practice is to disallow VPC peering between two VPCs from different AWS accounts, as this potentially enables unauthorized access to private resources. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to AWS VPC console at https://console.aws.amazon.com/vpc/\n3. In the left navigation panel, select Peering Connection\n4. Choose the reported Peering Connection\n5. Click on Actions and select 'Delete VPC Peering Connection'\n6. click on Yes, Delete.
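The cross-account condition in the query above can be reproduced with a short script before deciding which peerings to remove. A hypothetical boto3 sketch follows; the delete call is commented out for review.

```python
# Hypothetical sketch: list active VPC peering connections whose accepter and
# requester belong to different AWS accounts, which is what this policy flags.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

peerings = ec2.describe_vpc_peering_connections(
    Filters=[{"Name": "status-code", "Values": ["active"]}]
)["VpcPeeringConnections"]

for pcx in peerings:
    if pcx["AccepterVpcInfo"]["OwnerId"] != pcx["RequesterVpcInfo"]["OwnerId"]:
        print("cross-account peering:", pcx["VpcPeeringConnectionId"])
        # ec2.delete_vpc_peering_connection(VpcPeeringConnectionId=pcx["VpcPeeringConnectionId"])
```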
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = deleteRetentionPolicy.blob.enabled is false and (kind does not equal ignore case FileStorage)```
Azure Storage account soft delete is disabled This policy identifies Azure Storage accounts that have soft delete disabled. Azure Storage may contain important access logs, financial data, personal and other secret information; if this data is accidentally deleted by a user or application, it could cause data loss or data unavailability. It is recommended to enable the soft delete setting in Azure Storage accounts. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Soft delete on your storage account, follow the below URL:\nhttps://learn.microsoft.com/en-gb/azure/storage/blobs/soft-delete-blob-enable?tabs=azure-portal#enable-blob-soft-delete.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = publicAccessType does not equal NoPublicAccess```
OCI Object Storage bucket is publicly accessible This policy identifies the OCI Object Storage buckets that are publicly accessible. Monitoring and alerting on publicly accessible buckets will help in identifying changes to the security posture and thus reduces risk for sensitive data being leaked. It is recommended that no bucket be publicly accessible. This is applicable to oci cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on the Edit Visibility\n5. Select Visibility as Private\n6. Click Save Changes.
```config from cloud.resource where cloud.accountgroup = 'Flowlog-sol' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains "sol-test" ```
Sol-test config policy This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = vpcoptions.securityGroupIds[*] exists as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[*].ipv4Ranges[*].cidrIp equals 0.0.0.0/0 or ipPermissions[*].ipv6Ranges[*].cidrIpv6 equals ::/0) as Y; filter '$.X.vpcoptions.securityGroupIds[*] contains $.Y.groupId'; show Y;```
AWS OpenSearch attached security group overly permissive to all traffic This policy identifies security groups attached to AWS OpenSearch that are overly permissive to all traffic. Security groups enforce IP-based access policies for OpenSearch. As a best practice, restrict traffic solely to known static IP addresses or CIDR ranges. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. If the reported Security Group indeed needs to restrict all traffic, follow the instructions below:\n1. Log in to the AWS console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on 'Inbound Rules'\n5. Remove the rule that has the 'Source' value of 0.0.0.0/0 or ::/0.
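Removing an open rule (step 5) can also be done via the EC2 API. Below is a minimal, hypothetical boto3 sketch; the group ID, protocol and port are placeholders, and the revoke call must match the existing rule exactly.

```python
# Hypothetical sketch: remove a 0.0.0.0/0 ingress rule from a security group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # the open rule being removed
        }
    ],
)
```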
```config from cloud.resource where api.name = 'aws-ecs-cluster' and json.rule = configuration.executeCommandConfiguration.logConfiguration.s3EncryptionEnabled exists and configuration.executeCommandConfiguration.logConfiguration.s3EncryptionEnabled is false```
AWS ECS Cluster S3 Log Encryption Disabled This policy alerts you when an AWS ECS cluster is detected with S3 log encryption disabled, potentially exposing sensitive data in your logs. By ensuring that the s3EncryptionEnabled field is set to true, you can enhance the security of your cloud environment by protecting log data from unauthorized access and maintaining compliance with data protection regulations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(3306,3306) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
GCP Firewall rule allows all traffic on MySQL DB port (3306) This policy identifies GCP Firewall rules which allow all inbound traffic on MySQL DB port (3306). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the MySQL DB port (3306) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'..
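Step 6 (narrowing the source IP ranges) can be scripted with the Compute Engine API. A hypothetical sketch using google-api-python-client and application-default credentials is below; the project ID, rule name and replacement CIDR are placeholders.

```python
# Hypothetical sketch: narrow a firewall rule's source range via the Compute API.
from googleapiclient.discovery import build

compute = build("compute", "v1")  # uses application-default credentials

compute.firewalls().patch(
    project="example-project",                    # placeholder project ID
    firewall="allow-mysql",                       # placeholder firewall rule name
    body={"sourceRanges": ["203.0.113.0/24"]},    # replace 0.0.0.0/0 with a known range
).execute()
```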
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any((Condition.ForAnyValue:IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0 or Condition.ForAnyValue:IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith lambda:)] exists```
AWS Lambda IAM policy overly permissive to all traffic This policy identifies AWS Lambda IAM policies that are overly permissive to all traffic. It is recommended that the Lambda should be granted access restrictions so that only authorized users and applications have access to the service. For more details: https://docs.aws.amazon.com/lambda/latest/dg/security-iam.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to AWS console\n2. Goto IAM Services\n3. Click on 'Policies' in left hand panel\n4. Search for the Policy for which the Alert is generated and click on it\n5. Under Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'lambda' Service, click to expand and perform following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-lambda-get-region-summary' AND json.rule = 'lambdaCodeSize.size > 67500000'```
AWS Lambda nearing availability code storage limit This policy identifies regions where Lambda is nearing the available code storage limit. AWS provides a reasonable starting amount of compute and storage resources that you can use to run and store functions. As a best practice, it is recommended to either remove the functions that you no longer use or reduce the code size of the functions that you do not want to remove. It will also help you avoid unexpected charges on your bill. NOTE: As per https://docs.aws.amazon.com/lambda/latest/dg/limits.html, the Lambda account limit per region is currently 75 GB. This policy will trigger an alert if the Lambda code storage per region reaches 90% (i.e. 67500000 KB) of the allocated limit. If you need more Lambda code storage per region, you can contact AWS for a service limit increase. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the AWS Lambda Dashboard\n4. Click on 'Functions' and review each Lambda function\n5. Either remove the functions that you no longer use or reduce the code size of the functions that you do not want to remove.
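The 90% threshold this policy uses can be checked directly against the account settings. A minimal, hypothetical boto3 sketch follows.

```python
# Hypothetical sketch: compare current Lambda code storage usage against the
# regional quota, mirroring the 90% threshold used by this policy.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

settings = lam.get_account_settings()
used = settings["AccountUsage"]["TotalCodeSize"]
limit = settings["AccountLimit"]["TotalCodeSize"]

print(f"{used / limit:.0%} of Lambda code storage used")
if used > 0.9 * limit:
    print("Nearing the code storage limit; remove unused functions or trim package sizes.")
```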
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' AND json.rule = restrictions.apiTargets does not exist```
GCP API key not restricting any specific API This policy identifies GCP API keys that are not restricting any specific APIs. API keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API keys to use (call) only APIs required by an application. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to google cloud console\n2. Navigate to 'Credentials', Under service 'APIs & Services'\n3. In the section 'API Keys', Click on the reported 'API Key Name'\n4. In the 'Key restrictions' section go to 'API restrictions'.\n5. Select the 'Restrict key' and from the drop-down, choose an API.\n6. Click 'SAVE'.\nNote: Do not set 'API restrictions' to 'Google Cloud APIs', as this option allows access to all services offered by Google cloud..
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'databaseVersion contains SQLSERVER and state equals RUNNABLE and (settings.databaseFlags[*].name does not contain "remote access" or settings.databaseFlags[?any(name contains "remote access" and value contains on)] exists)'```
GCP SQL server instance database flag remote access is not set to off This policy identifies GCP SQL server instances for which database flag remote access is not set to off. The remote access option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. 'Remote access' functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by off-loading query processing to a target. It is recommended to set the remote access database flag for SQL Server instance to off. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', click on 'ADD FLAG' in 'New database flag' section, choose the flag 'remote access' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Flags and parameters', choose the flag 'remote access' and set the value as 'off'\n6. Click on DONE\n7. Click on SAVE.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and not(customerManagedKey contains cryptoKeys)```
GCP Memorystore for Redis instance not encrypted with CMEK This policy identifies Memorystore for Redis instances not encrypted with CMEK. GCP Memorystore for Redis is a fully managed in-memory data store that simplifies Redis deployment and scaling while ensuring high availability and low-latency access. By using CMEK with a Redis instance, you retain complete control over the encryption keys protecting your sensitive data, ensuring that only authorized users with access to these keys can decrypt and access the information. Without CMEK, data is encrypted with Google-managed keys, which may not provide the level of control required for handling sensitive data in certain industries. It is recommended to encrypt Redis instance data using a Customer-Managed Encryption Key (CMEK). This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Encryption cannot be changed for existing Memorystore for Redis instances. A new Memorystore for Redis instance should be created to use CMEK for encryption.\n\nTo create a new Memorystore for Redis instance with CMEK encryption, please refer to the steps below:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the 'Memorystore for Redis' page\n2. Click on 'CREATE INSTANCE'\n3. Provide all the other details as per the requirements\n4. Under 'Security', under 'Encryption', select the 'Cloud KMS key' checkbox\n5. Select the KMS key you prefer\n6. Click on 'CREATE INSTANCE'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = serverBlobAuditingPolicy.properties.state equal ignore case Enabled and serverBlobAuditingPolicy.properties.storageEndpoint is not empty and (serverBlobAuditingPolicy.properties.retentionDays does not equal 0 and serverBlobAuditingPolicy.properties.retentionDays < 91)```
Azure SQL Server audit log retention is less than 91 days Audit Logs can help you find suspicious events, unusual activity, and trends. Auditing the SQL server, at the server-level, allows you to track all existing and newly created databases on the instance. This policy identifies SQL servers which do not retain audit logs for more than 90 days. As a best practice, configure the audit logs retention time period to be greater than 90 days. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'SQL servers' dashboard\n3. Select the SQL server instance you want to modify\n4. Select 'Auditing', and verify that 'Enable Azure SQL Auditing' is set\n5. If Storage is selected, expand 'Advanced properties'\n6. Set the Retention (days) setting is greater than 90 days or 0 for unlimited retention.\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireNumbers is false or requireNumbers does not exist'```
AWS IAM password policy does not have a number Checks to ensure that IAM password policy requires a number. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. check 'Require at least one number'.\n4. Click on 'Apply password policy'.
```config from cloud.resource where api.name = 'ibm-vpc-block-storage-volume' as X; config from cloud.resource where api.name = 'ibm-key-protect-registration' as Y;filter 'not($.Y.resourceCrn equals $.X.crn)' ; show X;```
API testing This is applicable to ibm cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = "configurations.value[?(@.name=='log_connections')].properties.value equals OFF or configurations.value[?(@.name=='log_connections')].properties.value equals off"```
Azure PostgreSQL database server with log connections parameter disabled This policy identifies PostgreSQL database servers for which the log_connections server parameter is not enabled. Enabling log_connections helps the PostgreSQL Database to log attempted connections to the server, as well as successful completion of client authentication. Log data can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under the 'Settings' block\n5. From the list of parameters, find 'log_connections' and set it to 'on'\n6. Click on the 'Save' button from the top menu to save the change.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and (Action contains SNS:Publish or Action contains sns:Publish) and (Condition does not exist or Condition all empty))] exists```
AWS SNS topic policy overly permissive for publishing This policy identifies AWS SNS topics that have SNS policy overly permissive for publishing. When a message is published, Amazon SNS attempts to deliver the message to the subscribed endpoints. To protect these messages from attackers and unauthorized accesses, permissions should be given to only authorized users. For more details: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#implement-least-privilege-access This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. Add the restrictive 'Condition' statement to the JSON editor to specify who can publish messages to the topic.\n9. Click on 'Save changes'.
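Step 8 (adding a restrictive Condition) can also be applied through the SNS API. The hypothetical boto3 sketch below replaces the topic's statements with a single publish statement scoped by AWS:SourceOwner; the topic ARN and account ID are placeholders, and any other statements you rely on should be preserved when adapting it.

```python
# Hypothetical sketch: restrict SNS:Publish on a topic to one account via a Condition.
import json

import boto3

sns = boto3.client("sns", region_name="us-east-1")
topic_arn = "arn:aws:sns:us-east-1:111122223333:example-topic"  # placeholder topic ARN

policy = json.loads(sns.get_topic_attributes(TopicArn=topic_arn)["Attributes"]["Policy"])

policy["Statement"] = [
    {
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        # Only the owning account may publish; adjust to your authorized principals.
        "Condition": {"StringEquals": {"AWS:SourceOwner": "111122223333"}},
    }
]

sns.set_topic_attributes(
    TopicArn=topic_arn, AttributeName="Policy", AttributeValue=json.dumps(policy)
)
```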
```config from cloud.resource where api.name = 'gcloud-domain-users' AND json.rule = isAdmin is false and isEnrolledIn2Sv is false and archived is false and suspended is false```
GCP Google Workspace User not enrolled with 2-step verification This policy identifies Google Workspace Users who do not have 2-Step Verification enabled. Enabling 2-Step Verification for Google Workspace users significantly enhances account security by adding an additional layer of authentication beyond just passwords. This reduces the risk of unauthorized access, protects sensitive data, and ensures compliance with security best practices. Implementing this measure strengthens overall organizational security and helps safeguard against potential cyber threats. It is recommended to enable 2-Step Verification for all users as it provides increased security for user account settings and resources. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Workspace users should be allowed to turn on 2-Step verification (2SV) before enabling 2SV. Follow the steps mentioned below to allow users to turn on 2SV.\n1. Sign in to Workspace Admin Console with an administrator account. \n2. Go to Menu then 'Security' > 'Authentication' > '2-step verification'.\n3. Check the 'Allow users to turn on 2-Step Verification' box.\n4. Select 'Enforcement' as per need.\n5. Click Save.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/a/answer/9176657\n\n\nTo enable 2-Step Verification for GCP Workspace User accounts, follow the steps below.\n1. Open your Google Account.\n2. In the navigation panel, select 'Security'.\n3. Under 'How you sign in to Google', select '2-Step Verification' > 'Get started'.\n4. Follow the on-screen steps.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/accounts/answer/185839.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createidentityprovider and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deleteidentityprovider and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateidentityprovider) and actions.actions[*].topicId exists' as X; count(X) less than 1```
OCI Event Rule and Notification does not exist for Identity Provider changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Identity Provider changes. Monitoring and alerting on changes to Identity Provider will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity Provider. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event Rule at the root compartment level. 2. This policy will not trigger an alert as long as at least one matching Event Rule and Notification exists, whether the tenancy has a single compartment or multiple compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Identity Provider – Create, Identity Provider - Delete and Identity Provider – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule.
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "errorCode=" or $.X.filterPattern contains "errorCode =") and ($.X.filterPattern does not contain "errorCode!=" and $.X.filterPattern does not contain "errorCode !=") and $.X.filterPattern contains "UnauthorizedOperation" and $.X.filterPattern contains "AccessDenied") and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for unauthorized API calls This policy identifies the AWS regions which do not have a log metric filter and alarm for unauthorized API calls. Monitoring unauthorized API calls will help reveal application errors and may reduce the time to detect malicious activity. It is recommended that a metric filter and alarm be established for unauthorized API calls. NOTE: This policy triggers an alert if you have at least one multi-region CloudTrail trail that logs all management events in your account but is not configured with the specified log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be a multi-region trail with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'.
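For automation, a log metric filter and alarm matching the pattern above can be created with boto3; this is a sketch with placeholder log group, SNS topic, and metric names:

```python
import boto3

# Hypothetical names - adjust to your CloudTrail log group, SNS topic and naming scheme.
LOG_GROUP = "CloudTrail/DefaultLogGroup"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alarms"

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Metric filter matching unauthorized API calls recorded by CloudTrail.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[
        {
            "metricName": "UnauthorizedAPICalls",
            "metricNamespace": "CloudTrailMetrics",
            "metricValue": "1",
        }
    ],
)

# Alarm that notifies an SNS topic when the metric is non-zero.
cloudwatch.put_metric_alarm(
    AlarmName="UnauthorizedAPICalls",
    MetricName="UnauthorizedAPICalls",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```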
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = status equals ACTIVE and containerDefinitions[?any(logConfiguration.logDriver does not exist)] exists```
AWS ECS task definition logging configuration disabled This policy identifies AWS ECS task definitions that have logging configuration disabled. AWS ECS logging involves capturing and storing container logs for monitoring, troubleshooting, and analysis purposes within the Amazon ECS environment. Collecting data from task definitions gives visibility, which can aid in debugging processes and determining the source of issues. It is recommended to configure logging for an AWS ECS task definition. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable log configuration for your Amazon ECS task definitions, follow these steps:\n\n1. Sign into the AWS console and navigate to the Amazon ECS console\n2. In the navigation pane, choose 'Task definitions'\n3. Choose the task definition to be updated\n4. Select 'Create new revision', and then click on 'Create new revision'.\n5. On the 'Create new task definition revision' page, select the container with logging configuration disabled\n6. Under the 'Logging' section, enable 'Use log collection'\n7. Select the log driver to be used under the dropdown\n8. At 'awslogs-group', specify the log group that the logdriver sends its log streams to\n9. Specify the remaining configuration as per the requirements\n10. Choose 'Update'..
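As an illustration, a task definition revision can register the awslogs driver programmatically; the following boto3 sketch uses hypothetical family, image, and log group names:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical task definition - only the logging-relevant fields are shown.
ecs.register_task_definition(
    family="example-task",
    containerDefinitions=[
        {
            "name": "app",
            "image": "nginx:latest",
            "memory": 512,
            "essential": True,
            # Send container logs to CloudWatch Logs via the awslogs driver.
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/example-task",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "app",
                },
            },
        }
    ],
)
```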
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-athena-workgroup' AND json.rule = WorkGroup.State equal ignore case enabled and (WorkGroup.Configuration.ResultConfiguration.EncryptionConfiguration does not exist or (WorkGroup.Configuration.EngineVersion.EffectiveEngineVersion contains Athena and WorkGroup.Configuration.EnforceWorkGroupConfiguration is false))```
AWS Athena Workgroup data encryption at rest not configured This policy identifies AWS Athena workgroups not configured with data encryption at rest. AWS Athena workgroup enables you to isolate queries for you or your group of users from other queries in the same account, to set the query results location and the encryption configuration. By default, Athena workgroup query run results are not encrypted at rest and client side settings can override the workgroup settings. Encrypting workgroups and preventing overrides from the client side helps in protecting the integrity and confidentiality of the data stored on Athena. It is recommended to set encryption at rest and enable 'override client-side settings' to mitigate the risk of unauthorized access and potential data breaches. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption at rest for the Athena workgroup, follow the below steps:\n\n1. Sign in to the AWS Management Console and open the Amazon Athena console.\n2. Under the navigation bar, click on Workgroups.\n3. Select the alerted workgroup. Click on 'Edit'.\n4. For Athena-based engines, under 'Query result configuration', enable 'Encrypt query results'.\n5. Select 'Encryption type' based on the requirements. Make sure to set 'Minimum encryption'.\n6. Under 'Settings', enable 'Override client-side settings'.\n7. For Apache Spark-based engines, under 'Calculation result settings', enable 'Encrypt query results'.\n8. Select 'Encryption type' based on the requirements.\n9. Click on 'Save changes'..
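The same settings can be applied with boto3; a sketch with placeholder workgroup, S3 output location, and KMS key values:

```python
import boto3

athena = boto3.client("athena")

# Hypothetical workgroup name and S3 results location - replace with your own values.
athena.update_work_group(
    WorkGroup="example-workgroup",
    ConfigurationUpdates={
        # Prevent client-side settings from overriding the workgroup configuration.
        "EnforceWorkGroupConfiguration": True,
        "ResultConfigurationUpdates": {
            "OutputLocation": "s3://example-athena-results/",
            # Encrypt query results at rest with SSE-KMS (SSE_S3 and CSE_KMS are also valid options).
            "EncryptionConfiguration": {
                "EncryptionOption": "SSE_KMS",
                "KmsKey": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
            },
        },
    },
)
```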
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kusto-clusters' AND json.rule = properties.state equal ignore case Running and properties.enableDiskEncryption is false```
Azure Data Explorer cluster disk encryption is disabled This policy identifies Azure Data Explorer clusters in which disk encryption is disabled. Enabling encryption at rest on your cluster provides data protection for stored data. It is recommended to enable disk encryption on Data Explorer clusters. For more details: https://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-disk This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Disk encryption on existing Data Explorer cluster, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-disk.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ACTIVE and iamPolicy.bindings[?any(members[*] is member of ("allAuthenticatedUsers","allUsers"))] exists```
GCP Cloud Function is publicly accessible by allUsers or allAuthenticatedUsers This policy identifies GCP Cloud Functions that are publicly accessible by allUsers or allAuthenticatedUsers. This includes both Cloud Functions v1 and Cloud Functions v2. Granting permissions to 'allusers' or 'allAuthenticatedUsers' on any resource in GCP makes the resource public. Public access over cloud functions can lead to unauthorized invocations of the function or leakage of sensitive information such as the function's source code. Following the least privileged access policy, it is recommended to grant access restrictively and avoid granting permissions to allUsers or allAuthenticatedUsers unless absolutely needed. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to the GCP console\n2. Navigate to service 'Cloud Functions'\n3. Select the required cloud function\n4. Click on 'PERMISSIONS' button\n5. Filter for 'allUsers'\n6. Click on the 'Remove principal' button (bin icon)\n7. Select 'Remove allUsers from all roles on this resource. They may still have access via inherited roles.'\n8. Click 'Remove'\n9. Repeat steps 5-8 for 'allAuthenticatedUsers'.
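For Cloud Functions v1, the public principals can also be stripped from the IAM policy programmatically; the sketch below uses google-api-python-client and a hypothetical function resource name (Cloud Functions v2 exposes an equivalent IAM surface under the v2 API):

```python
from googleapiclient import discovery

# Hypothetical function resource name - replace project, region and function name.
FUNCTION = "projects/my-project/locations/us-central1/functions/my-function"
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

service = discovery.build("cloudfunctions", "v1")
functions = service.projects().locations().functions()

policy = functions.getIamPolicy(resource=FUNCTION).execute()

# Strip the public principals from every binding and drop bindings left empty.
for binding in policy.get("bindings", []):
    binding["members"] = [m for m in binding.get("members", []) if m not in PUBLIC_MEMBERS]
policy["bindings"] = [b for b in policy.get("bindings", []) if b["members"]]

functions.setIamPolicy(resource=FUNCTION, body={"policy": policy}).execute()
```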
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[?any(properties.phone is empty)] exists) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```
Azure Microsoft Defender for Cloud security contact phone number is not set This policy identifies Subscriptions that are not set with security contact phone number for Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender). It is recommended to set security contact phone number to receive notifications when Microsoft Defender for Cloud detects compromised resources. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Use below Azure CLI example command to create new contact with phone number details for Azure Microsoft Defender for Cloud,\n\naz security contact create -n "default1" --email '[email protected]' --phone '214275-4038' --alert-notifications 'on' --alerts-admins 'on'\n\nFor more information:\nhttps://docs.microsoft.com/en-us/cli/azure/security/contact?view=azure-cli-latest.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'origins.items[*].s3OriginConfig exists and origins.items[*].s3OriginConfig.originAccessIdentity is empty and origins.items[*].originAccessControlId is empty'```
AWS Cloudfront Distribution with S3 have Origin Access set to disabled This policy identifies AWS CloudFront distributions that use an S3 bucket as their origin and have Origin Access disabled. The origin access identity feature should be enabled for all your AWS CloudFront CDN distributions in order to restrict any direct access to your objects through Amazon S3 URLs. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to CloudFront\n3. Choose the reported Distribution\n4. Click on Distribution Settings\n5. Click on 'Origins and Origin Groups'\n6. Select the S3 bucket and click on Edit\n7. On the 'Restrict Bucket Access', Select Yes\n8. Click on 'Yes, Edit'.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.changeroutetablecompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createroutetable and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deleteroutetable and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updateroutetable) and actions.actions[*].topicId exists' as X; count(X) less than 1```
OCI Event Rule and Notification does not exist for route tables changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for route tables changes. Monitoring and alerting on changes to route tables will help in identifying changes to traffic flowing to or from Virtual Cloud Networks and Subnets. It is recommended that an Event Rule and Notification be configured to catch changes made to route tables. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event Rule at the root compartment level. 2. This policy will not trigger an alert as long as at least one matching Event Rule and Notification exists, whether the tenancy has a single compartment or multiple compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting Route Table – Change Compartment, Route Table – Create, Route Table - Delete and Route Table – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(20,20)"```
Alibaba Cloud Security group allow internet traffic to FTP-Data port (20) This policy identifies Security groups that allow inbound traffic on FTP-Data port (20) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 20, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where api.name = 'ibm-key-protect-registration' as X; config from cloud.resource where api.name = 'ibm-object-storage-bucket' AND json.rule = not( locationConstraint contains "ams03" or locationConstraint contains "mon01" or locationConstraint contains "tor01" or locationConstraint contains "sjc03" or locationConstraint contains "sjc04" or locationConstraint contains "sao01" or locationConstraint contains "mil01" or locationConstraint contains "sng01" or locationConstraint contains "che01" ) as Y; filter 'not($.X.resourceCrn equals $.Y.crn)'; show Y;```
IBM Cloud Object Storage bucket is not encrypted with BYOK (bring your own key) This policy identifies IBM Cloud Storage buckets that are not encrypted with BYOK (Bring your own key). Bring your Own Key (BYOK) allows customers to ensure no one outside their organisation has access to the root key and with the support of BYOK, customers can manage the lifecycle of their customer root keys where they can create, rotate, delete those keys. As a security best practice, it is recommended to use BYOK encryption key management system, which provides a significant level of control on the keys when used for encryption. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: IBM Cloud object storage bucket can be encrypted with Bring your own key (BYOK) only at the time of creation. \n\nPlease create a bucket with bring your own key encryption along with other required configuration as required using the below URL:\nhttps://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-tutorial-kp-encrypt-bucket#kp-encrypt-bucket-create\n\nOnce the new bucket is created, Please transfer existing bucket objects to the new bucket with proper encryption configured using the below URL:\nhttps://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-region-copy\n\nTo delete the alerted bucket, please follow the below instructions:\n1. Log in to the IBM Cloud Console\n2. Click on the 'Navigation Menu' icon and navigate to 'Resource list'. Under the 'Storage' section, select the object storage instance in which the reported bucket resides.\n3. For the alerted bucket, select the 'Delete bucket' option from the kebab menu.\n4. In the 'Delete Bucket' dialog, select 'Delete bucket'..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3-access-point' AND json.rule = networkOrigin equal ignore case internet and (publicAccessBlockConfiguration does not exist or (publicAccessBlockConfiguration.blockPublicAcls is false and publicAccessBlockConfiguration.ignorePublicAcls is false and publicAccessBlockConfiguration.blockPublicPolicy is false and publicAccessBlockConfiguration.restrictPublicBuckets is false))```
AWS S3 access point Block public access setting disabled This policy identifies AWS S3 access points with the block public access setting disabled. AWS S3 Access Point simplifies managing data access by creating unique access control policies for specific applications or users within a S3 bucket. The Amazon S3 Block Public Access feature manages access at the account, bucket, and access point levels. Each level's settings can be configured independently but cannot override more restrictive settings at higher levels. Instead, access point settings complement those at the account and bucket levels. It is recommended to enable the Block public access setting on a S3 access point unless intended for public exposure. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Block public access setting can be enabled at creation time only:\n\n1. Sign in to the AWS Management Console and navigate to the Amazon S3 dashboard\n2. In the left navigation pane, choose 'Access Points'\n3. On the Access Points page, choose 'Create access point'\n4. In the Access point name field, enter the name of the access point\n5. Under 'Block Public Access settings for this Access Point', make sure to select 'Block all public access'\n6. Click on 'Create access point'..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded and properties.disableKeyBasedMetadataWriteAccess is false```
Azure Cosmos DB key based authentication is enabled This policy identifies Cosmos DBs that are enabled with key-based authentication. Disabling key-based metadata write access on Azure Cosmos DB prevents any changes to resources from a client connecting using the account keys. It is recommended to disable this feature for organizations who want higher degrees of control and governance for production environments. NOTE: Enabling this feature can have an impact on your application. Make sure that you understand the impact before enabling it. Refer for more details: https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control#check-list-before-enabling This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL to disable key-based metadata write access on your Azure Cosmos DB:\nhttps://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control#prevent-sdk-changes.
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-cloudtrail-describe-trails' as X; count(X) less than 1```
AWS CloudTrail is not enabled on the account Checks to ensure that CloudTrail is enabled on the account. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. It is a compliance and security best practice to turn on CloudTrail to get a complete audit trail of activities across various services. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'CloudTrail' service.\n2. Follow the instructions below to enable CloudTrail on the account.\nhttp://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html.
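Creating and starting a multi-region trail can also be scripted; a minimal boto3 sketch with placeholder trail and bucket names (the S3 bucket must already exist with a bucket policy that allows CloudTrail to write):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Hypothetical names - the S3 bucket must already exist with a policy allowing CloudTrail writes.
trail = cloudtrail.create_trail(
    Name="management-events-trail",
    S3BucketName="example-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)

# A trail does not record events until logging is started explicitly.
cloudtrail.start_logging(Name=trail["TrailARN"])
```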
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains SQLSERVER and state equals RUNNABLE and (settings.databaseFlags[*].name does not contain 3625 or settings.databaseFlags[?any(name contains 3625 and value contains off)] exists)"```
GCP SQL server instance database flag 3625 (trace flag) is not set to on This policy identifies GCP SQL server instance for which database flag 3625 (trace flag) is not set to on. Trace flag can help prevent the disclosure of sensitive information by masking the parameters of some error messages using '*', for users who are not members of the sysadmin fixed server role. It is recommended to set 3625 (trace flag) database flag for Cloud SQL SQL Server instance to on. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Flags and parameters', click on 'ADD FLAG' in 'New database flag' section, choose the flag '3625' from the drop-down menu and set the value as 'On'\nOR\nIf the flag has been set to other than on, Under 'Flags and parameters', choose the flag '3625' and set the value as 'On'\n6. Click on DONE\n7. Click on SAVE.
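Setting the flag can also be done through the Cloud SQL Admin API; a sketch using google-api-python-client with hypothetical project and instance names:

```python
from googleapiclient import discovery

# Hypothetical project and instance names.
PROJECT = "my-project"
INSTANCE = "my-sqlserver-instance"

sqladmin = discovery.build("sqladmin", "v1beta4")

# NOTE: databaseFlags is replaced as a whole; include any existing flags you
# want to keep alongside the trace flag being enabled here.
body = {"settings": {"databaseFlags": [{"name": "3625", "value": "on"}]}}

sqladmin.instances().patch(project=PROJECT, instance=INSTANCE, body=body).execute()
```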
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireLowercaseCharacters does not exist or requireLowercaseCharacters is false'```
Alibaba Cloud RAM password policy does not have a lowercase character This policy identifies Alibaba Cloud accounts that do not have a lowercase character in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Lowercase Letters'\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and ($.X.filterPattern contains "eventSource=" or $.X.filterPattern contains "eventSource =") and ($.X.filterPattern does not contain "eventSource!=" and $.X.filterPattern does not contain "eventSource !=") and $.X.filterPattern contains config.amazonaws.com and $.X.filterPattern contains StopConfigurationRecorder and $.X.filterPattern contains DeleteDeliveryChannel and $.X.filterPattern contains PutDeliveryChannel and $.X.filterPattern contains PutConfigurationRecorder) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for AWS Config configuration changes This policy identifies the AWS regions which do not have a log metric filter and alarm for AWS Config configuration changes. Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account. It is recommended that a metric filter and alarm be established for detecting changes to AWS Config's configurations. NOTE: This policy triggers an alert if you have at least one multi-region CloudTrail trail that logs all management events in your account but is not configured with the specified log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be a multi-region trail with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'.
```config from cloud.resource where api.name = 'aws-apigateway-get-stages' AND json.rule = webAclArn is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = (webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.webAclArn'; show X;```
AWS API Gateway Rest API attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS API Gateway Rest API attached with WAFv2 WebACL which are not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, API Gateway Rest API attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer below URL, https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the API Gateway console\n3. Click on the reported API Gateway REST API\n4. In the Stages pane, choose the name of the stage\n5. In the Stage Editor pane, choose the Settings tab\n6. Note down the associated AWS WAF web ACL\n7. Go to the noted WAF web ACL in AWS WAF & Shield Service\n8. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n9. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n10. Click on 'Add rules'.
```config from cloud.resource where api.name = 'aws-dms-replication-task' AND json.rule = ReplicationTaskSettings.Logging.EnableLogging is false or ReplicationTaskSettings.Logging.LogComponents[?any( Id is member of ("TARGET_APPLY","TARGET_LOAD") and Severity is not member of ("LOGGER_SEVERITY_DEFAULT","LOGGER_SEVERITY_DEBUG","LOGGER_SEVERITY_DETAILED_DEBUG") )] exists```
AWS DMS replication task for the target database have logging not set to the minimum severity level This policy identifies DMS replication tasks for which logging is not enabled or the minimum severity level for the TARGET_APPLY and TARGET_LOAD components is below LOGGER_SEVERITY_DEFAULT. Amazon DMS Logging is crucial in DMS replication for monitoring, troubleshooting, auditing, performance analysis, error detection, recovery, and historical reporting. The TARGET_APPLY and TARGET_LOAD components must be logged because they handle applying data and DDL changes and loading data into the target database, which is crucial for maintaining data integrity during migration. The absence of logging for TARGET_APPLY and TARGET_LOAD components hampers monitoring, compliance, auditing, troubleshooting, and accountability efforts during migration. It's recommended to enable logging for AWS DMS replication tasks and set a minimum logging level of DEFAULT for TARGET_APPLY and TARGET_LOAD to ensure that informational messages, warnings, and error messages are written to the logs. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging for the Target Apply and Target Load log components of a DMS replication task:\n\n1. Log in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to 'Migration & Transfer' from the 'Services' dropdown and select 'Database Migration Service'\n4. In the navigation panel, under 'Migrate data', click on 'Database migration tasks'\n5. Select the reported replication task and choose 'Modify' from the 'Actions' dropdown on the right\n6. Under the 'Task settings' section, enable 'Turn on CloudWatch logs' under 'Task logs'\n7. Set the log component severity for both 'Target apply' and 'Target Load' components to 'Default' or greater according to your business requirements\n8. Click 'Save' to save the changes.
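The logging settings can also be updated via the API once the task is stopped; a boto3 sketch with a placeholder task ARN:

```python
import json
import boto3

dms = boto3.client("dms")

# Hypothetical task ARN - the replication task must be stopped before it can be modified.
TASK_ARN = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK"

# Only the Logging section is shown; merge with your existing task settings as needed.
settings = {
    "Logging": {
        "EnableLogging": True,
        "LogComponents": [
            {"Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DEFAULT"},
            {"Id": "TARGET_LOAD", "Severity": "LOGGER_SEVERITY_DEFAULT"},
        ],
    }
}

# ReplicationTaskSettings is passed as a JSON string.
dms.modify_replication_task(
    ReplicationTaskArn=TASK_ARN,
    ReplicationTaskSettings=json.dumps(settings),
)
```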
```config from cloud.resource where api.name = 'oci-networking-networkloadbalancer' and json.rule = lifecycleState equal ignore case "ACTIVE" as X; config from cloud.resource where api.name = 'oci-networking-subnet' and json.rule = lifecycleState equal ignore case "AVAILABLE" as Y; config from cloud.resource where api.name = 'oci-networking-security-list' AND json.rule = lifecycleState equal ignore case AVAILABLE as Z; filter 'not ($.X.listeners does not equal "{}" and ($.X.subnetId contains $.Y.id and $.Y.securityListIds contains $.Z.id and $.Z.ingressSecurityRules is not empty))'; show X;```
OCI Network Load Balancer not configured with inbound rules or listeners This policy identifies Network Load Balancers that are not configured with inbound rules or listeners. A Network Load Balancer's subnet security lists should include ingress rules, and the Network Load Balancer should have at least one listener to handle incoming traffic. Without these configurations, the Network Load Balancer cannot receive and route incoming traffic, rendering it ineffective. As best practice, it is recommended to configure Network Load Balancers with proper inbound rules and listeners. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Network Load Balancers with inbound rules and listeners, refer to the following documentation:\nhttps://docs.cloud.oracle.com/iaas/Content/Security/Reference/configuration_tasks.htm#lb-enable-traffic.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains SQLSERVER and settings.databaseFlags[?(@.name=='contained database authentication')].value equals on"```
GCP SQL Server instance database flag 'contained database authentication' is enabled This policy identifies SQL Server instances for which the database flag 'contained database authentication' is enabled. Most of the threats associated with contained databases are related to the authentication process, so it is recommended to disable this flag. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on SQL Server instance for which you want to disable the database flag from the list\n4. Click 'Edit'\n5. Go to 'Flags and Parameters' under 'Configuration options' section\n6. Search for the flag 'contained database authentication' and set the value 'off'\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-project-info' AND json.rule = 'commonInstanceMetadata.items[*].key does not contain enable-oslogin or (commonInstanceMetadata.items[?any(key contains enable-oslogin and (value contains false or value contains FALSE))] exists)'```
GCP Projects have OS Login disabled This policy identifies GCP Projects which have OS Login disabled. Enabling OS Login ensures that SSH keys used to connect to instances are mapped with IAM users. Revoking access to an IAM user will revoke all the SSH keys associated with that particular user. It facilitates centralized and automated SSH key pair management which is useful in handling cases like a response to compromised SSH key pairs. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Navigate to service 'Compute Engine' (Left Panel)\n3. For setting project-level OS login configuration, go to the 'Metadata' section under 'Settings'(from Left Panel)\n4. Click on the 'Edit' button\n5. If the metadata for 'enable-oslogin' is not set, click on '+Add item' and add metadata entry key as 'enable-oslogin' and the value as 'TRUE'/'true'\n6. Click on 'Save' to apply the changes\n7. You need to validate if any overriding instance-level metadata is set,\n8. Go to the tab 'VM instances', under section 'Virtual machines', \n9. For every instance, click on 'Edit'\n10. Under Custom metadata, remove any entry with key 'enable-oslogin' and the value 'FALSE'/'false'\n11. At the bottom of the 'VM instance details' page, click 'Save' to apply your changes to the instance.
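Project-wide OS Login can also be enabled by updating the common instance metadata through the Compute API; a sketch using google-api-python-client with a hypothetical project ID:

```python
from googleapiclient import discovery

PROJECT = "my-project"  # hypothetical project ID

compute = discovery.build("compute", "v1")

# Read the current common instance metadata (the fingerprint is required for updates).
project = compute.projects().get(project=PROJECT).execute()
metadata = project["commonInstanceMetadata"]

# Replace any existing enable-oslogin entry with TRUE while keeping the other items.
items = [i for i in metadata.get("items", []) if i["key"] != "enable-oslogin"]
items.append({"key": "enable-oslogin", "value": "TRUE"})

compute.projects().setCommonInstanceMetadata(
    project=PROJECT,
    body={"fingerprint": metadata["fingerprint"], "items": items},
).execute()
```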
```config from cloud.resource where cloud.type = 'azure' and api.name= 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.allowSharedKeyAccess is true```
Azure Storage account configured with Shared Key authorization This policy identifies Azure Storage accounts configured with Shared Key authorization. Azure Storage accounts authorized with Shared Key authorization via Shared Access Signature (SAS) tokens pose a security risk, as they allow sharing information with external unidentified identities. It is highly recommended to disable Shared Key authorization and use Azure AD authorization, as it provides superior security and ease of use over Shared Key. For more details: https://learn.microsoft.com/en-us/azure/storage/common/shared-key-authorization-prevent This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To prevent Shared Key authorization for an Azure Storage account, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/shared-key-authorization-prevent.
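Assuming a recent azure-mgmt-storage SDK that exposes the allow_shared_key_access property, the setting can also be applied programmatically; the identifiers below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

# Placeholder identifiers - replace with your subscription, resource group and account names.
client = StorageManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

# Disable Shared Key authorization so that only Azure AD-authorized requests are accepted.
client.storage_accounts.update(
    "example-rg",
    "examplestorageacct",
    StorageAccountUpdateParameters(allow_shared_key_access=False),
)
```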
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ( $.X.filter contains "resource.type =" or $.X.filter contains "resource.type=" ) and ( $.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=" ) and $.X.filter contains "gce_route" and ( $.X.filter contains "protoPayload.methodName:" or $.X.filter contains "protoPayload.methodName :" ) and ( $.X.filter does not contain "protoPayload.methodName!:" and $.X.filter does not contain "protoPayload.methodName !:" ) and $.X.filter contains "compute.routes.delete" and $.X.filter contains "compute.routes.insert"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for VPC network route delete and insert This policy identifies GCP accounts which do not have a log metric filter and alert for VPC network route delete and insert events. Monitoring network routes deletion and insertion activities will help in identifying VPC traffic flows through an expected path. It is recommended to create a metric filter and alarm to detect activities related to the deletion and insertion of VPC network routes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="gce_route" AND (protoPayload.methodName:"compute.routes.delete" OR protoPayload.methodName:"compute.routes.insert")\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'..
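The log-based metric itself can be created with the google-cloud-logging client (the alerting policy is still configured separately in Cloud Monitoring); a sketch with a hypothetical project ID and metric name:

```python
from google.cloud import logging

# Placeholder project ID; the alerting policy is still created separately in Cloud Monitoring.
client = logging.Client(project="my-project")

ROUTE_CHANGE_FILTER = (
    'resource.type="gce_route" AND '
    '(protoPayload.methodName:"compute.routes.delete" OR '
    'protoPayload.methodName:"compute.routes.insert")'
)

# Create the log-based metric that the alert policy will later reference.
metric = client.metric(
    "vpc-route-changes",
    filter_=ROUTE_CHANGE_FILTER,
    description="Counts VPC network route insert and delete events",
)
metric.create()
```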
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-client-vpn-endpoint' AND json.rule = status.code equal ignore case available and connectionLogOptions.Enabled is false```
AWS EC2 Client VPN endpoints client connection logging disabled This policy identifies AWS EC2 client VPN endpoints with client connection logging disabled. AWS Client VPN endpoints enable remote clients to securely connect to resources in the Virtual Private Cloud (VPC). Connection logs enable you to track user behaviour on the VPN endpoint and gain visibility. It is recommended to enable connection logging for AWS EC2 client VPN endpoints. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable connection logging for an existing Client VPN endpoint, follow these steps:\n\n1. Sign into the AWS console and navigate to the Amazon VPC console\n2. In the navigation pane, choose 'Client VPN Endpoints'\n3. Select the 'Client VPN endpoint', choose 'Actions', and then choose 'Modify Client VPN endpoint'\n4. Under 'Connection logging', turn on 'Enable log details on client connections'\n5. For 'CloudWatch Logs log group name', choose the name of the CloudWatch Logs log group\n6. (Optional) For 'CloudWatch Logs log stream name', choose the name of the CloudWatch Logs log stream\n7. Choose 'Modify Client VPN endpoint'.
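Connection logging can also be turned on through the API; a boto3 sketch with placeholder endpoint and log group names (the CloudWatch Logs log group must already exist):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical endpoint and log group names - the CloudWatch Logs log group must already exist.
ec2.modify_client_vpn_endpoint(
    ClientVpnEndpointId="cvpn-endpoint-0123456789abcdef0",
    ConnectionLogOptions={
        "Enabled": True,
        "CloudwatchLogGroup": "/aws/client-vpn/connection-logs",
    },
)
```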
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals "8.8.8.8/32" and direction equals "outbound" and ( protocol equals "all" or ( protocol equals "tcp" and ( port_max greater than 53 and port_min less than 53 ) or ( port_max equals 53 and port_min equals 53 ))))] exists```
IBM Cloud Virtual Private Cloud (VPC) security group contains outbound rules that specify source IP 8.8.8.8/32 to DNS port This policy identifies IBM Virtual Private Cloud (VPC) security groups that contain outbound rules that specify a source IP 8.8.8.8/32 to DNS port. Doing so, may allow sensitive data from the protected resource being leaked to Google, which uses data for indexing and monetizing. As a best practice, restrict DNS port (53) solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Outbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Destination type' as 'Any' and 'Value' as 53 (or range containing 53)\n6. Click on 'Delete'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'viewerCertificate.certificateSource contains cloudfront'```
AWS CloudFront web distribution with default SSL certificate This policy identifies CloudFront web distributions which have a default SSL certificate to access CloudFront content. It is a best practice to use custom SSL Certificate to access CloudFront content. It gives you full control over the content data. custom SSL certificates also allow your users to access your content by using an alternate domain name. You can use a certificate stored in AWS Certificate Manager (ACM) or you can use a certificate stored in IAM. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to CloudFront Distributions Dashboard\n4. Click on the reported distribution\n5. On the 'General' tab, Click on the 'Edit' button\n6. On 'Edit Distribution' page set 'SSL Certificate' to 'Custom SSL Certificate (example.com):', Select a certificate or type your certificate ARN in the field and other parameters as per your requirement.\n7. Click on 'Yes, Edit'.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(3306,3306) or destinationPortRanges[*] contains _Port.inRange(3306,3306) ))] exists```
Azure Network Security Group allows all traffic on MySQL (TCP Port 3306) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on MySQL (TCP Port 3306). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict MySQL solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = '(_DateTime.ageInDays($.notAfter) > -1) and status equals EXPIRED'```
AWS Certificate Manager (ACM) has expired certificates This policy identifies expired certificates which are in AWS Certificate Manager. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM you can request a certificate or deploy an existing ACM or external certificate to AWS resources. This policy generates alerts if there are any expired ACM managed certificates. As a best practice, it is recommended to delete expired certificates. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Verify that the 'Status' column shows 'Expired' for the reported certificate\n6. Under 'Actions' drop-down click on 'Delete'.
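Expired certificates can also be located and removed programmatically; a boto3 sketch (certificates still in use by other AWS resources must be disassociated before deletion succeeds):

```python
import boto3

acm = boto3.client("acm")

# List only certificates that are already expired, then delete them.
# A certificate still associated with other AWS resources cannot be deleted
# until those associations are removed.
paginator = acm.get_paginator("list_certificates")
for page in paginator.paginate(CertificateStatuses=["EXPIRED"]):
    for cert in page["CertificateSummaryList"]:
        print(f"Deleting expired certificate {cert['CertificateArn']}")
        acm.delete_certificate(CertificateArn=cert["CertificateArn"])
```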
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'ipAllocationPolicy.useIpAliases does not exist or ipAllocationPolicy.useIpAliases equals false'```
GCP Kubernetes Engine Clusters have Alias IP disabled This policy identifies Kubernetes Engine Clusters which have disabled Alias IP. Alias IP allows the networking layer to perform anti-spoofing checks to ensure that egress traffic is not sent with arbitrary source IPs. By enabling Alias IPs, Kubernetes Engine clusters can allocate IP addresses from a CIDR block known to Google Cloud Platform. This makes your cluster more scalable and allows your cluster to better interact with other GCP products and entities. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes Clusters Alias IP can be enabled only at the time of creation of clusters. So to fix this alert, create a new cluster with Alias IP enabled and then migrate all required cluster data or containers from the reported cluster to this new cluster.\nTo create the cluster with Alias IP enabled, perform following steps:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on 'CREATE CLUSTER' button\n5. Configure your cluster and click on 'More'\n6. From the 'VPC-native (using alias IP)' drop-down menu, select 'Enabled'. New menu items appear\n7. From 'Automatically create secondary ranges' drop-down menu, select 'Enabled'\n8. Configure the 'Network', 'Node subnet', 'Node address range', 'Container address range', and 'Service address range' as needed\n9. Click on Create.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and (Action contains SNS:Subscribe or Action contains sns:Subscribe or Action contains SNS:Receive or Action contains sns:Receive) and Condition does not exist)] exists```
AWS SNS topic policy overly permissive for subscription This policy identifies AWS SNS topics that have SNS policy overly permissive for the subscription. When you subscribe an endpoint to a topic, the endpoint begins to receive messages published to the associated topic. To protect these messages from attackers and unauthorized accesses, permissions should be given to only authorized users. For more details: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#implement-least-privilege-access This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. Add the restrictive 'Condition' statement to the JSON editor to specify who can subscribe to this topic.\n9. Click on 'Save changes'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-describe-vpc-endpoints' AND json.rule = vpcEndpointType equals Gateway and policyDocument.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and Action contains * and Condition does not exist)] exists```
AWS VPC gateway endpoint policy is overly permissive This policy identifies AWS VPC gateway endpoints that have a VPC endpoint (VPCE) policy that is overly permissive. When the Principal element value is set to '*' within the access policy, the VPC gateway endpoint allows full access to any IAM user or service within the VPC using credentials from any AWS accounts. It is highly recommended to have the least privileged VPCE policy to protect against data leakage and unauthorized access. For more details: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the VPC dashboard\n4. Go to 'Endpoints', from the left panel VIRTUAL PRIVATE CLOUD section\n5. Select the reported VPC endpoint\n6. On the 'Actions' drop-down button, click on the 'Manage policy'\n7. On the 'Edit Policy' page, Choose 'Custom' policy\na. Replace the 'Everyone' grantee (i.e. '*' or 'AWS': '*') in the Principal element value with an AWS account ID (e.g. '123456789'), an AWS account ARN (e.g. 'arn:aws:iam::123456789:root') or an IAM user ARN (e.g. 'arn:aws:iam::123456789:user/vpce-admin').\nb. Add a Condition clause to the policy statement to filter the endpoint access to specific entities.\n8. Click on 'Save'.
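A least-privilege endpoint policy can also be applied via the API; a boto3 sketch in which the endpoint ID, principal, and resources are placeholders to be scoped to the actual workload:

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Hypothetical values - scope the principal and resources to what the workload actually needs.
ENDPOINT_ID = "vpce-0123456789abcdef0"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-role"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],
        }
    ],
}

ec2.modify_vpc_endpoint(VpcEndpointId=ENDPOINT_ID, PolicyDocument=json.dumps(policy))
```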
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals arbitrary and state_description equal ignore case active and (_DateTime.ageInDays(last_update_date) > 90)'```
IBM Cloud Secrets Manager arbitrary secrets have aged more than 90 days without being rotated This policy identifies IBM Cloud Secrets Manager arbitrary secrets that have aged more than 90 days without being rotated. Arbitrary secrets should be rotated to ensure that data cannot be accessed with an old secret which might have been lost, cracked, or stolen. It is recommended that all arbitrary secrets are regularly rotated. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list'. From the list of resources, select the secret manager instance in which the reported secret resides, under security section.\n3. Select the secret and click on 'Actions' dropdown.\n4. Select 'Rotate' from the dropdown.\n5. In the 'Rotate secret' screen, provide data as required.\n6. Click on 'Rotate'..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded and properties.ipRangeFilter is not empty and properties.ipRangeFilter startsWith 0.0.0.0 or properties.ipRangeFilter endsWith 0.0.0.0```
Azure Cosmos DB allows traffic from public Azure datacenters This policy identifies Cosmos DBs that allow traffic from public Azure datacenters. If you enable this option, the IP address 0.0.0.0 is added to the list of allowed IP addresses. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. So it is recommended not to select the ‘Accept connections from within public Azure datacenters’ option for your Cosmos DB. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to Azure Cosmos DB service\n3. Select the reported Azure Cosmos DB account\n4. Click on 'Firewall and virtual networks' under 'Settings'\n5. Unselect 'Accept connections from within public Azure datacenters' option under 'Exceptions'\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.keyPolicy.keyExpirationPeriodInDays does not exist```
Azure Storage account key expiration policy is not configured This policy identifies Azure Storage accounts for which a key expiration policy is not configured. A key expiration policy enables you to set a reminder for the rotation of the account access keys, so that you can monitor your storage accounts for compliance and ensure that the account access keys are rotated regularly. As a best practice, it is recommended to set a key expiration policy for Azure Storage account keys. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under 'Security + networking', select 'Access keys'\n5. Select the 'Set rotation reminder' button. If the Set rotation reminder button is grayed out, you will need to rotate each of your keys manually.\n6. In Set a reminder to rotate access keys, select the 'Enable key rotation reminders' checkbox and set a frequency for the reminder.\n7. Click on 'Save'\n\nNOTE: Before you can create a key expiration policy, you may need to rotate each of your account access keys at least once..
```config from cloud.resource where api.name = 'gcloud-compute-backend-bucket' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' not (Y.name intersects X.bucketName) '; show X;```
GCP backend bucket having dangling GCP Storage bucket This policy identifies GCP backend buckets that point to a dangling GCP Storage bucket. A GCP backend bucket is usually used by GCP Load Balancers for serving static content. Such setups can also have DNS pointing to the load balancer's IP for easy human access. A GCP backend bucket pointing to a GCP storage bucket that doesn't exist in the same project is at risk of bucket takeover as well as subdomain takeover. An attacker can exploit such a setup by creating a GCP Storage bucket with the same name in their own GCP project, thus receiving all requests redirected to this backend bucket from the load balancer in an attacker-controlled GCP Storage bucket. This attacker-controlled bucket can be used to serve malicious content to perform phishing attacks, spread malware, or engage in other illegal activities. As a best practice, it is recommended to review and protect GCP storage buckets bound to a GCP backend bucket from accidental deletion. Delete the GCP backend bucket if it points to a non-existent GCP storage bucket. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To mitigate the risk, either delete the GCP backend bucket or create a GCP Storage bucket in your account with the name to which the GCP backend bucket points.\n\n\n# Delete GCP backend bucket\nTo delete a GCP backend bucket, it should be disassociated from all GCP load balancers first. The following steps might be followed:\n\n1. Identify the backend bucket pointing to a non-existing GCP Storage bucket.\n2. Login to GCP Portal\n3. Go to Network services -> Load Balancing\n4. Click on "Backends"\n5. Note the names of load balancers that are using the GCP backend bucket. Names are shown under the "Load balancer" column\n6. Click on the "LOAD BALANCERS" tab\n7. Click on the load balancer name for each load balancer identified in step 5 and repeat the following steps:\n i. After opening the load balancer page, click on "EDIT"\n ii. Go to Backend configuration\n iii. Under the "Backend buckets" section, remove the GCP backend bucket by clicking the "cross" icon in front of it\n iv. Go to Routing rules. Edit the rules as desired. Remove any rules pointing to the reported backend bucket.\n v. Click Update\n8. Click and switch back to the "Backends" tab\n9. Select the GCP backend bucket; the option to delete should now be available.\n10. Click "Delete" -> "DELETE"\n\n\n# Create a new GCP Storage bucket\nRefer to the following link on how to create a new GCP Storage bucket and create a new bucket with the same name as the one the GCP backend bucket is pointing to:\n\nhttps://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket.
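Detection of dangling backend buckets can also be scripted. The Python sketch below assumes the google-cloud-compute and google-cloud-storage packages and a hypothetical project ID; it lists backend buckets and flags those whose referenced Storage bucket cannot be found by the caller's credentials.

```python
from google.cloud import compute_v1, storage

PROJECT_ID = "example-project"  # hypothetical

backend_client = compute_v1.BackendBucketsClient()
storage_client = storage.Client(project=PROJECT_ID)

for backend in backend_client.list(project=PROJECT_ID):
    # lookup_bucket() returns None when the referenced bucket does not exist;
    # note it may raise a Forbidden error if the bucket exists in a project
    # the caller cannot see, so results should be reviewed before acting.
    if storage_client.lookup_bucket(backend.bucket_name) is None:
        print(f"Possibly dangling backend bucket: {backend.name} -> {backend.bucket_name}")
```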
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-snapshot' AND json.rule = encryption equal ignore case provider_managed```
IBM Cloud Block Storage Snapshot for VPC is not encrypted with customer managed keys This policy identifies IBM Cloud Block Storage Snapshots for VPC that are not encrypted with customer managed keys. Using customer managed keys gives customers significantly more control, as the keys are managed by them. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A Block storage snapshot can be encrypted with customer managed keys only at the time of creation of a virtual server instance. \nPlease create a virtual server instance with a boot/data disk from the reported snapshot using the URL below. Please make sure to select customer managed encryption for the data/boot storage volume:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-restore&interface=ui#snapshots-vpc-restore-vol-ui\n\nCreate a snapshot for the above-created storage disk volume following the URL below:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nOnce the new snapshot is created, delete the virtual server instance to which the created storage volume/snapshot is attached:\nhttps://cloud.ibm.com/docs/hp-virtual-servers?topic=hp-virtual-servers-remove_vs#delete_vs.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = osType does not exist and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of ("EncryptionAtRestWithCustomerKey", "EncryptionAtRestWithPlatformAndCustomerKeys","EncryptionAtRestWithPlatformKey")```
Azure VM data disk is not configured with any encryption This policy identifies VM data disks that are not configured with any encryption. Azure offers Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK] by default for managed disks. It is recommended to enable default encryption, or you may optionally choose to use a customer-managed key, to protect from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'Disks'\n3. Select the reported data disk you want to modify\n4. Select 'Encryption' under 'Settings'\n5. Select 'Encryption Type' according to your encryption requirement.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and (identity.type does not exist or identity.principalId is empty)```
Azure Logic app is not configured with managed identity This policy identifies Azure Logic apps that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Including credentials in code heightens the risk in the event of a security breach and increases the threat surface in case of exploitation; managed identities also eliminate the need for developers to manage credentials. As a security best practice, it is recommended to set up managed identity rather than embedding credentials within the code. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under the 'Settings' section, click on 'Identity'\n5. Configure either 'System assigned' or 'User assigned' managed identity based on your requirement.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = $.databaseEncryption.state equals DECRYPTED```
GCP Kubernetes cluster Application-layer Secrets not encrypted Application-layer Secrets Encryption provides an additional layer of security for sensitive data, such as Secrets, stored in etcd. Using this functionality, you can use a key that you manage in Cloud KMS to encrypt data at the application layer. This protects against attackers who gain access to an offline copy of etcd. This policy checks your cluster for the Application-layer Secrets Encryption security feature and alerts if it is not enabled. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: At this time, you cannot enable Application-layer Secrets Encryption for an existing cluster.\n\nTo create a new cluster with Application-layer Secrets Encryption:\n\n1. Go to the Kubernetes clusters page in the GCP Console and select CREATE CLUSTER.\n2. Click Advanced options.\n3. Check Enable Application-layer Secrets Encryption.\n4. Select a customer-managed key from the drop down menu, or create a new KMS key.\n5. When finished configuring options for the cluster, click Create..
```config from cloud.resource where cloud.type = 'AWS' and api.name = 'aws-ec2-describe-subnets' AND json.rule = 'mapPublicIpOnLaunch is true'```
Copy of AWS VPC subnets should not allow automatic public IP assignment This policy identifies VPC subnets which allow automatic public IP assignment. A VPC subnet is a part of the VPC that has its own rules for traffic. Automatically assigning a public IP on launch can accidentally expose the instances within this subnet to the internet, so the setting should be changed to 'No' after the subnet is created. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to the 'VPC' service.\n4. In the navigation pane, click on 'Subnets'.\n5. Select the identified Subnet and choose the option 'Modify auto-assign IP settings' under the Subnet Actions.\n6. Disable the 'Auto-Assign IP' option and save it..
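A minimal Python/boto3 sketch of the same remediation applied programmatically; the subnet ID and region are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SUBNET_ID = "subnet-0123456789abcdef0"  # hypothetical

# Disable automatic public IP assignment for instances launched in this subnet.
ec2.modify_subnet_attribute(
    SubnetId=SUBNET_ID,
    MapPublicIpOnLaunch={"Value": False},
)
```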
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-registry' AND json.rule = ((properties.publicNetworkAccess equals Enabled and properties.networkRuleSet does not exist) or (properties.publicNetworkAccess equals Enabled and properties.networkRuleSet exists and properties.networkRuleSet.defaultAction equals Allow))```
Azure Container registries Public access to All networks is enabled This policy identifies Azure Container registries which have Public access to All networks enabled. Azure ACR is used to store Docker container images which might contain sensitive information. It is highly recommended to restrict public access by allowing access only from Selected networks, or to make the registry private by disabling public access. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Container Registries'\n3. Select the container registry you need to modify\n4. Select 'Networking' under 'Settings'\n5. Click on the 'Public access' tab, select 'Selected networks' and provide the IPv4 addresses that should have access to ACR, or select 'Disabled' to disable public access\n6. Click on 'Save'\n\nNote: The 'Public access' setting can be toggled to 'Selected networks' or 'Disabled' state only with the Premium SKU. For Standard and Basic SKUs the Public access setting cannot be updated and these resources will remain accessible to the public..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = firewallRules.value[*].properties.startIpAddress equals "0.0.0.0" or firewallRules.value[*].properties.endIpAddress equals "0.0.0.0"```
EIP-CSE-IACOHP-AzurePostgreSQL-NetworkAccessibility-eca1500-5 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-disk-list' AND json.rule = dataAccessAuthMode does not equal ignore case AzureActiveDirectory and managedBy contains virtualMachines and provisioningState equal ignore case Succeeded```
Azure disk data access authentication mode not enabled This policy identifies if the Data Access Authentication Mode for Azure disks is disabled. This mode is crucial for controlling how users upload or export Virtual Machine Disks by requiring an Azure Entra ID role to authorize such operations. Without enabling this mode, users can create SAS tokens to export disks without stringent identity-based restrictions. This increases the risk of unauthorized disk access or data exposure, especially in environments handling sensitive data. Enabling the Data Access Authentication Mode ensures that only users with the appropriate Data Operator for Managed Disk role in Azure Entra ID can export or manage disks. This enhances data security by preventing unauthorized disk exports and restricting access to secure download URLs. As a security best practice, it is recommended to enable data access authentication mode for Azure disks. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: To enable data access authentication mode on disks attached to a VM, you must first stop the VM and detach the disk.\n\n1. Log in to Azure Portal and search for 'Disks'\n2. Select 'Disks'\n3. Select the reported disk\n4. Under 'Settings' select 'Disk Export'\n5. Check the 'Enable data access authentication mode' under 'Data access authentication mode'\n6. Click on 'Save'\n7. Re-attach the disk to the virtual machine, and restart it.
```config from cloud.resource where api.name = 'aws-lambda-list-functions' as X; config from cloud.resource where api.name = 'aws-iam-list-roles' AND json.rule = inlinePolicies[*].policyDocument.Statement[?any(Effect equals Allow and (Action equals "*" or Action contains :* or Action[*] contains :*) and (Resource equals "*" or Resource[*] anyStartWith "*"))] exists as Y; filter '$.X.role equals $.Y.role.arn'; show Y;```
AWS Lambda execution role having overly permissive inline policy This policy identifies AWS Lambda Function execution roles that have an overly permissive inline policy embedded. Lambda functions with an overly permissive policy could lead to lateral movement in the account or privilege escalation when compromised. It is highly recommended to have the least privileged access policy to protect the Lambda Functions from unauthorized access. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Refer to the following URL to give fine-grained and restrictive permissions to IAM Role Inline Policy:\nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html#edit-inline-policy-console.
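A rough Python/boto3 sketch of how such roles could be detected outside the console; the wildcard test mirrors the spirit of the rule above but is a simplified approximation, not the exact policy logic, and the region is a placeholder.

```python
import boto3

iam = boto3.client("iam")
lambda_client = boto3.client("lambda", region_name="us-east-1")

def statement_is_wildcard(stmt):
    """True when an Allow statement grants a wildcard Action on a wildcard Resource."""
    actions = stmt.get("Action", [])
    resources = stmt.get("Resource", [])
    actions = actions if isinstance(actions, list) else [actions]
    resources = resources if isinstance(resources, list) else [resources]
    return (
        stmt.get("Effect") == "Allow"
        and any(a == "*" or a.endswith(":*") for a in actions)
        and any(r == "*" or r.startswith("*") for r in resources)
    )

for page in lambda_client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        role_name = fn["Role"].split("/")[-1]
        for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
            doc = iam.get_role_policy(RoleName=role_name, PolicyName=policy_name)["PolicyDocument"]
            stmts = doc["Statement"]
            stmts = stmts if isinstance(stmts, list) else [stmts]
            if any(statement_is_wildcard(s) for s in stmts):
                print(f"{fn['FunctionName']}: inline policy {policy_name} on role {role_name} looks overly permissive")
```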
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' as X; config from cloud.resource where api.name = 'azure-active-directory-user' as Y; filter '((_DateTime.daysBetween($.X.properties.updatedOn,today()) != 8) and ($.X.properties.principalId contains $.Y.id))'; show X;```
llatorre - RoleAssignment v3 This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = '((description.listenerDescriptions[*].listener.protocol equals HTTPS or description.listenerDescriptions[*].listener.protocol equals SSL) and (description.listenerDescriptions[*].listener.sslcertificateId is empty or description.listenerDescriptions[*].listener.sslcertificateId does not exist)) or description.listenerDescriptions[*].listener.protocol equals HTTP or description.listenerDescriptions[*].listener.protocol equals TCP'```
AWS Elastic Load Balancer with listener TLS/SSL is not configured This policy identifies AWS Elastic Load Balancers which have non-secure listeners. As load balancers handle all incoming requests and route the traffic accordingly, the listeners on the load balancers should always receive traffic over a secure channel with a valid SSL certificate configured. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. On the Listeners tab, Click the 'Edit' button under the available listeners\n7. In the Load Balancer Protocol, Select 'HTTPS (Secure HTTP)' or 'SSL (Secure TCP)'\n8. In the SSL Certificate column, click 'Change'\n9. On 'Select Certificate' popup dialog, Choose a certificate from ACM or IAM or upload a new certificate based on requirement and Click on 'Save'\n10. Back to the 'Edit listeners' dialog box, review the secure listeners configuration, then click on 'Save'.
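A minimal Python/boto3 sketch that flags non-secure Classic ELB listeners, approximating the check above; the region is a placeholder.

```python
import boto3

# Classic ELBs are served by the "elb" client (Application/Network LBs use "elbv2").
elb = boto3.client("elb", region_name="us-east-1")

for page in elb.get_paginator("describe_load_balancers").paginate():
    for lb in page["LoadBalancerDescriptions"]:
        for desc in lb["ListenerDescriptions"]:
            listener = desc["Listener"]
            protocol = listener["Protocol"]
            cert = listener.get("SSLCertificateId")
            # Flag plain HTTP/TCP listeners, and secure listeners missing a certificate.
            if protocol in ("HTTP", "TCP") or (protocol in ("HTTPS", "SSL") and not cert):
                print(f"{lb['LoadBalancerName']}: insecure listener {protocol}:{listener['LoadBalancerPort']}")
```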
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'addonsConfig.httpLoadBalancing.disabled equals true'```
GCP Kubernetes Engine Clusters have HTTP load balancing disabled This policy identifies GCP Kubernetes Engine Clusters which have disabled HTTP load balancing. HTTP/HTTPS load balancing provides global load balancing for HTTP/HTTPS requests destined for your instances. Enabling HTTP/HTTPS load balancing lets Kubernetes Engine terminate unauthorized HTTP/HTTPS requests and make better context-aware load balancing decisions. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Click on EDIT button\n6. Set 'HTTP load balancing' to Enabled\n7. Click on Save.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = (properties.roleDefinition.properties.type equals CustomRole and (properties.roleDefinition.properties.permissions[?any((actions[*] equals Microsoft.Authorization/locks/delete and actions[*] equals Microsoft.Authorization/locks/read and actions[*] equals Microsoft.Authorization/locks/write) or actions[*] equals Microsoft.Authorization/locks/*)] exists) and (properties.roleDefinition.properties.permissions[?any(notActions[*] equals Microsoft.Authorization/locks/delete or notActions[*] equals Microsoft.Authorization/locks/read or notActions[*] equals Microsoft.Authorization/locks/write or notActions[*] equals Microsoft.Authorization/locks/*)] does not exist)) as X; count(X) less than 1```
Azure Custom Role Administering Resource Locks not assigned This policy identifies when an Azure custom role for administering Resource Locks is not assigned to any user. The resource locking feature helps prevent resources from being modified or deleted unintentionally by any user and prevents the damage this can cause. It is recommended to create a custom role for Resource Locks and assign it to an appropriate user. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Subscriptions', and select the subscription from the list where you want the custom role\n3. Select 'Access control (IAM)'\n\nIf a custom role has already been created for resource locks, go to step 16\n\n4. Click on 'Add' from top tab and select 'Add custom role'\n5. Enter 'Resource Lock Administrator' in the 'Custom role name' field\n6. Enter 'Can Administer Resource Locks' in the 'Description' field\n7. Select 'Start from scratch' for 'Baseline permissions'\n8. Click 'Next'\n9. Select 'Add permissions' from top 'Permissions' tab\n10. Search for 'Microsoft.Authorization/locks' in the 'Search for a permission' box\n11. Select 'Microsoft.Authorization'\n12. Click on 'Permission' checkbox to select all permissions\n13. Click on 'Add'\n14. Click 'Review+create'\n15. Click 'Create' to create custom role for resource locks\n16. In 'Access control (IAM)' select 'Add role assignment'\n17. Select the custom role created above from 'Role' drop down\n18. Select 'User, group, or service principal' from 'Assign access to' drop down\n19. Search for user to assign the custom role in 'Select' field\n20. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-list-attached-user-policies' AND json.rule='attachedPolicies isType Array and not attachedPolicies size == 0'```
AWS IAM policy attached to users This policy identifies IAM policies attached to users. By default, IAM users, groups, and roles have no access to AWS resources. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that IAM policies be applied directly to groups rather than to users. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Navigate to the 'IAM' service.\n3. Identify the users that were specifically assigned to the reported IAM policy.\n4. If a group with a similar policy already exists, put the user into that group. If such a group does not exist, create a new group with the relevant policy and assign the user to the group..
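A minimal Python/boto3 sketch of the remediation: moving managed policies attached directly to a user onto a group. The user and group names are hypothetical, and the group is assumed to already exist.

```python
import boto3

iam = boto3.client("iam")

USER_NAME = "example-user"    # hypothetical
GROUP_NAME = "example-group"  # hypothetical, assumed to already exist

# Move every managed policy attached directly to the user onto the group instead.
for policy in iam.list_attached_user_policies(UserName=USER_NAME)["AttachedPolicies"]:
    arn = policy["PolicyArn"]
    iam.attach_group_policy(GroupName=GROUP_NAME, PolicyArn=arn)
    iam.detach_user_policy(UserName=USER_NAME, PolicyArn=arn)

# Ensure the user inherits the permissions through group membership.
iam.add_user_to_group(GroupName=GROUP_NAME, UserName=USER_NAME)
```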
```config from cloud.resource where api.name = 'aws-emr-studio' AND json.rule = DefaultS3Location exists and DefaultS3Location contains "aws-emr-studio-" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter 'not ($.X.BucketName equals $.Y.bucketName)' ; show X;```
aws emr shadow This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cache-redis' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.enableNonSslPort is true```
Azure Cache for Redis not configured with data in transit encryption This policy identifies Azure Cache for Redis instances that are not configured with data encryption in transit. Enforcing an SSL connection helps prevent unauthorized users from reading sensitive data that is intercepted as it travels through the network, between clients/applications and cache servers, known as data in transit. It is recommended to configure in-transit encryption for Azure Cache for Redis. Refer to the link below for more details: https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure#access-ports This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure data in-transit encryption for your existing Azure Cache for Redis, follow the URL below:\nhttps://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure#access-ports\n.
```config from cloud.resource where api.name = 'aws-bedrock-custom-model' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equal ignore case CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals "null") as Y; filter '$.X.modelKmsKeyArn equals $.Y.key.keyArn'; show X;```
AWS Bedrock Custom model encrypted with Customer Managed Key (CMK) is not enabled for regular rotation This policy identifies AWS Bedrock Custom models encrypted with a Customer Managed Key (CMK) that is not enabled for regular rotation. AWS KMS (Key Management Service) allows customers to create master keys to encrypt the Custom model. Failure to enable regular rotation for the AWS Bedrock custom model key can result in potential compliance violations. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to enable the automatic rotation of the KMS key used by the AWS Bedrock Custom model\n\n1. Sign in to the AWS Management Console and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.\n2. From the left navigation pane, choose 'Custom models' under 'Foundation models'.\n3. In the 'Models' tab, select the model that is reported.\n4. Under the 'Custom model encryption KMS key' section, click on the KMS key id link.\n5. Under the 'Key rotation' tab on the navigated KMS key window, click on Edit and enable the Key rotation option under the 'Automatic key rotation' section.\n6. Provide the rotation period as per your business and compliance requirements in the 'Rotation period (in days)' section.\n7. Click on Save..
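A minimal Python/boto3 sketch of enabling rotation on the CMK that backs the custom model; the key ID and region are hypothetical placeholders.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical CMK backing the custom model

# Enable automatic rotation for the customer managed key.
kms.enable_key_rotation(KeyId=KEY_ID)

# Confirm the rotation status.
print(kms.get_key_rotation_status(KeyId=KEY_ID)["KeyRotationEnabled"])
```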
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-docdb-db-cluster' AND json.rule = Status contains available and DeletionProtection is false```
AWS DocumentDB cluster deletion protection is disabled This policy identifies AWS DocumentDB clusters for which deletion protection is disabled. Enabling deletion protection for DocumentDB clusters prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Amazon DocumentDB Dashboard\n4. Select the reported DocumentDB cluster\n5. From the top right 'Actions' drop-down list select 'Enable deletion protection'\n6. Schedule the modifications and click on 'Modify cluster'.
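A minimal Python/boto3 sketch of the same remediation; the cluster identifier and region are hypothetical placeholders.

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

CLUSTER_ID = "example-docdb-cluster"  # hypothetical

# Turn on deletion protection; ApplyImmediately avoids waiting for the maintenance window.
docdb.modify_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    DeletionProtection=True,
    ApplyImmediately=True,
)
```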
```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-sagemaker-training-job' as Y; filter '$.Y.InputDataConfig[*].DataSource.S3DataSource.bucketName intersects $.X.bucketName'; show X;```
AWS S3 bucket is utilized for AWS Sagemaker training job data This policy identifies AWS S3 buckets utilized for AWS Sagemaker training job data. S3 buckets store the datasets required for training machine learning models in Sagemaker. Proper configuration and access control are essential to ensure the security and integrity of the training data. Improperly configured S3 buckets used for AWS Sagemaker training data can lead to unauthorized access, data breaches, and potential loss of sensitive information. It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Sagemaker training data and ensure compliance. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the Sagemaker training job, please refer to the following link for recommended best practices:\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html.
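As one example of hardening such a bucket, the Python/boto3 sketch below blocks public access and enforces default encryption; the bucket name is a hypothetical placeholder and these settings are a starting point, not a complete security baseline.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "example-sagemaker-training-bucket"  # hypothetical

# Block all forms of public access on the training-data bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enforce default server-side encryption (SSE-KMS) for objects written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}
        }]
    },
)
```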
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equals Succeeded and networkRuleSet.defaultAction equal ignore case Allow and properties.privateEndpointConnections[*] is empty```
Azure Storage account is not configured with private endpoint connection This policy identifies Storage accounts that are not configured with a private endpoint connection. Azure Storage account private endpoints can be configured using Azure Private Link. Private Link allows users to access an Azure Storage account from within the virtual network or from any peered virtual network. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. It is recommended to configure Private Endpoint Connection to Storage account. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer following URL for configuring Private endpoints on your Storage account:\nhttps://learn.microsoft.com/en-us/azure/private-link/create-private-endpoint-portal?#create-a-private-endpoint.
```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = ipPermissions[*] is empty or ipPermissionsEgress[*] is empty as Y; filter '$.X.securityGroups[*] contains $.Y.groupId'; show X;```
cloned copy - RLP-93423 - 2 This policy identifies Elastic Load Balancer v2 (ELBv2) load balancers that do not have security groups with a valid inbound or outbound rule. A security group with no inbound/outbound rule will deny all incoming/outgoing requests. ELBv2 security groups should have at least one inbound and one outbound rule; an ELBv2 with no inbound/outbound permissions will deny all traffic incoming/outgoing to/from any resources configured behind that ELBv2; in other words, the ELBv2 is useless without inbound and outbound permissions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on each security group, it will open Security Group properties in a new tab in your browser.\n6. To check the inbound rules, click on 'Inbound Rules'\n7. If there are no rules, click on 'Edit rules', add an inbound rule according to your ELBv2 functional requirement.\n8. To check the outbound rules, click on 'Outbound Rules'\n9. If there are no rules, click on 'Edit rules', add an outbound rule according to your ELBv2 functional requirement.\n10. Click on 'Save'.
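A rough Python/boto3 sketch that approximates this check by flagging ELBv2 load balancers whose security groups have no inbound or no outbound rules; the region is a placeholder.

```python
import boto3

region = "us-east-1"
elbv2 = boto3.client("elbv2", region_name=region)
ec2 = boto3.client("ec2", region_name=region)

for page in elbv2.get_paginator("describe_load_balancers").paginate():
    for lb in page["LoadBalancers"]:
        group_ids = lb.get("SecurityGroups", [])
        if not group_ids:
            continue
        groups = ec2.describe_security_groups(GroupIds=group_ids)["SecurityGroups"]
        for sg in groups:
            # A security group with no inbound or no outbound rules blocks all traffic
            # in that direction, making the load balancer effectively unusable.
            if not sg["IpPermissions"] or not sg["IpPermissionsEgress"]:
                print(f"{lb['LoadBalancerName']}: security group {sg['GroupId']} has no inbound or outbound rules")
```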

Dataset Card for cloud_posture_checks

A Prisma Cloud curated dataset of known misconfiguration checks covering compliance and security issues tracked across its customer base.

Dataset Details

Dataset Description

This dataset provides the specific JSON rules for all known misconfiguration states relevant to cloud security across multiple cloud providers. It is useful for exposing this data to LLMs so they can reason about it and support free-form interaction to better understand cloud security.

  • Curated by: Krishnan Narayan [[email protected]]
  • Funded by: Palo Alto Networks.
  • License: Apache 2.0

Dataset Card Authors

Krishnan Narayan
