query,description "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case ""Ready"" and require_secure_transport.value equal ignore case ""ON"" and (tls_version.value does not equal ignore case ""TLSV1.2"" and tls_version.value does not equal ignore case ""TLSV1.3"" and tls_version.value does not equal ignore case ""TLSV1.2,TLSV1.3"" and tls_version.value does not equal ignore case ""TLSV1.3,TLSV1.2"")```","Azure MySQL database flexible server using insecure TLS version This policy identifies Azure MySQL database flexible servers that are using an insecure TLS version. Enforcing TLS connections between database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. As a security best practice, it is recommended to use the latest TLS version for Azure MySQL database flexible server. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to Azure Database for MySQL flexible servers dashboard\n3. Click on the reported MySQL flexible server\n4. Click on 'Server parameters' under 'Settings'\n5. In the search box, type in 'require_secure_transport' and make sure VALUE is set to 'ON' if it is not already set.\n6. In the search box, type in 'tls_version' and set VALUE to TLSV1.2 or above for tls_version." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-flexible-server' AND json.rule = properties.state equal ignore case Ready and properties.network.publicNetworkAccess equal ignore case Enabled and firewallRules[?any(properties.startIpAddress equals 0.0.0.0 and properties.endIpAddress equals 255.255.255.255)] exists```,"Azure PostgreSQL database flexible server configured with overly permissive network access This policy identifies Azure PostgreSQL database flexible servers that are configured with overly permissive network access. It is highly recommended to create the PostgreSQL database flexible server with private access (VNet Integration) to help secure access to the server; alternatively, with a Firewall rule you can restrict access further to only a set of IPv4 addresses or IPv4 address ranges. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the PostgreSQL database flexible server is intended to be accessed by authorized public IPs, restrict IP addresses to a known list and make sure the IP range '+ Add 0.0.0.0 - 255.255.255.255' is not in the Firewall rules. \nTo add or remove IPs, refer to the below URL:\nhttps://docs.microsoft.com/en-gb/azure/postgresql/flexible-server/how-to-manage-firewall-portal#manage-existing-firewall-rules-through-the-azure-portal\n\nTo create a new PostgreSQL database flexible server with Private access (VNet Integration), refer to the below URL:\nhttps://docs.microsoft.com/en-gb/azure/postgresql/flexible-server/quickstart-create-server-portal\n\nNote: Once the PostgreSQL database flexible server is created, you can't change the connectivity method. For example, if you select Public access (allowed IP addresses) when you create the server, you can't change to Private access (VNet Integration) after the server is created." 
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals Udp or protocol equals Icmp or protocol equals *) and ((destinationPortRange exists and destinationPortRange is not member of (20, 21, 22, 23, 25, 53, 80, 135, 137, 138, 443, 445, 1433, 1434, 3306, 3389, 4333, 5432, 5500, 5900, *)) or (destinationPortRanges is not empty and destinationPortRanges[*] is not member of (20, 21, 22, 23, 25, 53, 80, 135, 137, 138, 443, 445, 1433, 1434, 3306, 3389, 4333, 5432, 5500, 5900, *))) )] exists```","Azure Network Security Group allows all traffic on ports which are not commonly used This policy identifies Azure Network Security Group which allow all traffic on ports which are not commonly used. Ports excluded from this policy are 20, 21, 22, 23, 25, 53, 80, 135, 137, 138, 443, 445, 1433, 1434, 3306, 3389, 4333, 5432, 5500 and 5900. As a best practice, restrict ports solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = encryptionConfig does not exist or (encryptionConfig exists and encryptionConfig[*].provider.keyArn does not exist and encryptionConfig[*].resources[*] does not contain secrets)```,"AWS EKS cluster does not have secrets encryption enabled This policy identifies AWS EKS clusters that do not have secrets encryption enabled. AWS EKS cluster secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with direct access to etcd or with API access can retrieve or modify the secrets. Using secrets encryption for your Amazon EKS cluster allows you to protect sensitive information such as passwords and API keys using Kubernetes-native APIs. It is recommended to enable secrets encryption to ensure its security and reduce the risk of unauthorized access or data breaches. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable secrets encryption on existing Azure EKS clusters, follow the below URL:\nhttps://docs.aws.amazon.com/eks/latest/userguide/enable-kms.html." 
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'monitoringService does not exist or monitoringService equals none'```,"GCP Kubernetes Engine Clusters have Cloud Monitoring disabled This policy identifies Kubernetes Engine Clusters which have Cloud Monitoring disabled. Enabling Cloud Monitoring lets Kubernetes Engine monitor signals and build operations in the clusters. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to 'Kubernetes Engine' (Left Panel)\n3. Select 'Clusters'\n4. From the list of clusters, click on the reported cluster\n5. Under 'Features', click on the edit button (pencil icon) in front of 'Cloud Monitoring'\n6. In the 'Edit Cloud Monitoring' dialog, enable the 'Enable Cloud Monitoring' checkbox\n7. Click on 'Save Changes'." "```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = instanceId contains ""[RantiAWS"" ```","Chaitu EC2 instance policy This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains ""compute@developer.gserviceaccount.com"" and roles[*] contains ""roles/editor"" as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = (status equals RUNNING and name does not start with ""gke-"") and serviceAccounts[?any( email contains ""compute@developer.gserviceaccount.com"")] exists as Y; filter '$.Y.serviceAccounts[*].email contains $.X.user'; show Y;```","GCP VM instance configured with default service account This policy identifies GCP VM instances configured with the default service account. To defend against privilege escalations if your VM is compromised and prevent an attacker from gaining access to all of your project, it is recommended to not use the default Compute Engine service account because it has the Editor role on the project. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP Console\n2. Navigate to 'Compute Engine' and click on 'VM instances'\n3. Search for the alerted instance and click on the instance name\n4. To make a change first we have to stop the instance; click on 'STOP' from the top menu\n5. Click on 'EDIT' and go to the 'Service account' section\n6. From the dropdown select a non-default service account \n7. Click on 'Save'\n8. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = 'type equals data and deleteWithInstance is true'```,"Alibaba Cloud data disk is configured with release disk with instance feature This policy identifies data disks which are configured with the release disk with instance feature. As a best practice, disable the release disk with instance feature to prevent irreversible data loss from accidental or malicious operations. Note: This attribute applies to data disks only. However, it can only restrict the manual release operation, not the release operation by Alibaba Cloud. 
This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'\n4. Select the reported data disk\n5. Select More and click on Modify Disk Property\n6. On Modify Disk Property popup window, Uncheck 'Release Disk with Instance' checkbox\n7. Click on 'OK'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","Info of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to Get/Read/List bucket operations on your S3 bucket if they can guess the namespace. S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. 
Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.encryption.status equal ignore case disabled```,"Azure Container Registry not encrypted with Customer Managed Key (CMK) This policy identifies Azure Container Registries that are not encrypted with Customer-Managed Keys (CMK). By default, Azure Container Registry encrypts data at rest with Microsoft-managed keys. However, for enhanced control, regulatory compliance, and improved security, customer-managed keys enable organizations to encrypt Azure Container Registry data using Azure Key Vault keys that they create, own, and manage. Using CMK ensures that the encryption process aligns with organizational policies, allowing complete control over key lifecycle management, including rotation, access management, and retirement. As a security best practice, it is recommended to encrypt Azure Container Registries with Customer-Managed Keys (CMK). This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: CMK can only be enabled during the creation of a new Container Registry. Ensure the registry is on the Premium service tier, as CMK is only supported at this level.\n\n1. Create a new Container Registry\n2. Navigate to the Encryption tab during the creation process\n3. Select the option to enable Customer-Managed Key\n4. Fill in all other required details to complete the registry setup." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy akceq This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-services-list' AND json.rule = services[?any(name contains containerscanning.googleapis.com and state contains ENABLED)] does not exist```,"GCP GCR Container Vulnerability Scanning is disabled This policy identifies GCP accounts where GCR Container Vulnerability Scanning is not enabled. GCR Container Analysis and other third party products allow images stored in GCR to be scanned for known vulnerabilities. Vulnerabilities in software packages can be exploited by hackers or malicious users to obtain unauthorized access to local cloud resources. It is recommended to enable vulnerability scanning for images stored in Google Container Registry. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. For the reported account, navigate to the GCP service 'Container Registry'(Left Panel)\n3. Select the tab 'Settings'\n4. To enable the vulnerability scanning, click on the 'TURN ON' button.." 
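For the 'AWS S3 bucket publicly readable' policy above, a hedged boto3 sketch of one possible remediation path is shown below; the bucket name is a placeholder, and you should confirm that no legitimate public access or static-website hosting depends on the bucket before applying it.

```python
# Minimal sketch for the S3 public-read policy above. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder

# Block public ACLs and public bucket policies at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Optionally clear a public ACL by resetting the bucket ACL to private.
s3.put_bucket_acl(Bucket=bucket, ACL="private")
```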
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = state_description equal ignore case active and secret_type equal ignore case username_password and ( rotation.auto_rotate is false or (rotation.unit equal ignore case month and rotation.interval > 3) or (rotation.unit equal ignore case day and rotation.interval > 90))```,"IBM Cloud Secrets Manager user credentials with rotation policy more than 90 days This policy identifies IBM Cloud Secrets Manager user credentials with a rotation policy of more than 90 days. IBM Cloud Secrets Manager allows you to securely store and manage user credentials (username and password) for accessing external services or applications. It provides a centralised way to store secrets, control their lifecycle, set expiration dates, and implement rotation policies. User credentials should be rotated to ensure that data cannot be accessed with an old password, which might have been lost, cracked, or stolen. It is recommended to establish a rotation policy for user credentials, ensuring that they are regularly rotated within a period of less than 90 days. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set a rotation policy for user credentials, follow the below steps:\n\n1. Log in to the IBM Cloud Console\n2. Click on the menu icon and navigate to 'Resource list'. From the list of resources, select the secret manager instance in which the reported secret resides under the security section.\n3. Select the secret.\n4. Under the 'Rotation' tab, enable 'Automatic secret rotation'.\n5. Set 'Rotation Interval' to less than 90 days.\n6. Set 'General password settings' according to the requirements.\n7. Click on 'Update'." ```config from cloud.resource where api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or maxPasswordAge !isType Integer or $.maxPasswordAge > 90 or maxPasswordAge equals 0'```,"AWS IAM password policy does not expire in 90 days This policy identifies the IAM policies which do not have password expiration set to 90 days. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. Check 'Enable password expiration' and enter a password expiration period.\n4. Click on 'Apply password policy'." 
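A minimal boto3 sketch for the IAM password-expiration policy above, assuming account-level credentials; note that update_account_password_policy replaces the entire policy, so any existing settings you rely on should be restated.

```python
# Hedged example for the IAM password-policy expiration check above.
import boto3

iam = boto3.client("iam")

try:
    policy = iam.get_account_password_policy()["PasswordPolicy"]
except iam.exceptions.NoSuchEntityException:
    policy = {}  # no custom policy configured yet

max_age = policy.get("MaxPasswordAge", 0)
if max_age == 0 or max_age > 90:
    # This call replaces the whole policy; restate other settings you depend on.
    iam.update_account_password_policy(
        MinimumPasswordLength=policy.get("MinimumPasswordLength", 14),
        RequireNumbers=True,
        MaxPasswordAge=90,  # expire passwords within 90 days
    )
```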
```config from cloud.resource where api.name = 'azure-machine-learning-workspace' AND json.rule = 'properties.provisioningState equal ignore case Succeeded and properties.hbiWorkspace is true and properties.storageAccount exists' as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = 'totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist)' as Y; filter '$.X.properties.storageAccount contains $.Y.id'; show Y;```,"Azure Storage Account storing Machine Learning workspace high business impact data is publicly accessible This policy identifies Azure Storage Accounts storing Machine Learning workspace high business impact data that are publicly accessible. Azure Storage account stores machine learning artifacts such as job logs. By default, this storage account is used when you upload data to the workspace. An attacker could exploit a publicly accessible storage account to get Machine Learning workspace high business impact data logs and could breach into the system by leveraging the exposed data. It is recommended to restrict storage account access to only the machine learning services as per business requirement. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict Storage account access, refer to the below URL:\nhttps://learn.microsoft.com/en-gb/azure/storage/blobs/anonymous-read-access-configure?tabs=portal." "```config from cloud.resource where api.name = 'aws-iam-list-groups' as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any(Effect equals Allow and Action equals * and Resource equals * )] exists as Y; filter ""($.X.inlinePolicies[*].policyDocument.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action any equal * ) or ($.X.attachedPolicies[*].policyArn intersects $.Y.policyArn)""; show X;```","AWS IAM Groups with administrator access permissions This policy identifies AWS IAM groups which have administrator access permission set. This would allow all users under this group to have administrative privileges. As a security best practice, it is recommended to grant least privilege access like granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Navigate to IAM service\n3. Click on Groups\n4. Click on reported IAM group\n5. Under 'Managed Policies', click on 'Detach Policy' for the policy having excessive permissions and assign a limited permission policy as required for a particular group\nOR\n6. Under 'Inline Policies' click on 'Edit Policy' or 'Remove Policy' and assign a limited permission as required for a particular group." 
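An illustrative boto3 sketch for the IAM group administrator-access policy above; it only looks for the AWS managed AdministratorAccess policy (inline policies are not inspected), and the detach call is left commented out as an assumption to be reviewed before running.

```python
# Sketch: list IAM groups with the AWS managed AdministratorAccess policy attached.
import boto3

iam = boto3.client("iam")
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

for group in iam.list_groups()["Groups"]:
    attached = iam.list_attached_group_policies(GroupName=group["GroupName"])["AttachedPolicies"]
    if any(p["PolicyArn"] == ADMIN_POLICY_ARN for p in attached):
        print(f"Group with admin access: {group['GroupName']}")
        # After assigning a least-privilege replacement policy, detach the admin policy:
        # iam.detach_group_policy(GroupName=group["GroupName"], PolicyArn=ADMIN_POLICY_ARN)
```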
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-certificate' AND json.rule = '_DateTime.ageInDays(validToDate) > -1'```,"AWS Database Migration Service (DMS) has expired certificates This policy identifies expired certificates that are in AWS Database Migration Service (DMS). AWS Database Migration Service (DMS) Certificate service is the preferred tool to provision, manage, and deploy your DMS endpoint certificates. As a best practice, it is recommended to delete expired certificates. For more details: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL.ManagingCerts This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to 'AWS DMS' service\n4. Click on 'Certificates', Choose the reported certificate\n5. Make sure the reported certificate is already out of date from the 'Valid to' field\n6. Click on 'Delete', to delete the expired certificate.." ```config from cloud.resource where api.name = 'gcloud-container-describe-clusters' as X; config from cloud.resource where api.name = 'gcloud-compute-firewall-rules-list' as Y; filter '$.Y.network contains $.X.network and $.Y.sourceRanges contains 0.0.0.0/0 and $.Y.direction contains INGRESS and $.Y.allowed exists'; show Y;```,"GCP Kubernetes Engine Clusters network firewall inbound rule overly permissive to all traffic This policy identifies Firewall rules attached to the cluster network which allows inbound traffic on all protocols from the public internet. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire cluster network. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to VPC network (Left Panel)\n3. Select Firewall rules\n4. Click on the reported firewall rule\n5. Click on the 'EDIT' button\n6. Change the 'Source IP ranges' other than '0.0.0.0/0'\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway' AND json.rule = (['properties.webApplicationFirewallConfiguration'] does not exist and ['properties.firewallPolicy'] does not exist) or (['properties.webApplicationFirewallConfiguration'].enabled is false and ['properties.firewallPolicy'] does not exist)```,"Azure Application Gateway does not have the Web application firewall (WAF) enabled This policy identifies Azure Application Gateways that do not have Web application firewall (WAF) enabled. As a best practice, enable WAF to manage and protect your web applications behind the Application Gateway from common exploits and vulnerabilities. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'Application gateways', and select the application gateway you need to modify\n3. Select 'Web Application Firewall' under 'Settings'\n4. Change the 'Tier' to 'WAF' or 'WAF V2' and 'Firewall status' to 'Enabled'\n5. 'Save' your changes." 
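For the AWS DMS expired-certificate policy earlier on this row, a hedged boto3 cleanup sketch might look like the following; deletion is commented out because removing a certificate is irreversible and the certificate must not be referenced by any endpoint.

```python
# Sketch: list DMS certificates whose validity window has already ended.
from datetime import datetime, timezone
import boto3

dms = boto3.client("dms")

for cert in dms.describe_certificates()["Certificates"]:
    valid_to = cert.get("ValidToDate")
    if valid_to and valid_to < datetime.now(timezone.utc):
        print(f"Expired certificate: {cert['CertificateIdentifier']}")
        # Confirm no endpoint uses it, then delete:
        # dms.delete_certificate(CertificateArn=cert["CertificateArn"])
```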
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains CreateVpc and $.X.filterPattern contains DeleteVpc and $.X.filterPattern contains ModifyVpcAttribute and $.X.filterPattern contains AcceptVpcPeeringConnection and $.X.filterPattern contains CreateVpcPeeringConnection and $.X.filterPattern contains DeleteVpcPeeringConnection and $.X.filterPattern contains RejectVpcPeeringConnection and $.X.filterPattern contains AttachClassicLinkVpc and $.X.filterPattern contains DetachClassicLinkVpc and $.X.filterPattern contains DisableVpcClassicLink and $.X.filterPattern contains EnableVpcClassicLink) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for VPC changes This policy identifies the AWS regions which do not have a log metric filter and alarm for VPC changes. Monitoring changes to VPC will help ensure that resources and services are not unintentionally exposed. It is recommended that a metric filter and alarm be established for changes made to VPCs. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." 
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and listeners.* is not empty and listeners.*.protocol equal ignore case HTTP and ruleSets.*.items[?any(redirectUri.protocol equal ignore case https)] does not exist```,"OCI Load balancer listener allows connection requests over HTTP This policy identifies Oracle Cloud Infrastructure (OCI) Load Balancer listeners that accept connection requests over HTTP instead of HTTPS or HTTP/2 or TCP protocols. Accepting connections over HTTP can expose data to potential interception and unauthorized access, as HTTP traffic is transmitted in plaintext. OCI Load Balancer allows all traffic to be submitted over HTTPS, HTTP/2, or TCP, ensuring all communications are encrypted. These protocols provide encrypted communication channels, safeguarding sensitive information from eavesdropping, tampering, and man-in-the-middle attacks. As a security best practice, it is recommended to configure the listeners to accept connections through HTTPS, HTTP/2, or TCP, thereby enhancing the protection of data in-transit. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remediate, there are 2 options:\n- Update the existing Load balancer listener to redirect HTTP traffic to HTTPS by creating a Rule set.\n- Delete the existing associated listener and create a new listener with a protocol other than HTTP.\n\nTo redirect Load balancer HTTP traffic to HTTPS, follow:\n1. Log in to OCI console\n2. Open Networking -> Load Balancers\n3. Click on the reported load balancer to open the details page\n4. From the Resources pane, select 'Rule Sets' and then click on 'Create Rule Set' button\n5. Choose a name for the Rule set and select 'Specify URL Redirect Rules'\n6. In Redirect to section: Set 'Protocol' to HTTPS and 'Port' to 443; choose other parameters as per your requirement.\n7. Click on 'Create'\n\nTo create a new listener with a protocol other than HTTP, follow:\n1. Log in to OCI console\n2. Open Networking -> Load Balancers\n3. Click on the reported load balancer to open the details page\n4. From the Resources pane, select 'Listeners' and then click on 'Create Listener' button\n5. In the Create Listener dialog, select a 'Protocol' other than HTTP and other parameters as per your requirement.\n6. Click on 'Create'\n\nTo delete existing listener, follow:\nhttps://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managinglisteners_topic-Deleting_Listeners.htm." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.adaptiveApplicationControlsMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with ""ASC Default""))'```","Azure Microsoft Defender for Cloud adaptive application controls monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have adaptive application controls monitoring set to disabled. Adaptive Application Controls will make sure that only certain applications can run on your VMs in Microsoft Azure. This will prevent any malicious, unwanted, or unsupported software on the VMs. 
This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Adaptive application controls for defining safe applications should be enabled on your machines' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals ""ACTIVE"" and shieldedInstanceConfig.enableVtpm is false```","GCP Vertex AI Workbench user-managed notebook has vTPM disabled This policy identifies GCP Vertex AI Workbench user-managed notebooks that have the Virtual Trusted Platform Module (vTPM) feature disabled. Virtual Trusted Platform Module (vTPM) validates guest VM pre-boot and boot integrity and offers key generation and protection. The vTPM’s root keys and the keys it generates can’t leave the vTPM, thus gaining enhanced protection from compromised operating systems or highly privileged project admins. It is recommended to enable virtual TPM device on supported virtual machines to facilitate measured Boot and other OS security features that require a TPM. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service (Left Panel)\n3. Under 'Notebooks', go to 'Workbench'\n4. Open the 'USER-MANAGED NOTEBOOKS' tab\n5. Click on the alerting notebook\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-vpc-peering-connections' AND json.rule = $.accepterVpcInfo.ownerId does not equal $.requesterVpcInfo.ownerId and $.status.code equals active```,"AWS VPC allows unauthorized peering This policy identifies the VPCs which have unauthorized peering. The recommended best practice is to disallow VPC peering between two VPCs from different AWS accounts, as this potentially enables unauthorized access to private resources. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to AWS VPC console at https://console.aws.amazon.com/vpc/\n3. In the left navigation panel, select Peering Connection\n4. Choose the reported Peering Connection\n5. Click on Actions and select 'Delete VPC Peering Connection'\n6. Click on Yes, Delete." 
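A small boto3 sketch for the unauthorized VPC-peering policy above; it flags active peering connections whose accepter and requester accounts differ, and leaves the delete call commented out as an assumption to be confirmed against your peering approvals.

```python
# Sketch: flag active cross-account VPC peering connections in the configured region.
import boto3

ec2 = boto3.client("ec2")

for pcx in ec2.describe_vpc_peering_connections()["VpcPeeringConnections"]:
    if pcx["Status"]["Code"] != "active":
        continue
    if pcx["AccepterVpcInfo"]["OwnerId"] != pcx["RequesterVpcInfo"]["OwnerId"]:
        print(f"Cross-account peering: {pcx['VpcPeeringConnectionId']}")
        # After confirming the peering is unauthorized:
        # ec2.delete_vpc_peering_connection(VpcPeeringConnectionId=pcx["VpcPeeringConnectionId"])
```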
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = deleteRetentionPolicy.blob.enabled is false and (kind does not equal ignore case FileStorage)```,"Azure Storage account soft delete is disabled This policy identifies Azure Storage accounts which have soft delete disabled. Azure Storage often contains important access logs, financial data, personal and other secret information; if this data is accidentally deleted by a user or application, it could cause data loss or data unavailability. It is recommended to enable the soft delete setting in Azure Storage accounts. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Soft delete on your storage account, follow the below URL:\nhttps://learn.microsoft.com/en-gb/azure/storage/blobs/soft-delete-blob-enable?tabs=azure-portal#enable-blob-soft-delete." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = publicAccessType does not equal NoPublicAccess```,"OCI Object Storage bucket is publicly accessible This policy identifies the OCI Object Storage buckets that are publicly accessible. Monitoring and alerting on publicly accessible buckets will help in identifying changes to the security posture and thus reduces risk for sensitive data being leaked. It is recommended that no bucket be publicly accessible. This is applicable to oci cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on the Edit Visibility\n5. Select Visibility as Private\n6. Click Save Changes." "```config from cloud.resource where cloud.accountgroup = 'Flowlog-sol' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains ""sol-test"" ```","Sol-test config policy This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = vpcoptions.securityGroupIds[*] exists as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[*].ipv4Ranges[*].cidrIp equals 0.0.0.0/0 or ipPermissions[*].ipv6Ranges[*].cidrIpv6 equals ::/0) as Y; filter '$.X.vpcoptions.securityGroupIds[*] contains $.Y.groupId'; show Y;```,"AWS OpenSearch attached security group overly permissive to all traffic This policy identifies AWS OpenSearch attached Security groups that are overly permissive to all traffic. Security groups enforce IP-based access policies for OpenSearch. As a best practice, restrict traffic solely from known static IP addresses or CIDR range. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. If the Security Group reported indeed needs to restrict all traffic, follow the instructions below:\n1. 
Log in to the AWS console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on 'Inbound Rules'\n5. Remove the rule which has the 'Source' value as 0.0.0.0/0 or ::/0." ```config from cloud.resource where api.name = 'aws-ecs-cluster' and json.rule = configuration.executeCommandConfiguration.logConfiguration.s3EncryptionEnabled exists and configuration.executeCommandConfiguration.logConfiguration.s3EncryptionEnabled is false```,"AWS ECS Cluster S3 Log Encryption Disabled This policy alerts you when an AWS ECS cluster is detected with S3 log encryption disabled, potentially exposing sensitive data in your logs. By ensuring that the s3EncryptionEnabled field is set to true, you can enhance the security of your cloud environment by protecting log data from unauthorized access and maintaining compliance with data protection regulations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(3306,3306) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on MySQL DB port (3306) This policy identifies GCP Firewall rules which allow all inbound traffic on MySQL DB port (3306). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the MySQL DB port (3306) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any((Condition.ForAnyValue:IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0 or Condition.ForAnyValue:IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith lambda:)] exists```,"AWS Lambda IAM policy overly permissive to all traffic This policy identifies AWS Lambda IAM policies that are overly permissive to all traffic. It is recommended that the Lambda should be granted access restrictions so that only authorized users and applications have access to the service. For more details: https://docs.aws.amazon.com/lambda/latest/dg/security-iam.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to AWS console\n2. Goto IAM Services\n3. Click on 'Policies' in left hand panel\n4. 
Search for the Policy for which the Alert is generated and click on it\n5. Under Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'lambda' Service, click to expand and perform the following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-lambda-get-region-summary' AND json.rule = 'lambdaCodeSize.size > 67500000'```,"AWS Lambda nearing availability code storage limit This policy identifies Lambda code storage that is nearing the availability limit per region. AWS provides a reasonable starting amount of compute and storage resources that you can use to run and store functions. As a best practice, it is recommended to either remove the functions that you no longer use or reduce the code size of the functions that you do not want to remove. It will also help you avoid unexpected charges on your bill. NOTE: As per https://docs.aws.amazon.com/lambda/latest/dg/limits.html, at the time of writing, the Lambda account limit per region is 75 GB. This policy will trigger an alert if the Lambda account code storage per region has reached 90% (i.e. 67500000 KB) of the resource availability limit allocated. If you need more Lambda account code storage size per region, you can contact AWS for a service limit increase. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the AWS Lambda Dashboard\n4. Click on 'Functions', choose each Lambda function\n5. Either remove the functions that you no longer use or reduce the code size of the functions that you do not want to remove." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' AND json.rule = restrictions.apiTargets does not exist```,"GCP API key not restricting any specific API This policy identifies GCP API keys that are not restricting any specific APIs. API keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API keys to use (call) only APIs required by an application. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Google Cloud console\n2. Navigate to 'Credentials', Under service 'APIs & Services'\n3. In the section 'API Keys', Click on the reported 'API Key Name'\n4. In the 'Key restrictions' section go to 'API restrictions'.\n5. Select the 'Restrict key' and from the drop-down, choose an API.\n6. Click 'SAVE'.\nNote: Do not set 'API restrictions' to 'Google Cloud APIs', as this option allows access to all services offered by Google Cloud." 
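For the Lambda code-storage policy above, a rough boto3 check might compare the region's total code size against 90% of the limit reported by the account settings API; thresholds mirror the policy text and the region is whichever one the client is configured for.

```python
# Sketch: warn when Lambda code storage in the current region nears 90% of the limit.
import boto3

lam = boto3.client("lambda")

settings = lam.get_account_settings()
total_bytes = settings["AccountUsage"]["TotalCodeSize"]
limit_bytes = settings["AccountLimit"]["TotalCodeSize"]

if total_bytes >= 0.9 * limit_bytes:
    print(f"Code storage at {total_bytes / limit_bytes:.0%} of the regional limit")
```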
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'databaseVersion contains SQLSERVER and state equals RUNNABLE and (settings.databaseFlags[*].name does not contain ""remote access"" or settings.databaseFlags[?any(name contains ""remote access"" and value contains on)] exists)'```","GCP SQL server instance database flag remote access is not set to off This policy identifies GCP SQL server instances for which database flag remote access is not set to off. The remote access option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. 'Remote access' functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by off-loading query processing to a target. It is recommended to set the remote access database flag for SQL Server instance to off. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', click on 'ADD FLAG' in 'New database flag' section, choose the flag 'remote access' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Flags and parameters', choose the flag 'remote access' and set the value as 'off'\n6. Click on DONE\n7. Click on SAVE." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and not(customerManagedKey contains cryptoKeys)```,"GCP Memorystore for Redis instance not encrypted with CMEK This policy identifies Memorystore for Redis instances not encrypted with CMEK. GCP Memorystore for Redis is a fully managed in-memory data store that simplifies Redis deployment and scaling while ensuring high availability and low-latency access. By using CMEK with Redis instance, you retain complete control over the encryption keys protecting your sensitive data, ensuring that only authorized users with access to these keys can decrypt and access the information. Without CMEK, data is encrypted with Google-managed keys, which may not provide the level of control required for handling sensitive data in certain industries. It is recommended to encrypt Redis instance data using a Customer-Managed Encryption Key (CMEK). This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Encryption cannot be changed for existing Memorystore for Redis instances. A new Memorystore for Redis instance should be created to use CMEK for encryption.\n\nTo create a new Memorystore for Redis instance with CMEK encryption, please refer to the steps below:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the 'Memorystore for Redis' page\n2. Click on the 'CREATE INSTANCE'\n3. Provide all the other details as per the requirements\n4. Under 'Security', under 'Encryption' select the 'Cloud KMS key' checkbox\n5. Select the KMS key you prefer\n5. Click on the 'CREATE INSTANCE'.." 
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = serverBlobAuditingPolicy.properties.state equal ignore case Enabled and serverBlobAuditingPolicy.properties.storageEndpoint is not empty and (serverBlobAuditingPolicy.properties.retentionDays does not equal 0 and serverBlobAuditingPolicy.properties.retentionDays < 91)```,"Azure SQL Server audit log retention is less than 91 days Audit Logs can help you find suspicious events, unusual activity, and trends. Auditing the SQL server, at the server-level, allows you to track all existing and newly created databases on the instance. This policy identifies SQL servers which do not retain audit logs for more than 90 days. As a best practice, configure the audit logs retention time period to be greater than 90 days. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'SQL servers' dashboard\n3. Select the SQL server instance you want to modify\n4. Select 'Auditing', and verify that 'Enable Azure SQL Auditing' is set\n5. If Storage is selected, expand 'Advanced properties'\n6. Set the Retention (days) setting is greater than 90 days or 0 for unlimited retention.\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireNumbers is false or requireNumbers does not exist'```,"AWS IAM password policy does not have a number Checks to ensure that IAM password policy requires a number. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. check 'Require at least one number'.\n4. Click on 'Apply password policy'." ```config from cloud.resource where api.name = 'ibm-vpc-block-storage-volume' as X; config from cloud.resource where api.name = 'ibm-key-protect-registration' as Y;filter 'not($.Y.resourceCrn equals $.X.crn)' ; show X;```,"API testing This is applicable to ibm cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = ""configurations.value[?(@.name=='log_connections')].properties.value equals OFF or configurations.value[?(@.name=='log_connections')].properties.value equals off""```","Azure PostgreSQL database server with log connections parameter disabled This policy identifies PostgreSQL database servers for which server parameter is not set for log connections. Enabling log_connections helps PostgreSQL Database to log attempted connection to the server, as well as successful completion of client authentication. Log data can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. This is applicable to azure cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'settings’ block\n5. From the list of parameters find 'log_connections' and set it to 'on'\n6. Click on 'Save' button from top menu to save the change.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and (Action contains SNS:Publish or Action contains sns:Publish) and (Condition does not exist or Condition all empty))] exists```,"AWS SNS topic policy overly permissive for publishing This policy identifies AWS SNS topics that have SNS policy overly permissive for publishing. When a message is published, Amazon SNS attempts to deliver the message to the subscribed endpoints. To protect these messages from attackers and unauthorized accesses, permissions should be given to only authorized users. For more details: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#implement-least-privilege-access This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. Add the restrictive 'Condition' statement to the JSON editor to specify who can publish messages to the topic.\n9. Click on 'Save changes'." ```config from cloud.resource where api.name = 'gcloud-domain-users' AND json.rule = isAdmin is false and isEnrolledIn2Sv is false and archived is false and suspended is false```,"GCP Google Workspace User not enrolled with 2-step verification This policy identifies Google Workspace Users who do not have 2-Step Verification enabled. Enabling 2-Step Verification for Google Workspace users significantly enhances account security by adding an additional layer of authentication beyond just passwords. This reduces the risk of unauthorized access, protects sensitive data, and ensures compliance with security best practices. Implementing this measure strengthens overall organizational security and helps safeguard against potential cyber threats. It is recommended to enable 2-Step Verification for all users as it provides increased security for user account settings and resources. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Workspace users should be allowed to turn on 2-Step verification (2SV) before enabling 2SV. Follow the steps mentioned below to allow users to turn on 2SV.\n1. Sign in to Workspace Admin Console with an administrator account. \n2. Go to Menu then 'Security' > 'Authentication' > '2-step verification'.\n3. Check the 'Allow users to turn on 2-Step Verification' box.\n4. Select 'Enforcement' as per need.\n5. 
Click Save.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/a/answer/9176657\n\n\nTo enable 2-Step Verification for GCP Workspace User accounts, follow the steps below.\n1. Open your Google Account.\n2. In the navigation panel, select 'Security'.\n3. Under 'How you sign in to Google', select '2-Step Verification' > 'Get started'.\n4. Follow the on-screen steps.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/accounts/answer/185839." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-logging-sinks-list' AND json.rule = name contains ""pk""```","pk-gcp-empty This is applicable to gcp cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createidentityprovider and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deleteidentityprovider and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateidentityprovider) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for Identity Provider changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Identity Provider changes. Monitoring and alerting on changes to Identity Provider will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity Provider. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Identity Provider – Create, Identity Provider - Delete and Identity Provider – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." 
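For the GCP Google Workspace 2-Step Verification policy above, the following is a minimal detection sketch using the Admin SDK Directory API from Python. It assumes domain-wide delegated service-account credentials with the admin.directory.user.readonly scope; the key file path and impersonated admin address are hypothetical placeholders, and the filter simply mirrors the policy's conditions (non-admin, active, not archived, not enrolled in 2SV).

```python
# Sketch: list Workspace users missing 2-Step Verification (assumes delegated admin creds).
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.readonly"]

creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES          # hypothetical key file
).with_subject("admin@example.com")       # hypothetical super-admin to impersonate

directory = build("admin", "directory_v1", credentials=creds)

page_token = None
while True:
    resp = directory.users().list(
        customer="my_customer", maxResults=200, pageToken=page_token
    ).execute()
    for user in resp.get("users", []):
        # Same conditions as the policy: non-admin, active, not enrolled in 2SV.
        if (not user.get("isAdmin")
                and not user.get("isEnrolledIn2Sv")
                and not user.get("archived")
                and not user.get("suspended")):
            print("2SV not enrolled:", user["primaryEmail"])
    page_token = resp.get("nextPageToken")
    if not page_token:
        break
```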
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""errorCode="" or $.X.filterPattern contains ""errorCode ="") and ($.X.filterPattern does not contain ""errorCode!="" and $.X.filterPattern does not contain ""errorCode !="") and $.X.filterPattern contains ""UnauthorizedOperation"" and $.X.filterPattern contains ""AccessDenied"") and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for unauthorized API calls This policy identifies the AWS regions which do not have a log metric filter and alarm for unauthorized API calls. Monitoring unauthorized API calls will help reveal application errors and may reduce the time to detect malicious activity. It is recommended that a metric filter and alarm be established for unauthorized API calls. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.errorCode = ""*UnauthorizedOperation"") || ($.errorCode = ""AccessDenied*"") }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = status equals ACTIVE and containerDefinitions[?any(logConfiguration.logDriver does not exist)] exists```,"AWS ECS task definition logging configuration disabled This policy identifies AWS ECS task definitions that have logging configuration disabled. AWS ECS logging involves capturing and storing container logs for monitoring, troubleshooting, and analysis purposes within the Amazon ECS environment. Collecting data from task definitions gives visibility, which can aid in debugging processes and determining the source of issues. It is recommended to configure logging for an AWS ECS task definition. This is applicable to aws cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable log configuration for your Amazon ECS task definitions, follow these steps:\n\n1. Sign into the AWS console and navigate to the Amazon ECS console\n2. In the navigation pane, choose 'Task definitions'\n3. Choose the task definition to be updated\n4. Select 'Create new revision', and then click on 'Create new revision'.\n5. On the 'Create new task definition revision' page, select the container with logging configuration disabled\n6. Under the 'Logging' section, enable 'Use log collection'\n7. Select the log driver to be used under the dropdown\n8. At 'awslogs-group', specify the log group that the logdriver sends its log streams to\n9. Specify the remaining configuration as per the requirements\n10. Choose 'Update'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-athena-workgroup' AND json.rule = WorkGroup.State equal ignore case enabled and (WorkGroup.Configuration.ResultConfiguration.EncryptionConfiguration does not exist or (WorkGroup.Configuration.EngineVersion.EffectiveEngineVersion contains Athena and WorkGroup.Configuration.EnforceWorkGroupConfiguration is false))```,"AWS Athena Workgroup data encryption at rest not configured This policy identifies AWS Athena workgroups not configured with data encryption at rest. AWS Athena workgroup enables you to isolate queries for you or your group of users from other queries in the same account, to set the query results location and the encryption configuration. By default, Athena workgroup query run results are not encrypted at rest and client side settings can override the workgroup settings. Encrypting workgroups and preventing overrides from the client side helps in protecting the integrity and confidentiality of the data stored on Athena. It is recommended to set encryption at rest and enable 'override client-side settings' to mitigate the risk of unauthorized access and potential data breaches. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption at rest for the Athena workgroup, follow the below steps:\n\n1. Sign in to the AWS Management Console and open the Amazon Athena console.\n2. Under the navigation bar, click on Workgroups.\n3. Select the alerted workgroup. Click on 'Edit'.\n4. For Athena-based engines, under 'Query result configuration', enable 'Encrypt query results'.\n5. Select 'Encryption type' based on the requirements. Make sure to set 'Minimum encryption'.\n6. Under 'Settings', enable 'Override client-side settings'.\n7. For Apache Spark-based engines, under 'Calculation result settings', enable 'Encrypt query results'.\n8. Select 'Encryption type' based on the requirements.\n9. Click on 'Save changes'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kusto-clusters' AND json.rule = properties.state equal ignore case Running and properties.enableDiskEncryption is false```,"Azure Data Explorer cluster disk encryption is disabled This policy identifies Azure Data Explorer clusters in which disk encryption is disabled. Enabling encryption at rest on your cluster provides data protection for stored data. It is recommended to enable disk encryption on Data Explorer clusters. 
For more details: https://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-disk This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Disk encryption on existing Data Explorer cluster, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-disk." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ACTIVE and iamPolicy.bindings[?any(members[*] is member of (""allAuthenticatedUsers"",""allUsers""))] exists```","GCP Cloud Function is publicly accessible by allUsers or allAuthenticatedUsers This policy identifies GCP Cloud Functions that are publicly accessible by allUsers or allAuthenticatedUsers. This includes both Cloud Functions v1 and Cloud Functions v2. Granting permissions to 'allusers' or 'allAuthenticatedUsers' on any resource in GCP makes the resource public. Public access over cloud functions can lead to unauthorized invocations of the function or leakage of sensitive information such as the function's source code. Following the least privileged access policy, it is recommended to grant access restrictively and avoid granting permissions to allUsers or allAuthenticatedUsers unless absolutely needed. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to the GCP console\n2. Navigate to service 'Cloud Functions'\n4. Select the required cloud function\n5. Click on 'PERMISSIONS' button\n6. Filter for 'allUsers'\n7. Click on the 'Remove principal' button (bin icon)\n8. Select 'Remove allUsers from all roles on this resource. They may still have access via inherited roles.'\n9. Click 'Remove'\n10. Repeat steps 6-9 for 'allAuthenticatedUsers'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[?any(properties.phone is empty)] exists) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```,"Azure Microsoft Defender for Cloud security contact phone number is not set This policy identifies Subscriptions that are not set with security contact phone number for Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender). It is recommended to set security contact phone number to receive notifications when Microsoft Defender for Cloud detects compromised resources. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Use below Azure CLI example command to create new contact with phone number details for Azure Microsoft Defender for Cloud,\n\naz security contact create -n ""default1"" --email 'john@contoso.com' --phone '214275-4038' --alert-notifications 'on' --alerts-admins 'on'\n\nFor more information:\nhttps://docs.microsoft.com/en-us/cli/azure/security/contact?view=azure-cli-latest." 
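For the GCP Cloud Function public-access policy above, a rough detection sketch using the Cloud Functions v1 REST API (via the Google API Python client) is shown below. It assumes Application Default Credentials and a hypothetical project ID; Gen2 functions expose an equivalent v2 surface, which this simplified sketch does not cover.

```python
# Sketch: flag Cloud Functions whose IAM policy grants access to allUsers/allAuthenticatedUsers.
from googleapiclient.discovery import build

PROJECT = "my-project"  # hypothetical project ID
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

service = build("cloudfunctions", "v1")  # uses Application Default Credentials

parent = f"projects/{PROJECT}/locations/-"
resp = service.projects().locations().functions().list(parent=parent).execute()

for fn in resp.get("functions", []):
    policy = service.projects().locations().functions().getIamPolicy(
        resource=fn["name"]
    ).execute()
    for binding in policy.get("bindings", []):
        public = PUBLIC_MEMBERS.intersection(binding.get("members", []))
        if public:
            print(f"{fn['name']}: role {binding['role']} granted to {sorted(public)}")
```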
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'origins.items[*].s3OriginConfig exists and origins.items[*].s3OriginConfig.originAccessIdentity is empty and origins.items[*].originAccessControlId is empty'```,"AWS Cloudfront Distribution with S3 have Origin Access set to disabled This policy identifies the AWS CloudFront distributions which are utilizing S3 bucket and have Origin Access Disabled. The origin access identity feature should be enabled for all your AWS CloudFront CDN distributions in order to restrict any direct access to your objects through Amazon S3 URLs. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to CloudFront\n3. Choose the reported Distribution\n4. Click on Distribution Settings\n5. Click on 'Origins and Origin Groups\n6. Select the S3 bucket and click on Edit\n7. On the 'Restrict Bucket Access', Select Yes\n8. Click on 'Yes, Edit'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.changeroutetablecompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createroutetable and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deleteroutetable and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updateroutetable) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for route tables changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for route tables changes. Monitoring and alerting on changes to route tables will help in identifying changes to traffic flowing to or from Virtual Cloud Networks and Subnets. It is recommended that an Event Rule and Notification be configured to catch changes made to route tables. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting Route Table – Change Compartment, Route Table – Create, Route Table - Delete and Route Table – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." 
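For the AWS CloudFront origin-access policy above, a minimal boto3 sketch that flags distributions using an S3 origin without an Origin Access Identity (OAI) or Origin Access Control (OAC) could look like the following; it assumes default AWS credentials and only prints findings.

```python
# Sketch: list CloudFront distributions that use an S3 origin without OAI or OAC.
import boto3

cloudfront = boto3.client("cloudfront")

paginator = cloudfront.get_paginator("list_distributions")
for page in paginator.paginate():
    for dist in page.get("DistributionList", {}).get("Items", []):
        for origin in dist["Origins"]["Items"]:
            # S3OriginConfig is present only for S3 (non-website) origins.
            oai = origin.get("S3OriginConfig", {}).get("OriginAccessIdentity", "")
            oac = origin.get("OriginAccessControlId", "")
            if "S3OriginConfig" in origin and not oai and not oac:
                print(f"{dist['Id']}: origin {origin['Id']} has no OAI/OAC configured")
```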
"```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(20,20)""```","Alibaba Cloud Security group allow internet traffic to FTP-Data port (20) This policy identifies Security groups that allow inbound traffic on FTP-Data port (20) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 20, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." "```config from cloud.resource where api.name = 'ibm-key-protect-registration' as X; config from cloud.resource where api.name = 'ibm-object-storage-bucket' AND json.rule = not( locationConstraint contains ""ams03"" or locationConstraint contains ""mon01"" or locationConstraint contains ""tor01"" or locationConstraint contains ""sjc03"" or locationConstraint contains ""sjc04"" or locationConstraint contains ""sao01"" or locationConstraint contains ""mil01"" or locationConstraint contains ""sng01"" or locationConstraint contains ""che01"" ) as Y; filter 'not($.X.resourceCrn equals $.Y.crn)'; show Y;```","IBM Cloud Object Storage bucket is not encrypted with BYOK (bring your own key) This policy identifies IBM Cloud Storage buckets that are not encrypted with BYOK (Bring your own key). Bring your Own Key (BYOK) allows customers to ensure no one outside their organisation has access to the root key and with the support of BYOK, customers can manage the lifecycle of their customer root keys where they can create, rotate, delete those keys. As a security best practice, it is recommended to use BYOK encryption key management system, which provides a significant level of control on the keys when used for encryption. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: IBM Cloud object storage bucket can be encrypted with Bring your own key (BYOK) only at the time of creation. \n\nPlease create a bucket with bring your own key encryption along with other required configuration as required using the below URL:\nhttps://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-tutorial-kp-encrypt-bucket#kp-encrypt-bucket-create\n\nOnce the new bucket is created, Please transfer existing bucket objects to the new bucket with proper encryption configured using the below URL:\nhttps://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-region-copy\n\nTo delete the alerted bucket, please follow the below instructions:\n1. Log in to the IBM Cloud Console\n2. Click on the 'Navigation Menu' icon and navigate to 'Resource list'. 
Under the 'Storage' section, select the object storage instance in which the reported bucket resides.\n3. For the alerted bucket, select the 'Delete bucket' option from the kebab menu.\n4. In the 'Delete Bucket' dialog, select 'Delete bucket'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3-access-point' AND json.rule = networkOrigin equal ignore case internet and (publicAccessBlockConfiguration does not exist or (publicAccessBlockConfiguration.blockPublicAcls is false and publicAccessBlockConfiguration.ignorePublicAcls is false and publicAccessBlockConfiguration.blockPublicPolicy is false and publicAccessBlockConfiguration.restrictPublicBuckets is false))```,"AWS S3 access point Block public access setting disabled This policy identifies AWS S3 access points with the block public access setting disabled. AWS S3 Access Point simplifies managing data access by creating unique access control policies for specific applications or users within a S3 bucket. The Amazon S3 Block Public Access feature manages access at the account, bucket, and access point levels. Each level's settings can be configured independently but cannot override more restrictive settings at higher levels. Instead, access point settings complement those at the account and bucket levels. It is recommended to enable the Block public access setting on a S3 access point unless intended for public exposure. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Block public access setting can be enabled at creation time only:\n\n1. Sign in to the AWS Management Console and navigate to the Amazon S3 dashboard\n2. In the left navigation pane, choose 'Access Points'\n3. On the Access Points page, choose 'Create access point'\n4. In the Access point name field, enter the name of the access point\n5. Under 'Block Public Access settings for this Access Point', make sure to select 'Block all public access'\n6. Click on 'Create access point'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded and properties.disableKeyBasedMetadataWriteAccess is false```,"Azure Cosmos DB key based authentication is enabled This policy identifies Cosmos DBs that are enabled with key-based authentication. Disabling key-based metadata write access on Azure Cosmos DB prevents any changes to resources from a client connecting using the account keys. It is recommended to disable this feature for organizations who want higher degrees of control and governance for production environments. NOTE: Enabling this feature can have an impact on your application. Make sure that you understand the impact before enabling it. Refer for more details: https://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control#check-list-before-enabling This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL to disable key-based metadata write access on your Azure Cosmos DB:\nhttps://docs.microsoft.com/en-us/azure/cosmos-db/role-based-access-control#prevent-sdk-changes." 
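For the AWS S3 access point Block Public Access policy above, a small boto3 sketch against the S3 Control API is shown below. The account ID is a hypothetical placeholder and result pagination is omitted for brevity.

```python
# Sketch: flag internet-facing S3 access points whose Block Public Access settings are all off.
import boto3

ACCOUNT_ID = "123456789012"  # hypothetical account ID

s3control = boto3.client("s3control")

# Pagination via NextToken omitted for brevity.
resp = s3control.list_access_points(AccountId=ACCOUNT_ID)
for ap in resp.get("AccessPointList", []):
    if ap.get("NetworkOrigin") != "Internet":
        continue
    detail = s3control.get_access_point(AccountId=ACCOUNT_ID, Name=ap["Name"])
    cfg = detail.get("PublicAccessBlockConfiguration", {})
    if not any(cfg.get(k) for k in (
            "BlockPublicAcls", "IgnorePublicAcls",
            "BlockPublicPolicy", "RestrictPublicBuckets")):
        print(f"Access point {ap['Name']} has Block Public Access fully disabled")
```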
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-cloudtrail-describe-trails' as X; count(X) less than 1```,"AWS CloudTrail is not enabled on the account Checks to ensure that CloudTrail is enabled on the account. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. It is a compliance and security best practice to turn on CloudTrail to get a complete audit trail of activities across various services. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'CloudTrail' service.\n2. Follow the instructions below to enable CloudTrail on the account.\nhttp://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains SQLSERVER and state equals RUNNABLE and (settings.databaseFlags[*].name does not contain 3625 or settings.databaseFlags[?any(name contains 3625 and value contains off)] exists)""```","GCP SQL server instance database flag 3625 (trace flag) is not set to on This policy identifies GCP SQL server instance for which database flag 3625 (trace flag) is not set to on. Trace flag can help prevent the disclosure of sensitive information by masking the parameters of some error messages using '*', for users who are not members of the sysadmin fixed server role. It is recommended to set 3625 (trace flag) database flag for Cloud SQL SQL Server instance to on. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Flags and parameters', click on 'ADD FLAG' in 'New database flag' section, choose the flag '3625' from the drop-down menu and set the value as 'On'\nOR\nIf the flag has been set to other than on, Under 'Flags and parameters', choose the flag '3625' and set the value as 'On'\n6. Click on DONE\n7. Click on SAVE." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireLowercaseCharacters does not exist or requireLowercaseCharacters is false'```,"Alibaba Cloud RAM password policy does not have a lowercase character This policy identifies Alibaba Cloud accounts that do not have a lowercase character in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Lowercase Letters'\n6. Click on 'OK'\n7. Click on 'Close'." 
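For the AWS CloudTrail policy above (no trail configured on the account), a minimal boto3 check might look like this; it assumes default AWS credentials and also reports whether each discovered trail is currently logging.

```python
# Sketch: check whether the account has at least one CloudTrail trail.
import boto3

cloudtrail = boto3.client("cloudtrail")

trails = cloudtrail.describe_trails().get("trailList", [])
if not trails:
    print("No CloudTrail trail exists for this account/region")
else:
    for trail in trails:
        status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
        state = "logging" if status.get("IsLogging") else "NOT logging"
        print(f"{trail['Name']}: {state}")
```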
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and ($.X.filterPattern contains ""eventSource="" or $.X.filterPattern contains ""eventSource ="") and ($.X.filterPattern does not contain ""eventSource!="" and $.X.filterPattern does not contain ""eventSource !="") and $.X.filterPattern contains config.amazonaws.com and $.X.filterPattern contains StopConfigurationRecorder and $.X.filterPattern contains DeleteDeliveryChannel and $.X.filterPattern contains PutDeliveryChannel and $.X.filterPattern contains PutConfigurationRecorder) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for AWS Config configuration changes This policy identifies the AWS regions which do not have a log metric filter and alarm for AWS Config configuration changes. Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account. It is recommended that a metric filter and alarm be established for detecting changes to AWS Config's configurations. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." 
```config from cloud.resource where api.name = 'aws-apigateway-get-stages' AND json.rule = webAclArn is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = (webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.webAclArn'; show X;```,"AWS API Gateway Rest API attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS API Gateway REST APIs attached with a WAFv2 WebACL which are not configured with AWS Managed Rules (AMR) for the Log4j Vulnerability. As per the guidelines given by AWS, an API Gateway REST API attached with a WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information, please refer to the below URL: https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the API Gateway console\n3. Click on the reported API Gateway REST API\n4. In the Stages pane, choose the name of the stage\n5. In the Stage Editor pane, choose the Settings tab\n6. Note down the associated AWS WAF web ACL\n7. Go to the noted WAF web ACL in AWS WAF & Shield Service\n8. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n9. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n10. Click on 'Add rules'." "```config from cloud.resource where api.name = 'aws-dms-replication-task' AND json.rule = ReplicationTaskSettings.Logging.EnableLogging is false or ReplicationTaskSettings.Logging.LogComponents[?any( Id is member of (""TARGET_APPLY"",""TARGET_LOAD"") and Severity is not member of (""LOGGER_SEVERITY_DEFAULT"",""LOGGER_SEVERITY_DEBUG"",""LOGGER_SEVERITY_DETAILED_DEBUG"") )] exists```","AWS DMS replication task for the target database does not have logging set to the minimum severity level This policy identifies DMS replication tasks for which logging isn't enabled or the minimum severity level is less than LOGGER_SEVERITY_DEFAULT for TARGET_APPLY and TARGET_LOAD. Amazon DMS Logging is crucial in DMS replication for monitoring, troubleshooting, auditing, performance analysis, error detection, recovery, and historical reporting. TARGET_APPLY and TARGET_LOAD must be logged because they manage applying data and DDL changes, as well as loading data into the target database, which is crucial for maintaining data integrity during migration. The absence of logging for TARGET_APPLY and TARGET_LOAD components hampers monitoring, compliance, auditing, troubleshooting, and accountability efforts during migration. It's recommended to enable logging for AWS DMS replication tasks and set a minimal logging level of DEFAULT for TARGET_APPLY and TARGET_LOAD to ensure that informational messages, warnings, and error messages are written to the logs.
This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging for Target Apply and Target Load DMS replication tasks log component during migration:\n\n1. Log in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to 'Migration & Transfer' from the 'Services' dropdown and select 'Database Migration Service'\n4. In the navigation panel, under 'Migrate data', click on 'Database migration tasks'\n5. Select the reported replication task and choose 'Modify' from the 'Actions' dropdown on the right\n6. Under the 'Task settings' section, enable 'Turn on CloudWatch logs' under 'Task logs'\n7. Set the log component severity for both 'Target apply' and 'Target Load' components to 'Default' or greater according to your business requirements\n8. Click 'Save' to save the changes." "```config from cloud.resource where api.name = 'oci-networking-networkloadbalancer' and json.rule = lifecycleState equal ignore case ""ACTIVE"" as X; config from cloud.resource where api.name = 'oci-networking-subnet' and json.rule = lifecycleState equal ignore case ""AVAILABLE"" as Y; config from cloud.resource where api.name = 'oci-networking-security-list' AND json.rule = lifecycleState equal ignore case AVAILABLE as Z; filter 'not ($.X.listeners does not equal ""{}"" and ($.X.subnetId contains $.Y.id and $.Y.securityListIds contains $.Z.id and $.Z.ingressSecurityRules is not empty))'; show X;```","OCI Network Load Balancer not configured with inbound rules or listeners This policy identifies Network Load Balancers that are not configured with inbound rules or listeners. A Network Load Balancer's subnet security lists should include ingress rules, and the Network Load Balancer should have at least one listener to handle incoming traffic. Without these configurations, the Network Load Balancer cannot receive and route incoming traffic, rendering it ineffective. As best practice, it is recommended to configure Network Load Balancers with proper inbound rules and listeners. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Network Load Balancers with inbound rules and listeners, refer to the following documentation:\nhttps://docs.cloud.oracle.com/iaas/Content/Security/Reference/configuration_tasks.htm#lb-enable-traffic." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains SQLSERVER and settings.databaseFlags[?(@.name=='contained database authentication')].value equals on""```","GCP SQL Server instance database flag 'contained database authentication' is enabled This policy identifies SQL Server instance database flag 'contained database authentication' is enabled. Most of the threats associated with contained database are related to authentication process. So it is recommended to disable this flag. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. 
Click on SQL Server instance for which you want to disable the database flag from the list\n4. Click 'Edit'\n5. Go to 'Flags and Parameters' under 'Configuration options' section\n6. Search for the flag 'contained database authentication' and set the value 'off'\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-project-info' AND json.rule = 'commonInstanceMetadata.items[*].key does not contain enable-oslogin or (commonInstanceMetadata.items[?any(key contains enable-oslogin and (value contains false or value contains FALSE))] exists)'```,"GCP Projects have OS Login disabled This policy identifies GCP Projects which have OS Login disabled. Enabling OS Login ensures that SSH keys used to connect to instances are mapped with IAM users. Revoking access to IAM user will revoke all the SSH keys associated with that particular user. It facilitates centralized and automated SSH key pair management which is useful in handling cases like a response to compromised SSH key pairs. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Navigate to service 'Computer Engine' (Left Panel)\n3. For setting project-level OS login configuration, go to the 'Metadata' section under 'Settings'(from Left Panel)\n4. Click on the 'Edit' button\n5. If the metadata for 'enable-oslogin' is not set, click on '+Add item' and add metadata entry key as 'enable-oslogin' and the value as 'TRUE'/'true'\n6. Click on 'Save' to apply the changes\n7. You need to validate if any overriding instance-level metadata is set,\n8. Go to the tab 'VM instances', under section 'Virtual machines', \n9. For every instance, click on 'Edit'\n10. Under Custom metadata, remove any entry with key 'enable-oslogin' and the value 'FALSE'/'false'\n11. At the bottom of the 'VM instance details' page, click 'Save' to apply your changes to the instance.." ```config from cloud.resource where cloud.type = 'azure' and api.name= 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.allowSharedKeyAccess is true```,"Azure Storage account configured with Shared Key authorization This policy identifies Azure Storage accounts configured with Shared Key authorization. Azure Storage accounts authorized with Shared Key authorization via Shared Access Signature (SAS) tokens pose a security risk, as they allow sharing information with external unidentified identities. It is highly recommended to disable Shared Key authorization and Use Azure AD authorization as it provides superior security and ease of use over Shared Key. For more details: https://learn.microsoft.com/en-us/azure/storage/common/shared-key-authorization-prevent This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To prevent Shared Key authorization for an Azure Storage account, follow bellow URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/shared-key-authorization-prevent." 
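For the Azure Storage Shared Key authorization policy above, a hedged sketch using the Azure SDK for Python (azure-identity and azure-mgmt-storage) is shown below. The subscription ID, resource group, and account name are hypothetical placeholders, and the allow_shared_key_access property assumes a recent azure-mgmt-storage API version.

```python
# Sketch: disable Shared Key authorization on a storage account (assumes recent azure-mgmt-storage).
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical subscription
RESOURCE_GROUP = "my-rg"                                   # hypothetical resource group
ACCOUNT_NAME = "mystorageacct"                             # hypothetical account name

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Turn off Shared Key (SAS/account key) authorization for the account.
client.storage_accounts.update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    StorageAccountUpdateParameters(allow_shared_key_access=False),
)

account = client.storage_accounts.get_properties(RESOURCE_GROUP, ACCOUNT_NAME)
print("allow_shared_key_access:", account.allow_shared_key_access)
```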
"```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ( $.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="" ) and ( $.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="" ) and $.X.filter contains ""gce_route"" and ( $.X.filter contains ""protoPayload.methodName:"" or $.X.filter contains ""protoPayload.methodName :"" ) and ( $.X.filter does not contain ""protoPayload.methodName!:"" and $.X.filter does not contain ""protoPayload.methodName !:"" ) and $.X.filter contains ""compute.routes.delete"" and $.X.filter contains ""compute.routes.insert""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for VPC network route delete and insert This policy identifies GCP accounts which do not have a log metric filter and alert for VPC network route delete and insert events. Monitoring network routes deletion and insertion activities will help in identifying VPC traffic flows through an expected path. It is recommended to create a metric filter and alarm to detect activities related to the deletion and insertion of VPC network routes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""gce_route"" AND (protoPayload.methodName:""compute.routes.delete"" OR protoPayload.methodName:""compute.routes.insert"")\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-client-vpn-endpoint' AND json.rule = status.code equal ignore case available and connectionLogOptions.Enabled is false```,"AWS EC2 Client VPN endpoints client connection logging disabled This policy identifies AWS EC2 client VPN endpoints with client connection logging disabled. AWS Client VPN endpoints enable remote clients to securely connect to resources in the Virtual Private Cloud (VPC). Connection logs enable you to track user behaviour on the VPN endpoint and gain visibility. It is recommended to enable connection logging for AWS EC2 client VPN endpoints. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable connection logging for a new Client VPN endpoint, follow these steps:\n\n1. 
Sign into the AWS console and navigate to the Amazon VPC console\n2. In the navigation pane, choose 'Client VPN Endpoints'\n3. Select the 'Client VPN endpoint', choose 'Actions', and then choose 'Modify Client VPN endpoint'\n4. Under 'Connection logging', turn on 'Enable log details on client connections'\n5. For 'CloudWatch Logs log group name', choose the name of the CloudWatch Logs log group\n6. (Optional) For 'CloudWatch Logs log stream name', choose the name of the CloudWatch Logs log stream\n7. Choose 'Modify Client VPN endpoint'." "```config from cloud.resource where api.name = 'alibaba-cloud-ecs-disk' AND json.rule = category contains ""foo"" ```","bobby 3/28 This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals ""8.8.8.8/32"" and direction equals ""outbound"" and ( protocol equals ""all"" or ( protocol equals ""tcp"" and ( port_max greater than 53 and port_min less than 53 ) or ( port_max equals 53 and port_min equals 53 ))))] exists```","IBM Cloud Virtual Private Cloud (VPC) security group contains outbound rules that specify source IP 8.8.8.8/32 to DNS port This policy identifies IBM Virtual Private Cloud (VPC) security groups that contain outbound rules that specify a source IP 8.8.8.8/32 to DNS port. Doing so, may allow sensitive data from the protected resource being leaked to Google, which uses data for indexing and monetizing. As a best practice, restrict DNS port (53) solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Outbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Destination type' as 'Any' and 'Value' as 53 (or range containing 53)\n6. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'viewerCertificate.certificateSource contains cloudfront'```,"AWS CloudFront web distribution with default SSL certificate This policy identifies CloudFront web distributions which have a default SSL certificate to access CloudFront content. It is a best practice to use custom SSL Certificate to access CloudFront content. It gives you full control over the content data. custom SSL certificates also allow your users to access your content by using an alternate domain name. You can use a certificate stored in AWS Certificate Manager (ACM) or you can use a certificate stored in IAM. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. 
Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to CloudFront Distributions Dashboard\n4. Click on the reported distribution\n5. On the 'General' tab, Click on the 'Edit' button\n6. On 'Edit Distribution' page set 'SSL Certificate' to 'Custom SSL Certificate (example.com):', Select a certificate or type your certificate ARN in the field and other parameters as per your requirement.\n7. Click on 'Yes, Edit'." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(3306,3306) or destinationPortRanges[*] contains _Port.inRange(3306,3306) ))] exists```","Azure Network Security Group allows all traffic on MySQL (TCP Port 3306) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on MySQL (TCP Port 3306). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict MySQL solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = '(_DateTime.ageInDays($.notAfter) > -1) and status equals EXPIRED'```,"AWS Certificate Manager (ACM) has expired certificates This policy identifies expired certificates which are in AWS Certificate Manager. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM you can request a certificate or deploy an existing ACM or external certificate to AWS resources. This policy generates alerts if there are any expired ACM managed certificates. As a best practice, it is recommended to delete expired certificates. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Verify that the 'Status' column shows 'Expired' for the reported certificate\n6. Under 'Actions' drop-down click on 'Delete'." 
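For the AWS Certificate Manager expired-certificate policy above, a minimal boto3 sketch that lists expired certificates in the current region follows; the delete call is left commented out because certificates should only be removed after confirming they are no longer referenced anywhere.

```python
# Sketch: list expired ACM certificates in the current region (delete only after review).
import boto3

acm = boto3.client("acm")

paginator = acm.get_paginator("list_certificates")
for page in paginator.paginate(CertificateStatuses=["EXPIRED"]):
    for cert in page.get("CertificateSummaryList", []):
        print("Expired:", cert["CertificateArn"], cert.get("DomainName"))
        # Uncomment only after confirming the certificate is unused:
        # acm.delete_certificate(CertificateArn=cert["CertificateArn"])
```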
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'ipAllocationPolicy.useIpAliases does not exist or ipAllocationPolicy.useIpAliases equals false'```,"GCP Kubernetes Engine Clusters have Alias IP disabled This policy identifies Kubernetes Engine Clusters which have disabled Alias IP. Alias IP allows the networking layer to perform anti-spoofing checks to ensure that egress traffic is not sent with arbitrary source IPs. By enabling Alias IPs, Kubernetes Engine clusters can allocate IP addresses from a CIDR block known to Google Cloud Platform. This makes your cluster more scalable and allows your cluster to better interact with other GCP products and entities. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes Clusters Alias IP can be enabled only at the time of creation of clusters. So to fix this alert, create a new cluster with Alias IP enabled and then migrate all required cluster data or containers from the reported cluster to this new cluster.\nTo create the cluster with Alias IP enabled, perform following steps:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n5. Click on 'CREATE CLUSTER' button\n6. Configure your cluster and click on 'More'\n7. From the 'VPC-native (using alias IP)' drop-down menu, select 'Enabled'. New menu items appear\n8. From 'Automatically create secondary ranges' drop-down menu, select 'Enabled' \n9. Configure the 'Network', 'Node subnet', 'Node address range', 'Container address range', and 'Service address range' as needed\n10. Click on Create." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and (Action contains SNS:Subscribe or Action contains sns:Subscribe or Action contains SNS:Receive or Action contains sns:Receive) and Condition does not exist)] exists```,"AWS SNS topic policy overly permissive for subscription This policy identifies AWS SNS topics that have SNS policy overly permissive for the subscription. When you subscribe an endpoint to a topic, the endpoint begins to receive messages published to the associated topic. To protect these messages from attackers and unauthorized accesses, permissions should be given to only authorized users. For more details: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#implement-least-privilege-access This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. Add the restrictive 'Condition' statement to the JSON editor to specify who can subscribe to this topic.\n9. Click on 'Save changes'." 
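For the AWS SNS subscription-policy entry above, a boto3 sketch that inspects each topic's access policy for a wildcard principal allowed to Subscribe/Receive without any Condition could look like the following; it assumes default AWS credentials and only reports findings.

```python
# Sketch: flag SNS topics whose policy allows everyone to Subscribe/Receive without a Condition.
import json
import boto3

sns = boto3.client("sns")

paginator = sns.get_paginator("list_topics")
for page in paginator.paginate():
    for topic in page["Topics"]:
        attrs = sns.get_topic_attributes(TopicArn=topic["TopicArn"])["Attributes"]
        policy = json.loads(attrs.get("Policy", "{}"))
        for stmt in policy.get("Statement", []):
            principal = stmt.get("Principal")
            is_public = principal == "*" or (
                isinstance(principal, dict) and principal.get("AWS") == "*")
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            subscribes = any(a.lower() in ("sns:subscribe", "sns:receive") for a in actions)
            if stmt.get("Effect") == "Allow" and is_public and subscribes and "Condition" not in stmt:
                print("Overly permissive subscription policy:", topic["TopicArn"])
```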
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-describe-vpc-endpoints' AND json.rule = vpcEndpointType equals Gateway and policyDocument.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and Action contains * and Condition does not exist)] exists```,"AWS VPC gateway endpoint policy is overly permissive This policy identifies AWS VPC gateway endpoints that have a VPC endpoint (VPCE) policy that is overly permissive. When the Principal element value is set to '*' within the access policy, the VPC gateway endpoint allows full access to any IAM user or service within the VPC using credentials from any AWS accounts. It is highly recommended to have the least privileged VPCE policy to protect the data leakage and unauthorized access. For more details: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the VPC dashboard\n4. Go to 'Endpoints', from the left panel VIRTUAL PRIVATE CLOUD section\n5. Select the reported VPC endpoint\n6. On the 'Actions' drop-down button, click on the 'Manage policy'\n8. On the 'Edit Policy' page, Choose 'Custom' policy\na. Then add policy, without the 'Everyone' grantee (i.e. '*' or 'AWS': '*') from the Principal element value with an AWS account ID (e.g. '123456789'), an AWS account ARN (e.g. 'arn:aws:iam::123456789:root') or an IAM user ARN (e.g. 'arn:aws:iam::123456789:user/vpce-admin').\nb. Add a Condition clause to the policy statement to filter the endpoint access to specific entities.\n9. Click on 'Save'." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals arbitrary and state_description equal ignore case active and (_DateTime.ageInDays(last_update_date) > 90)'```,"IBM Cloud Secrets Manager arbitrary secrets have aged more than 90 days without being rotated This policy identifies IBM Cloud Secrets Manager arbitrary secrets that have aged more than 90 days without being rotated. Arbitrary secrets should be rotated to ensure that data cannot be accessed with an old secret which might have been lost, cracked, or stolen. It is recommended that all arbitrary secrets are regularly rotated. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list'. From the list of resources, select the secret manager instance in which the reported secret resides, under security section.\n3. Select the secret and click on 'Actions' dropdown.\n4. Select 'Rotate' from the dropdown.\n5. In the 'Rotate secret' screen, provide data as required.\n6. Click on 'Rotate'.." 
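For the AWS VPC gateway endpoint policy entry above, a similar boto3 sketch that flags Gateway endpoints whose policy allows a wildcard principal and wildcard action with no Condition is shown below; it only reports findings and makes no changes.

```python
# Sketch: flag Gateway VPC endpoints whose policy allows '*' principal and '*' action, no Condition.
import json
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_vpc_endpoints")
for page in paginator.paginate():
    for ep in page["VpcEndpoints"]:
        if ep.get("VpcEndpointType") != "Gateway":
            continue
        policy = json.loads(ep.get("PolicyDocument", "{}"))
        for stmt in policy.get("Statement", []):
            principal = stmt.get("Principal")
            is_public = principal == "*" or (
                isinstance(principal, dict) and principal.get("AWS") == "*")
            action = stmt.get("Action")
            actions = [action] if isinstance(action, str) else (action or [])
            if (stmt.get("Effect") == "Allow" and is_public
                    and any("*" in a for a in actions)
                    and "Condition" not in stmt):
                print("Overly permissive endpoint policy:", ep["VpcEndpointId"])
```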
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded and properties.ipRangeFilter is not empty and properties.ipRangeFilter startsWith 0.0.0.0 or properties.ipRangeFilter endsWith 0.0.0.0```,"Azure Cosmos DB allows traffic from public Azure datacenters This policy identifies Cosmos DBs that allow traffic from public Azure datacenters. If you enable this option, the IP address 0.0.0.0 is added to the list of allowed IP addresses. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. So it is recommended not to select the ‘Accept connections from within public Azure datacenters’ option for your Cosmos DB. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to Azure Cosmos DB service\n3. Select the reported Azure Cosmos DB account\n4. Click on 'Firewall and virtual networks' under 'Settings'\n5. Unselect 'Accept connections from within public Azure datacenters' option under 'Exceptions'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.keyPolicy.keyExpirationPeriodInDays does not exist```,"Azure Storage account key expiration policy is not configured This policy identifies Azure Storage accounts for which key expiration policy is not configured. A key expiration policy enables you to set a reminder for the rotation of the account access keys, so that you can monitor your storage accounts for compliance to ensure that the account access keys are rotated regularly. As a best practice, it is recommended to set key expiration policy for Azure Storage account keys. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Security + networking', select 'Access keys'\n5. Select the 'Set rotation reminder' button. If the Set rotation reminder button is grayed out, you will need to rotate each of your keys manually.\n6. In Set a reminder to rotate access keys, select the 'Enable key rotation reminders' checkbox and set a frequency for the reminder.\n7. Click on 'Save'\n\nNOTE: Before you can create a key expiration policy, you may need to rotate each of your account access keys at least once.." ```config from cloud.resource where api.name = 'gcloud-compute-backend-bucket' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' not (Y.name intersects X.bucketName) '; show X;```,"GCP backend bucket having dangling GCP Storage bucket This policy identifies the GCP backend buckets having dangling GCP Storage bucket. A GCP backend bucket is usually used by GCP Load Balancers for serving static content. Such setups can also have DNS pointing to the load balancer's IP for easy human access. A GCP backend bucket pointing to a GCP storage bucket that doesn't exist in the same project is a potential risk of bucket takeover as well as at risk of subdomain takeover. 
An attacker can exploit such a setup by creating a GCP Storage bucket with the same name in their own GCP project, thus receiving all requests redirected to this backend bucket from the load balancer to an attacker-controlled GCP Storage bucket. This attacker-controlled bucket can be used to serve malicious content to perform phishing attacks, spread malware, or engage in other illegal activities. As a best practice, it is recommended to review and protect GCP storage buckets bound to a GCP backend bucket from accidental deletion. Delete the GCP backend bucket if it points to a non-existent GCP storage bucket. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To mitigate the risk, either delete the GCP backend bucket or create a GCP Storage bucket in your account with the name to which the GCP backend bucket points.\n\n\n# Delete GCP backend bucket\nTo delete a GCP backend bucket, it should be disassociated from all GCP Load Balancers first. The following steps might be followed:\n\n1. Identify the backend bucket pointing to a non-existing GCP Storage bucket.\n2. Login to GCP Portal\n3. Go to Network services -> Load Balancing\n4. Click on ""Backends""\n5. Note the names of load balancers that are using the GCP backend bucket. Names are shown under the ""Load balancer"" column\n6. Click on the ""LOAD BALANCERS"" tab\n7. Click on the load balancer name for each load balancer identified in step 5 and repeat the following steps:\n i. After opening the load balancer page, click on ""EDIT""\n ii. Go to Backend configuration\n iii. Under the ""Backend buckets"" section, remove the GCP backend bucket by clicking ""cross"" icon in front of it\n iv. Go to Routing rules. Edit the rules as desired. Remove any rules pointing to the reported backend bucket.\n v. Click Update\n8. Click and switch back to the ""Backends"" tab\n9. Select the GCP backend bucket, the option to delete now should be available.\n10. Click ""Delete"" -> ""DELETE""\n\n\n# Create a new GCP Storage bucket\nRefer to the following link on how to create a new GCP Storage bucket and create a new bucket with the same name as the one the GCP backend bucket is pointing to:\n\nhttps://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-snapshot' AND json.rule = encryption equal ignore case provider_managed```,"IBM Cloud Block Storage Snapshot for VPC is not encrypted with customer managed keys This policy identifies IBM Cloud Block Storage Snapshots for VPC, which are not encrypted with customer managed keys. Using customer managed keys significantly increases control, as the keys are managed by the customer. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A Block storage snapshot can be encrypted with customer managed keys only at the time of creation of a virtual server instance. \nPlease create a virtual server instance with a boot/data disk from the reported snapshot using the below URL. 
Please make sure to select customer managed encryption for the data/boot storage volume:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-restore&interface=ui#snapshots-vpc-restore-vol-ui\n\nCreate a snapshot of the above-created storage disk volume following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nOnce the new snapshot is created, delete the virtual server instance to which the created storage volume/snapshot is attached:\nhttps://cloud.ibm.com/docs/hp-virtual-servers?topic=hp-virtual-servers-remove_vs#delete_vs." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = osType does not exist and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of (""EncryptionAtRestWithCustomerKey"", ""EncryptionAtRestWithPlatformAndCustomerKeys"",""EncryptionAtRestWithPlatformKey"")```","Azure VM data disk is not configured with any encryption This policy identifies VM data disks that are not configured with any encryption. Azure offers Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK] by default for managed disks. It is recommended to enable default encryption or you may optionally choose to use a customer-managed key to protect from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'Disks'\n3. Select the reported data disk you want to modify\n4. Select 'Encryption' under 'Settings'\n5. Select 'Encryption Type' according to your encryption requirement.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and (identity.type does not exist or identity.principalId is empty)```,"Azure Logic app is not configured with managed identity This policy identifies Azure Logic apps that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Including credentials in code heightens the risk in the event of a security breach and increases the threat surface in case of exploitation; managed identities also eliminate the need for developers to manage credentials. As a security best practice, it is recommended to set up managed identity rather than embedding credentials within the code. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under the 'Settings' section, click on 'Identity'\n5. Configure either 'System assigned' or 'User assigned' managed identity based on your requirement.\n6. Click on 'Save'." 
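The Logic app rule above only inspects three fields of the app-service resource. A minimal Python sketch of that evaluation, assuming `apps` is a list of resources shaped like the `azure-app-service` API response (the sample record is hypothetical):

```
# Minimal sketch of the rule above: a Running Logic app (kind contains
# "workflowapp") with no identity type or an empty principalId is flagged.
def lacks_managed_identity(app: dict) -> bool:
    running = app.get("properties", {}).get("state", "").lower() == "running"
    is_logic_app = "workflowapp" in app.get("kind", "").lower()
    identity = app.get("identity") or {}
    return running and is_logic_app and (
        "type" not in identity or not identity.get("principalId")
    )

apps = [{"kind": "functionapp,workflowapp", "properties": {"state": "Running"}, "identity": {}}]
print([a for a in apps if lacks_managed_identity(a)])
```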
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = $.databaseEncryption.state equals DECRYPTED```,"GCP Kubernetes cluster Application-layer Secrets not encrypted Application-layer Secrets Encryption provides an additional layer of security for sensitive data, such as Secrets, stored in etcd. Using this functionality, you can use a key, that you manage in Cloud KMS, to encrypt data at the application layer. This protects against attackers who gain access to an offline copy of etcd. This policy checks your cluster for the Application-layer Secrets Encryption security feature and alerts if it is not enabled. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: At this time, you cannot enable Application-layer Secrets Encryption for an existing cluster.\n\nCreating a new cluster with Application-layer Secrets Encryption.\n\n1. Go to the Kubernetes clusters page in the GCP Console and select CREATE CLUSTER.\n2. Click Advanced options.\n3. Check Enable Application-layer Secrets Encryption.\n4. Select a customer-managed key from the drop down menu, or create a new KMS key.\n5. When finished configuring options for the cluster, click Create.." ```config from cloud.resource where cloud.type = 'AWS' and api.name = 'aws-ec2-describe-subnets' AND json.rule = 'mapPublicIpOnLaunch is true'```,"Copy of AWS VPC subnets should not allow automatic public IP assignment This policy identifies VPC subnets which allow automatic public IP assignment. VPC subnet is a part of the VPC having its own rules for traffic. Assigning the Public IP to the subnet automatically (on launch) can accidentally expose the instances within this subnet to internet and should be edited to 'No' post creation of the Subnet. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to the 'VPC' service.\n4. In the navigation pane, click on 'Subnets'.\n5. Select the identified Subnet and choose the option 'Modify auto-assign IP settings' under the Subnet Actions.\n6. Disable the 'Auto-Assign IP' option and save it.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-registry' AND json.rule = ((properties.publicNetworkAccess equals Enabled and properties.networkRuleSet does not exist) or (properties.publicNetworkAccess equals Enabled and properties.networkRuleSet exists and properties.networkRuleSet.defaultAction equals Allow))```,"Azure Container registries Public access to All networks is enabled This policy identifies Azure Container registries which has Public access to All networks enabled. Azure ACR is used to store Docker container images which might contain sensitive information. It is highly recommended to restrict public access from allow access from Selected networks or make it Private by disabling the Public access. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Container Registries'\n3. 
Select the container registry you need to modify\n4. Select 'Networking' under 'Settings'\n5. Click on the 'Public access' tab, select 'Selected networks' and provide the IPv4 addresses that should have access to the ACR, or select 'Disabled' to disable Public access\n6. Click on 'Save'\n\nNote: 'Public access' setting can be toggled to 'Selected networks' or 'Disabled' state only with Premium SKU. For Standard and Basic SKUs, the Public access setting cannot be updated and these resources will remain accessible to the public.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = firewallRules.value[*].properties.startIpAddress equals ""0.0.0.0"" or firewallRules.value[*].properties.endIpAddress equals ""0.0.0.0""```","EIP-CSE-IACOHP-AzurePostgreSQL-NetworkAccessibility-eca1500-5 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-disk-list' AND json.rule = dataAccessAuthMode does not equal ignore case AzureActiveDirectory and managedBy contains virtualMachines and provisioningState equal ignore case Succeeded```,"Azure disk data access authentication mode not enabled This policy identifies if the Data Access Authentication Mode for Azure disks is disabled. This mode is crucial for controlling how users upload or export Virtual Machine Disks by requiring an Azure Entra ID role to authorize such operations. Without enabling this mode, users can create SAS tokens to export disks without stringent identity-based restrictions. This increases the risk of unauthorized disk access or data exposure, especially in environments handling sensitive data. Enabling the Data Access Authentication Mode ensures that only users with the appropriate Data Operator for Managed Disk role in Azure Entra ID can export or manage disks. This enhances data security by preventing unauthorized disk exports and restricting access to secure download URLs. As a security best practice, it is recommended to enable data access authentication mode for Azure disks. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: To enable data access authentication mode on disks attached to a VM, you must first stop the VM and detach the disk.\n\n1. Log in to Azure Portal and search for 'Disks'\n2. Select 'Disks'\n3. Select the reported disk\n4. Under 'Settings' select 'Disk Export'\n5. Check the 'Enable data access authentication mode' under 'Data access authentication mode'\n6. Click on 'Save'\n7. Re-attach the disk to the virtual machine, and restart it." "```config from cloud.resource where api.name = 'aws-lambda-list-functions' as X; config from cloud.resource where api.name = 'aws-iam-list-roles' AND json.rule = inlinePolicies[*].policyDocument.Statement[?any(Effect equals Allow and (Action equals ""*"" or Action contains :* or Action[*] contains :*) and (Resource equals ""*"" or Resource[*] anyStartWith ""*""))] exists as Y; filter '$.X.role equals $.Y.role.arn'; show Y;```","AWS Lambda execution role having overly permissive inline policy This policy identifies AWS Lambda Function execution roles that have an overly permissive inline policy embedded. 
Lambda functions having overly permissive policy could lead to lateral movement in account or privilege being escalated when compromised. It is highly recommended to have the least privileged access policy to protect the Lambda Functions from unauthorized access. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Refer to the following URL to give fine-grained and restrictive permissions to IAM Role Inline Policy:\nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html#edit-inline-policy-console." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' as X; config from cloud.resource where api.name = 'azure-active-directory-user' as Y; filter '((_DateTime.daysBetween($.X.properties.updatedOn,today()) != 8) and ($.X.properties.principalId contains $.Y.id))'; show X;```","llatorre - RoleAssignment v3 This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = '((description.listenerDescriptions[*].listener.protocol equals HTTPS or description.listenerDescriptions[*].listener.protocol equals SSL) and (description.listenerDescriptions[*].listener.sslcertificateId is empty or description.listenerDescriptions[*].listener.sslcertificateId does not exist)) or description.listenerDescriptions[*].listener.protocol equals HTTP or description.listenerDescriptions[*].listener.protocol equals TCP'```,"AWS Elastic Load Balancer with listener TLS/SSL is not configured This policy identifies AWS Elastic Load Balancers which have non-secure listeners. As Load Balancers will be handling all incoming requests and routing the traffic accordingly. The listeners on the load balancers should always receive traffic over secure channel with a valid SSL certificate configured. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. On the Listeners tab, Click the 'Edit' button under the available listeners\n7. In the Load Balancer Protocol, Select 'HTTPS (Secure HTTP)' or 'SSL (Secure TCP)'\n8. In the SSL Certificate column, click 'Change'\n9. On 'Select Certificate' popup dialog, Choose a certificate from ACM or IAM or upload a new certificate based on requirement and Click on 'Save'\n10. Back to the 'Edit listeners' dialog box, review the secure listeners configuration, then click on 'Save'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'addonsConfig.httpLoadBalancing.disabled equals true'```,"GCP Kubernetes Engine Clusters have HTTP load balancing disabled This policy identifies GCP Kubernetes Engine Clusters which have disabled HTTP load balancing. HTTP/HTTPS load balancing provides global load balancing for HTTP/HTTPS requests destined for your instances. 
Enabling HTTP/HTTPS load balancers will let Kubernetes Engine terminate unauthorized HTTP/HTTPS requests and make better context-aware load balancing decisions. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Click on EDIT button\n6. Set 'HTTP load balancing' to Enabled\n7. Click on Save." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = (properties.roleDefinition.properties.type equals CustomRole and (properties.roleDefinition.properties.permissions[?any((actions[*] equals Microsoft.Authorization/locks/delete and actions[*] equals Microsoft.Authorization/locks/read and actions[*] equals Microsoft.Authorization/locks/write) or actions[*] equals Microsoft.Authorization/locks/*)] exists) and (properties.roleDefinition.properties.permissions[?any(notActions[*] equals Microsoft.Authorization/locks/delete or notActions[*] equals Microsoft.Authorization/locks/read or notActions[*] equals Microsoft.Authorization/locks/write or notActions[*] equals Microsoft.Authorization/locks/*)] does not exist)) as X; count(X) less than 1```,"Azure Custom Role Administering Resource Locks not assigned This policy identifies Azure Custom Role Administering Resource Locks which are not assigned to any user. The resource locking feature helps prevent resources from being modified or deleted unintentionally by any user and prevents the damage that such changes can cause. It is recommended to create a custom role for Resource Locks and assign it to an appropriate user. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Subscriptions', and select the subscription from the list where you want the custom role\n3. Select 'Access control (IAM)'\n\nIf a custom role has already been created for resource locks, go to step 16\n\n4. Click on 'Add' from top tab and select 'Add custom role'\n5. Enter 'Resource Lock Administrator' in the 'Custom role name' field\n6. Enter 'Can Administer Resource Locks' in the 'Description' field\n7. Select 'Start from scratch' for 'Baseline permissions'\n8. Click 'Next'\n9. Select 'Add permissions' from top 'Permissions' tab\n10. Search for 'Microsoft.Authorization/locks' in the 'Search for a permission' box\n11. Select 'Microsoft.Authorization'\n12. Click on 'Permission' checkbox to select all permissions\n13. Click on 'Add'\n14. Click 'Review+create'\n15. Click 'Create' to create custom role for resource locks\n16. In 'Access control (IAM)' select 'Add role assignment'\n17. Select the custom role created above from 'Role' drop down\n18. Select 'User, group, or service principal' from 'Assign access to' drop down\n19. Search for user to assign the custom role in 'Select' field\n20. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-list-attached-user-policies' AND json.rule='attachedPolicies isType Array and not attachedPolicies size == 0'```,"AWS IAM policy attached to users This policy identifies IAM policies attached to users. By default, IAM users, groups, and roles have no access to AWS resources. 
IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that IAM policies be applied directly to groups but not users. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Navigate to the 'IAM' service.\n3. Identify the users that were specifically assigned to the reported IAM policy.\n4. If a group with a similar policy already exists, put the user into that group. If such a group does not exist, create a new group with relevant policy and assign the user to the group.." "```config from cloud.resource where api.name = 'aws-emr-studio' AND json.rule = DefaultS3Location exists and DefaultS3Location contains ""aws-emr-studio-"" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter 'not ($.X.BucketName equals $.Y.bucketName)' ; show X;```","aws emr shadow This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cache-redis' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.enableNonSslPort is true```,"Azure Cache for Redis not configured with data in transit encryption This policy identifies Azure Cache for Redis instances that are not configured with data encryption in transit. Enforcing an SSL connection helps prevent unauthorized users from reading sensitive data that is intercepted as it travels through the network, between clients/applications and cache servers, known as data in transit. It is recommended to configure in-transit encryption for Azure Cache for Redis. Refer to the below link for more details: https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure#access-ports This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure data in-transit encryption for your existing Azure Cache for Redis, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-configure#access-ports\n." "```config from cloud.resource where api.name = 'aws-bedrock-custom-model' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equal ignore case CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals ""null"") as Y; filter '$.X.modelKmsKeyArn equals $.Y.key.keyArn'; show X;```","AWS Bedrock Custom model encrypted with Customer Managed Key (CMK) is not enabled for regular rotation This policy identifies AWS Bedrock Custom models encrypted with a Customer Managed Key (CMK) that is not enabled for regular rotation. AWS KMS (Key Management Service) allows customers to create master keys to encrypt the Custom model. Not enabling regular rotation for the AWS Bedrock custom model key can result in potential compliance violations. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys. 
This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to enable the automatic rotation of the KMS key used by the AWS Bedrock Custom model\n\n1. Sign in to the AWS Management Console and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.\n2. From the left navigation pane, choose 'Custom models' under 'Foundation models'.\n3. In the 'Models' tab, select the model that is reported.\n4. Under the 'Custom model encryption KMS key' section, click on the KMS key id link.\n 5. Under the 'Key rotation' tab on the navigated KMS key window, click on Edit and enable the Key rotation option under the 'Automatic key rotation' section.\n6. Provide the rotation period as per your business and compliance requirements in the 'Rotation period (in days)' section.\n7. Click on Save.." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-docdb-db-cluster' AND json.rule = Status contains available and DeletionProtection is false```,"AWS DocumentDB cluster deletion protection is disabled This policy identifies AWS DocumentDB clusters for which deletion protection is disabled. Enabling deletion protection for DocumentDB clusters prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Amazon DocumentDB Dashboard\n4. Select the reported DocumentDB cluster\n5. From top right 'Actions' drop down list select 'Enable deletion protection'\n6. Schedule the modifications and click on 'Modify cluster'\n ." ```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-sagemaker-training-job' as Y; filter '$.Y.InputDataConfig[*].DataSource.S3DataSource.bucketName intersects $.X.bucketName'; show X;```,"AWS S3 bucket is utilized for AWS Sagemaker training job data This policy identifies the AWS S3 bucket utilized for AWS Sagemaker training job data. S3 buckets store the datasets required for training machine learning models in Sagemaker. Proper configuration and access control are essential to ensure the security and integrity of the training data. Improperly configured S3 buckets used for AWS Sagemaker training data can lead to unauthorized access, data breaches, and potential loss of sensitive information. It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Sagemaker training data and ensure compliance. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the Sagemaker training job, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html." 
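For the SageMaker training-data buckets surfaced by the rule above, the baseline hardening steps from the linked best-practices guide can also be applied programmatically. A minimal boto3 sketch (the bucket name is hypothetical; review the settings against your own requirements before applying them):

```
import boto3

# Minimal sketch: block public access and enable default SSE-KMS encryption
# on a bucket that feeds SageMaker training jobs. Bucket name is hypothetical.
s3 = boto3.client("s3")
bucket = "example-sagemaker-training-data"

s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)
```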
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equals Succeeded and networkRuleSet.defaultAction equal ignore case Allow and properties.privateEndpointConnections[*] is empty```,"Azure Storage account is not configured with private endpoint connection This policy identifies Storage accounts that are not configured with a private endpoint connection. Azure Storage account private endpoints can be configured using Azure Private Link. Private Link allows users to access an Azure Storage account from within the virtual network or from any peered virtual network. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. It is recommended to configure Private Endpoint Connection to Storage account. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL for configuring Private endpoints on your Storage account:\nhttps://learn.microsoft.com/en-us/azure/private-link/create-private-endpoint-portal?#create-a-private-endpoint." ```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = ipPermissions[*] is empty or ipPermissionsEgress[*] is empty as Y; filter '$.X.securityGroups[*] contains $.Y.groupId'; show X;```,"cloned copy - RLP-93423 - 2 This policy identifies Elastic Load Balancer v2 (ELBv2) load balancers that do not have security groups with a valid inbound or outbound rule. A security group with no inbound/outbound rule will deny all incoming/outgoing requests. ELBv2 security groups should have at least one inbound and outbound rule; an ELBv2 with no inbound/outbound permissions will deny all traffic incoming/outgoing to/from any resources configured behind that ELBv2; in other words, the ELBv2 is useless without inbound and outbound permissions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on each security group, it will open Security Group properties in a new tab in your browser.\n6. To check the Inbound rules, click on 'Inbound Rules'\n7. If there are no rules, click on 'Edit rules', add an inbound rule according to your ELBv2 functional requirement.\n8. To check the Outbound rules, click on 'Outbound Rules'\n9. If there are no rules, click on 'Edit rules', add an outbound rule according to your ELBv2 functional requirement.\n10. Click on 'Save'." 
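The ELBv2 rule above joins two APIs. A minimal boto3 sketch of the same join (single region, pagination and credentials are assumptions) that lists load balancers attached to a security group with no inbound or no outbound rules:

```
import boto3

# Minimal sketch of the join in the rule above: find load balancers whose
# attached security groups have no inbound or no outbound rules.
ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

groups = {g["GroupId"]: g for g in ec2.describe_security_groups()["SecurityGroups"]}
for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
    for sg_id in lb.get("SecurityGroups", []):
        sg = groups.get(sg_id)
        if sg and (not sg["IpPermissions"] or not sg["IpPermissionsEgress"]):
            print(f"{lb['LoadBalancerName']} uses rule-less security group {sg_id}")
```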
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule =status equals ""RUNNING"" and resourceLabels.goog-composer-version does not start with ""composer-1"" and ((workloadIdentityConfig[*] does not exist) or (workloadIdentityConfig[*] exists and (nodePools[?any(config.workloadMetadataConfig does not contain GKE_METADATA)] exists)))```","GCP Kubernetes Engine cluster workload identity is disabled This policy identifies GCP Kubernetes Engine clusters for which workload identity is disabled. Manual approaches for authenticating Kubernetes workloads violates the principle of least privilege on a multi-tenanted node when one pod needs to have access to a service, but every other pod on the node that uses the service account does not. Enabling Workload Identity manages the distribution and rotation of Service account keys for the workloads to use. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to service 'Kubernetes Engine'(Left Panel)\n3. Select the reported cluster from the available list\n4. Under section 'Security', click on edit icon for 'Workload Identity'\n5. Click on the checkbox 'Enable Workload Identity'\n6. Ensure that the Workload Identity Namespace is set to the namespace of the GCP\nproject containing the cluster, e.g: $PROJECT_ID.svc.id.goog\n7. Click on 'SAVE CHANGES'\n8. After enabling, go to tab 'NODES'\n9. To investigate each node pool, Click on 'Edit', In section 'Security', select the 'Enable GKE Metadata Server' checkbox\n10. Click on 'SAVE'." ```config from cloud.resource where api.name = 'azure-machine-learning-datastores' AND json.rule = properties.datastoreType equal ignore case AzureBlob as X; config from cloud.resource where api.name = 'azure-storage-account-list' as Y; filter ' $.X.properties.accountName equal ignore case $.Y.name ' ; show Y;```,"Azure Blob Storage utilized for Azure Machine Learning training job data This policy identifies Azure Blob Storage accounts used for storing data utilized in Azure Machine Learning training jobs. This policy provides visibility into storage utilization for Machine Learning workloads but does not indicate a security or compliance risk. Azure Blob Storage serves as a robust storage solution for large-scale Machine Learning training data. This policy emphasizes the importance of securing stored data by employing encryption and additional security parameters like firewalls, private endpoints, and access policies to safeguard sensitive information. As a security best practice, it is recommended to properly configure Azure Blob Storage utilized in Azure Machine Learning training jobs. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: For configuring Azure Blob Storage used in Azure Machine Learning training jobs, refer to the following link for Blob storage security recommendations:\nhttps://learn.microsoft.com/en-us/azure/storage/blobs/security-recommendations." 
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule = ""clusterGroupsDetails[*].parameters[?(@.parameterName=='require_ssl')].parameterValue is false""```","AWS Redshift does not have require_ssl configured This policy identifies Redshift databases in which data connection to and from is occurring on an insecure channel. SSL connections ensures the security of the data in transit. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS and navigate to the 'Amazon Redshift' service.\n2. Expand the identified 'Redshift' cluster and make a note of the 'Cluster Parameter Group'\n3. In the navigation panel, click on the 'Parameter group'.\n4. Select the identified 'Parameter Group' and click on 'Edit Parameters'.\n5. Review the require_ssl flag. Update the parameter 'require_ssl' to true and save it.\nNote: If the current parameter group is a Default parameter group, it cannot be edited. You will need to create a new parameter group and point it to an affected cluster.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' and api.name = 'alibaba-cloud-vpc' AND json.rule = vpcFlowLogs[*].flowLogId does not exist and status equal ignore case Available```,"Alibaba Cloud VPC flow log not enabled This policy identifies Virtual Private Clouds (VPCs) where flow logs are not enabled. VPC flow logs capture information about the traffic entering and exiting network interfaces in the VPC. Without VPC flow logs, there’s limited visibility into network traffic, making it challenging to detect and investigate suspicious activities, potential data breaches, or security policy violations. Enabling VPC flow logs enhances network monitoring, improves threat detection, and supports compliance requirements. As a security best practice, it is recommended to enable VPC flow logs. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Navigate to VPC console\n3. Under 'O&M and Monitoring', click on 'Flow Log'\n4. Create and configure a new flow log for the reported VPC, specifying the required traffic filters and log storage destination\n5. Enable and save the configuration." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = gceSetup.metadata.notebook-upgrade-schedule does not exist```,"GCP Vertex AI Workbench Instance auto-upgrade is disabled This policy identifies GCP Vertex AI Workbench Instances that have auto-upgrade disabled. Auto-upgrading Google Cloud Vertex environments ensures timely security updates, bug fixes, and compatibility with APIs and libraries. It reduces security risks associated with outdated software, enhances stability, and enables access to new features and optimizations. It is recommended to enable auto-upgrade to minimize maintenance overhead and mitigate security risks. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. 
Select 'INSTANCES' tab\n5. Click on the reported notebook\n6. Go to 'SYSTEM' tab\n7. Enable 'Environment auto-upgrade'\n8. Configure upgrade schedule as required\n9. Click 'SUBMIT'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-registry' AND json.rule = 'webhooks[*] contains config and webhooks[*].config.serviceUri starts with http://'```,"Azure ACR HTTPS not enabled for webhook Ensure you send container registry webhooks only to a HTTPS endpoint. This policy checks your container registry webhooks and alerts if it finds a URI with HTTP. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Update your container registry webhook URI to use HTTPS.\n\n1. Sign in to the Azure portal.\n2. Navigate to the container registry in which you want to modify the webhook.\n3. Under Services, select Webhooks.\n4. Select your existing webhook.\n5. Near the top of the next window pane, select Configure.\n6. Under Service URI in the next window, modify your URI to use https:// and click Save.." "```config from cloud.resource where api.name = 'aws-ec2-client-vpn-endpoint' and json.rule = authorizationRules[*].accessAll exists and authorizationRules[*].accessAll equals ""True"" ```","Detect Unrestricted Access to EC2 Client VPN Endpoints This policy helps you identify AWS EC2 Client VPN endpoints that have been configured to allow access for all clients, which could potentially expose your VPN to unauthorized users. By detecting such configurations, the policy enables you to take necessary actions to secure your VPN endpoints, ensuring that only authorized clients can access your cloud resources and maintain a strong security posture in your public cloud environment. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy sizbn This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-ec2-describe-images' AND json.rule = image.platform contains windows and image.imageId contains ami-1e542176```,"AWS Amazon Machine Image (AMI) infected with mining malware This policy identifies Amazon Machine Images (AMIs) that are infected with mining malware. As per research, AWS Community AMI Windows 2008 hosted by an unverified vendor containing malicious code running an unidentified crypto (Monero) miner. It is recommended to delete such AMIs to protect from malicious activity and attack blast. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MALWARE']. Mitigation of this issue can be done as follows: To delete reported AMI follow below mentioned URL:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html." 
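The remediation for the infected AMI above is deregistration, which the linked guide walks through in the console. A minimal boto3 sketch of the same action (region is an assumption; deregistration is irreversible, so confirm nothing still launches from the image first):

```
import boto3

# Minimal sketch: locate the flagged AMI in this region and deregister it.
# Deregistering is irreversible; verify dependencies before running.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

images = ec2.describe_images(
    Filters=[{"Name": "image-id", "Values": ["ami-1e542176"]}]
)["Images"]
for image in images:
    print(f"Deregistering {image['ImageId']}")
    ec2.deregister_image(ImageId=image["ImageId"])
```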
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-organization-asset-group-member' as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/editor or roles[*] contains roles/owner or roles[*] contains roles/appengine.* or roles[*] contains roles/browser or roles[*] contains roles/compute.networkAdmin or roles[*] contains roles/cloudtpu.serviceAgent or roles[*] contains roles/composer.serviceAgent or roles[*] contains roles/composer.ServiceAgentV2Ext or roles[*] contains roles/container.serviceAgent or roles[*] contains roles/dataflow.serviceAgent)' as Y; filter '($.X.groupKey.id contains $.Y.user)'; show Y;```,"pcsup-13966-policy This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(22,22) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on SSH port (22) This policy identifies GCP Firewall rules which allow all inbound traffic on SSH port (22). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the SSH port (22) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist)```,"Azure storage account has a blob container with public access This policy identifies blob containers within an Azure storage account that allow anonymous/public access ('CONTAINER' or 'BLOB'). As a best practice, do not allow anonymous/public access to blob containers unless you have a very good reason. Instead, you should consider using a shared access signature token for providing controlled and time-limited access to blob containers. 'Public access level' allows you to grant anonymous/public read access to a container and the blobs within Azure blob storage. By doing so, you can grant read-only access to these resources without sharing your account key, and without requiring a shared access signature. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. 
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Storage Accounts' dashboard\n3. Select the reported storage account\n4. Under 'Data storage' section, Select 'Containers'\n5. Select the blob container you need to modify\n6. Click on 'Change access level'\n7. Set 'Public access level' to 'Private (no anonymous access)'\n8. Click on 'OK'." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy sklde This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'aws-lambda-list-functions' AND json.rule = policy.Statement[?any(Effect equals Allow and Principal equals ""*"" and Condition does not exist and (Action equals ""*"" or Action equals lambda:*))] exists```","AWS Lambda Function resource-based policy is overly permissive This policy identifies Lambda Functions that have overly permissive resource-based policy. Lambda functions having overly permissive policy could lead to lateral movement in account or privilege being escalated when compromised. It is highly recommended to have the least privileged access policy to protect the Lambda Functions from unauthorized access. For more details: https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: To modify permission from AWS Lambda Function resource-based policy\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to Configuration tab\n7. Select Permissions\n8. Scroll to the \""Resource-based policy\"" area\n9. For each policy statement, use fine-grained and restrictive permissions instead of using wildcards (Lambda:* and Resource:*) OR add in appropriate conditions with least privilege access.\n10. Click on \""Edit\"" button to modify the statement\n11. When you finish configuring the statement, choose 'Save'.\n\nTo remove permission from AWS Lambda Function resource-based policy\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to Configuration tab\n7. Select Permissions\n8. Scroll to the \""Resource-based policy\"" area\n9. For each policy statement, use fine-grained and restrictive permissions instead of using wildcards (Lambda:* and Resource:*) OR add in appropriate conditions with least privilege access.\n10. Click on \""Delete\"" button to modify the statement\n11. In Delete statement dialog box, click on \""Delete\"" button.." 
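Step 9 of the Lambda mitigation above (tightening wildcard statements) can also be scripted. A minimal boto3 sketch, assuming a hypothetical function name and that the function actually has a resource-based policy (get_policy raises ResourceNotFoundException otherwise):

```
import json
import boto3

# Minimal sketch: remove resource-based policy statements that allow "*"
# principals with wildcard actions and no Condition. Function name is hypothetical.
lam = boto3.client("lambda")
function_name = "example-function"

policy = json.loads(lam.get_policy(FunctionName=function_name)["Policy"])
for stmt in policy.get("Statement", []):
    wildcard_principal = stmt.get("Principal") in ("*", {"AWS": "*"})
    wildcard_action = stmt.get("Action") in ("*", "lambda:*")
    if stmt.get("Effect") == "Allow" and wildcard_principal and wildcard_action and "Condition" not in stmt:
        print(f"Removing overly permissive statement {stmt['Sid']}")
        lam.remove_permission(FunctionName=function_name, StatementId=stmt["Sid"])
```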
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-event-subscriptions' AND json.rule = 'sourceType equals db-instance and ((status does not equal active or enabled is false) or (status equals active and enabled is true and (sourceIdsList is not empty or eventCategoriesList is not empty)))'```,"AWS RDS Event subscription All event categories and All instances disabled for DB instance This policy identifies AWS RDS event subscriptions for DB instance which has 'All event categories' and 'All instances' is disabled. As a best practice enabling 'All event categories' for 'All instances' helps to get notified when an event occurs for a DB instance. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS Dashboard\n4. Click on 'Event subscriptions' (Left Panel)\n5. Choose the reported Event subscription\n6. Click on 'Edit'\n7. On 'Edit event subscription' page, Under 'Details' section; Select 'Yes' for 'Enabled' and Make sure you have subscribed your DB to 'All instances' and 'All event categories'\n8. Click on 'Edit'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = binaryAuthorization.evaluationMode does not exist or binaryAuthorization.evaluationMode equal ignore case EVALUATION_MODE_UNSPECIFIED or binaryAuthorization.evaluationMode equal ignore case DISABLED```,"asasas23 This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'AWS' AND finding.type = 'AWS GuardDuty IAM' AND finding.name = 'Impact:IAMUser/AnomalousBehavior'```,"GuardDuty IAM Impact: AnomalousBehavior This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy nrnqu This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = purpose equal ignore case ""ENCRYPT_DECRYPT"" and primary.state equals ""ENABLED"" and (rotationPeriod does not exist or rotationPeriod greater than 7776000)```","GCP KMS Symmetric key not rotating in every 90 days This policy identifies GCP KMS Symmetric keys that are not rotating every 90 days. A key is used to protect some corpus of data. A collection of files could be encrypted with the same key and people with decrypt permissions on that key would be able to decrypt those files. It's recommended to make sure the 'rotation period' is set to a specific time to ensure data cannot be accessed through the old key. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To configure automatic rotation for GCP KMS Symmetric keys, please refer to the URL given below and configure ""Rotation period"" to less than or equal to 90 days:\nhttps://cloud.google.com/kms/docs/rotating-keys#automatic." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-datacatalog-catalogs' AND json.rule = lifecycleState equal ignore case ACTIVE and (attachedCatalogPrivateEndpoints is empty or attachedCatalogPrivateEndpoints does not exist)```,"OCI Data Catalog configured with overly permissive network access This policy identifies Data Catalogs configured with overly permissive network access. The OCI Data Catalog service provides a centralized repository to manage and govern data assets, including their metadata. When network access settings are too permissive, it can expose sensitive metadata to unauthorized users or malicious actors, potentially leading to data breaches and compliance issues. As a best practice, it is recommended to configure the Data catalog with private endpoints; so that the Data catalog is accessible only to restricted entities. This is applicable to oci cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To configure private endpoint to your Data catalog, follow the below URL:\nhttps://docs.oracle.com/en-us/iaas/data-catalog/using/private-network.htm." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = 'policyName equals AWSSupportAccess and policyArn contains arn:aws:iam::aws:policy/AWSSupportAccess and (isAttached is false or (isAttached is true and entities.policyRoles[*].roleId is empty))'```,"AWS IAM support access policy is not associated to any role This policy identifies IAM policies with support role access which are not attached to any role for an account. AWS provides a support centre that can be used for incident notification and response, as well as technical support and customer services. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNUSED_PRIVILEGES']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Go to the IAM service under the Services panel.\n3. From the left panel click on 'Policies'\n4. Search for the existence of a support policy 'AWSSupportAccess'\n5. Create an IAM role\n6. Attach the 'AWSSupportAccess' managed policy to the created IAM role." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'state equals RUNNABLE and databaseVersion contains SQLSERVER and settings.databaseFlags[*].name contains ""user options""'```","GCP SQL server instance database flag user options is set This policy identifies GCP SQL server instances for which database flag user options is set. The user options option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. A user can override these defaults by using the SET statement. It is recommended that the user options database flag for SQL Server instances should not be configured. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. 
Navigate SQL Instances page\n3. Click on the reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', go to the flag 'user options' and click on delete icon\n6. Click on SAVE \n7. If 'Changes requires restart' pop-up appears, click on 'SAVE AND RESTART'." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'Policies[*] size > 0'```,"Alibaba Cloud RAM policy attached to users This policy identifies Resource Access Management (RAM) policies that are attached to users. By default, RAM users, groups, and roles have no access to Alibaba Cloud resources. RAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended that RAM policies be applied directly to groups and roles but not users. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Click on the reported RAM user\n5. Under the 'Permissions' tab, In 'Individual' sub-tab\n6. Click on 'Remove Permission' for user reported,\n7. On 'Remove Permission' popup window, Click on 'OK'\n\nIf a group with a similar policy already exists, put the user in that group. If such a group does not exist, create a new group with relevant policy and assign the user to the group.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-vpn-connection-list' AND json.rule = 'ipsecPolicies is empty and connectionType does not equal ExpressRoute'```,"Azure VPN is not configured with cryptographic algorithm This policy identifies Azure VPNs which are not configured with cryptographic algorithm. Azure VPN gateways to use a custom IPsec/IKE policy with specific cryptographic algorithms and key strengths, rather than the Azure default policy sets. IPsec and IKE protocol standard supports a wide range of cryptographic algorithms in various combinations. If customers do not request a specific combination of cryptographic algorithms and parameters, Azure VPN gateways use a set of default proposals. Typically due to compliance or security requirements, you can now configure your Azure VPN gateways to use a custom IPsec/IKE policy with specific cryptographic algorithms and key strengths, rather than the Azure default policy sets. It is thus recommended to use custom policy sets and choose strong cryptography. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow Microsoft Azure documentation and setup your respective VPN connections using strong recommended cryptographic requirements.\nFMI: https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-compliance-crypto#cryptographic-requirements." 
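The 'user options' flag removal described in the GCP SQL Server policy above can also be scripted. The following is a minimal sketch, not part of the original policy text: it drives the gcloud CLI from Python, the instance name and the flags to keep are placeholders, and patching database flags replaces the entire flag set (and may restart the instance), so adapt it to your environment before running.

```python
# Hedged sketch: clear the 'user options' flag on a Cloud SQL for SQL Server instance
# by re-applying only the flags you want to keep. All names here are placeholders.
import subprocess

INSTANCE = "my-sqlserver-instance"     # placeholder instance name
FLAGS_TO_KEEP = "remote access=off"    # comma-separated flags to retain; omit 'user options'

# 'gcloud sql instances patch --database-flags' replaces the whole flag set,
# so listing flags without 'user options' effectively unsets it.
subprocess.run(
    ["gcloud", "sql", "instances", "patch", INSTANCE,
     f"--database-flags={FLAGS_TO_KEEP}"],
    check=True,
)

# If no flags should remain at all, '--clear-database-flags' can be used instead.
```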
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(10250,10250) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp or IPProtocol contains ""all"")))] exists as X; config from cloud.resource where api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING as Y; filter '$.X.network contains $.Y.networkConfig.network' ; show X;```","GCP Firewall rule exposes GKE clusters by allowing all traffic on port 10250 This policy identifies GCP Firewall rule allowing all traffic on port 10250 which allows GKE full node access. The port 10250 on the kubelet is used by the kube-apiserver (running on hosts labelled as Orchestration Plane) for exec and logs. As per security best practice, port 10250 should not be exposed to the public. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: As port 10250 exposes sensitive information of GKE pod configuration it is recommended to disable this firewall rule. \nOtherwise, remove the overly permissive source IPs following the below steps,\n\n1. Login to GCP Console\n2. Navigate to 'VPC Network'(Left Panel)\n3. Go to the 'Firewall' section (Left Panel)\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = authType equal ignore case NONE```,"PCSUP-16458-CLI-Test This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to 'Configuration' tab\n7. Select 'Function URL'\n8. Click on 'Edit'\n9. Set 'Auth type' to 'AWS_IAM'\n10. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-spring-cloud-service' AND json.rule = properties.powerState equals Running and sku.tier does not equal Basic and properties.networkProfile.serviceRuntimeSubnetId does not exist```,"Azure Spring Cloud service is not configured with virtual network This policy identifies Azure Spring Cloud services that are not configured with a virtual network. Spring Cloud configured with a virtual network isolates apps and service runtime from the internet on your corporate network and provides control over inbound and outbound network communications for Azure Spring Cloud. As best security practice, It is recommended to deploy Spring Cloud service in a virtual network. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You can select your Azure virtual network only when you create a new Azure Spring Cloud service instance. 
You cannot change to use another virtual network after Azure Spring Cloud has been created. \nTo resolve this alert create a new Spring Cloud service configuring virtual network, migrate all data to newly created Spring Cloud service and then delete the reported Spring Cloud service.\n\nTo create a new Spring Cloud service with virtual network, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/spring-cloud/how-to-deploy-in-azure-virtual-network?tabs=azure-portal \n\nNOTE: Azure Virtual network feature is not available to Basic tier Spring Cloud services.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-instance' AND json.rule = 'instanceNetworkType does not equal vpc or vpcAttributes is empty'```,"Alibaba Cloud ECS instance is not using VPC network This policy identifies ECS instances which are still using the ECS classic network instead of the VPC network that enables you to leverage enhanced infrastructure security controls. Note: If you purchased an ECS instance after 17:00 (UTC+8) on June 14, 2017, you cannot choose the classic network type. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You can select the VPC network only when you create a new ECS instance. So to fix this alert, create a new ECS instance with VPC network and then migrate all required ECS instance data from the reported ECS instance to this newly created ECS instance.\n\nTo set up the new ECS instance with VPC network, perform the following:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. On the Instances list page, click Create Instance.\n5. Complete the Basic Configurations\n6. Click 'Next: Networking', Select a 'Network Type' as 'VPC'. Select the desired VPC and a VSwitch.\n7. Complete the System Configurations, Grouping and Preview the configurations.\n8. Click on 'Create Order'\n\nTo delete reported ECS instance, perform the following:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. Click on the reported ECS instance\n5. Click on 'Stop', It will be auto-released.." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy lgwpn This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
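For the Lambda function URL check above (functions whose URL auth type is NONE), the console steps map to a single API call. This is a hedged sketch only; the function name and region are placeholders, and any existing unauthenticated callers will break once IAM auth is required.

```python
# Hedged sketch: switch a Lambda function URL from unauthenticated (NONE) to IAM auth.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")  # assumed region

lambda_client.update_function_url_config(
    FunctionName="my-function",  # placeholder function name
    AuthType="AWS_IAM",          # callers must now sign requests with SigV4
)
```

After the change, invokers also need the lambda:InvokeFunctionUrl permission on the function.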
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-project-info' AND json.rule = commonInstanceMetadata.items[?any(key contains ""enable-oslogin"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] exists as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = (metadata.items[?any(key exists and key contains ""enable-oslogin"" and (value contains ""False"" or value contains ""N"" or value contains ""No"" or value contains ""false"" or value contains ""FALSE"" or value contains ""0""))] exists and name does not start with ""gke-"" and status equals RUNNING) as Y;filter'$.Y.zone contains $.X.name';show Y;```","GCP VM instance OS login overrides Project metadata OS login configuration This policy identifies GCP VM instances where OS login configuration is disabled and overriding enabled Project OS login configuration. Enabling OS Login ensures that SSH keys used to connect to instances are mapped with IAM users. Revoking access to IAM user will revoke all the SSH keys associated with that particular user. It facilitates centralized and automated SSH key pair management which is useful in handling cases like a response to compromised SSH key pairs. Note: Enabling OS Login on instances disables metadata-based SSH key configurations on those instances. Disabling OS Login restores SSH keys that you have configured in a project or instance metadata. Reference: https://cloud.google.com/compute/docs/instances/managing-instance-access This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Computer Engine (Left Panel)\n3. Go to the VM instances\n4. Select the alerted VM instance\n5. Click on the 'EDIT' button\n6. Go to 'Custom metadata'\n7. Remove the metadata entry where the key is 'enable-oslogin' and the value is 'FALSE' or 'false' or 0.(For more information on adding boolean values, refer: https://cloud.google.com/compute/docs/metadata/setting-custom-metadata#boolean)\n8. Click on 'Save' to apply the changes." "```config from cloud.resource where api.name = 'aws-emr-studio' AND json.rule = DefaultS3Location exists and DefaultS3Location contains ""aws-emr-studio-"" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains ""aws-emr-studio-"" as Y; filter 'not ($.X.BucketName equals $.Y.bucketName)' ; show X;```","AWS EMR Studio using the shadow resource bucket for workspace storage This policy identifies that the AWS EMR Studio using the bucket for workspace storage is not managed from the current account. This could potentially be using the shadow resource bucket for workspace storage. AWS EMR enables data processing and analysis using big data frameworks like Hadoop, Spark, and Hive. To create an EMR Studio, the EMR service automatically generates an S3 bucket. This S3 bucket follows the naming pattern 'aws-emr-studio-{Account-ID}-{Region}'. An attacker can create an unclaimed bucket with this predictable name and wait for the victim to deploy a new EMR Studio in a new region. This can result in multiple attacks, including cross-site scripting (XSS) when the user opens the compromised notebook in EMR Studio. 
It is recommended to verify the expected bucket owner, update the AWS EMR Studio workspace storage location, and enforce the aws:ResourceAccount condition in the policy of the service role used by AWS EMR so that the AWS account ID of the S3 bucket used by AWS EMR Studio is checked according to your business requirements. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update an EMR Studio with the new workspace storage, follow the below actions:\n\n1. Sign in to the AWS Management Console\n2. Move the required script to a new S3 bucket as per your requirements.\n3. Open the Amazon EMR console at https://console.aws.amazon.com/emr.\n4. Under EMR Studio on the left navigation, choose Studios.\n5. Select the reported studio from the Studios list and click the 'Edit' button on the right corner to edit the Studio details.\n6. Verify that the 'Workspace storage' is authorized and managed according to your business requirements. \n7. On the Edit studio page, update 'Workspace storage' by selecting 'Browse S3', and select the 'Encrypt Workspace files with your own AWS KMS key' as per your organisation's requirements.\n8. Click 'Save Changes'.." "```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals ""RUNNING"" as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-workbench-instance' as Y; filter ' $.Y.labels.resource-name equals $.X.labels.resource-name '; show X;```","GCP VM instance used by Vertex AI Workbench Instance This policy identifies GCP VM instances used by Vertex AI Workbench. Vertex AI Workbench relies on GCP Compute Engine VM instances for backend processing. The selection of the appropriate VM instance type, size, and configuration directly impacts the performance and security of the Workbench. Proper configuration of these VM instances is critical to ensuring the security of the associated Vertex AI environment. It is recommended to regularly identify and assess the VM instances supporting Vertex AI Workbench to maintain a strong security posture and ensure compliance with best practices. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Review and validate the GCP VM instances used by Vertex AI Workbench Instances. Verify the VM instance is configured as per organizational needs.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_validate_compliance_hyperion_policy_ss_finding_1 Description-d84c12b2-384e-429e-967a-2e9ea515846d This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.allowedToCreateSecurityGroups is true ```,"Azure user not restricted to create Microsoft Entra Security Group This policy identifies instances in the Microsoft Entra ID configuration where security group creation is not restricted to administrators only. 
When the ability to create security groups is enabled, all users in the directory can create new groups and add members to them. Unless there is a specific business need for this broad access, it is best to limit the creation of security groups to administrators only. As a best practice, it is recommended to restrict the ability to create Microsoft Entra Security Groups to administrators only. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under 'Manage' select 'Groups'\n4. Under 'Settings' select 'General'\n5. Under 'Security Groups' section, set 'Users can create security groups in Azure portals, API or PowerShell' to No\n6. Select 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-route-tables' AND json.rule = ""routes[*].vpcPeeringConnectionId exists and routes[?(@.destinationCidrBlock=='0.0.0.0/0' || @.destinationIpv6CidrBlock == '::/0')].vpcPeeringConnectionId starts with pcx""```","AWS route table with VPC peering overly permissive to all traffic This policy identifies VPC route tables with VPC peering connection which are overly permissive to all traffic. Being highly selective in peering routing tables is a very effective way of minimizing the impact of breach as resources outside of these routes are inaccessible to the peered VPC. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'VPC' dashboard from 'Services' dropdown\n4. From left menu, select 'Route Tables'\n5. Click on the alerted route table\n6. From top click on 'Action' button\n7. From the Action menu dropdown, select 'Edit routes'\n8. From the list of destination remove the extra permissive destination by clicking the cross symbol available for that destination\n9. Add a destination with 'least access'\n10. Click on 'Save Routes'.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_error_verbosity or settings.databaseFlags[?any(name contains log_error_verbosity and value contains verbose)] exists)""```","GCP PostgreSQL instance database flag log_error_verbosity is not set to default or stricter This policy identifies PostgreSQL database instances in which database flag log_error_verbosity is not set to default. The flag log_error_verbosity controls the amount of detail written in the server log for each message that is logged. Valid values are TERSE, DEFAULT, and VERBOSE. It is recommended to set log_error_verbosity to default or terse. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. 
If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_error_verbosity' from the drop-down menu and set the value as 'default' or 'terse'\nOR\nIf the flag has been set to other than default or terse, Under 'Customize your instance', In 'Flags' section choose the flag 'log_error_verbosity' and set the value as 'default' or 'terse'\n6. Click on 'DONE' and then 'SAVE'." ```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-table-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;```,"Azure Storage account diagnostic setting for table is disabled This policy identifies Azure Storage account tables that have diagnostic logging disabled. By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account tables. These logs provide valuable insights into the operations, performance, and security of the storage account tables. As a best practice, it is recommended to enable diagnostic logs on all storage account tables. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the table resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'." "```config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' AND json.rule = (profile equals MODERN or profile equals CUSTOM) and minTlsVersion does not equal ""TLS_1_2"" as X; config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' AND json.rule = sslPolicy exists as Y; filter ""$.X.selfLink contains $.Y.sslPolicy""; show Y;```","Check BC This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
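The route-table remediation for the VPC-peering policy above (replacing a 0.0.0.0/0 peering route with a least-access destination) can be expressed with boto3. This sketch is illustrative only: the route table ID, peering connection ID, and peer CIDR are placeholders, and you should confirm the catch-all route really targets the peering connection before deleting it.

```python
# Hedged sketch: swap an overly permissive VPC-peering route for a scoped one.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder route table
PEERING_ID = "pcx-0123456789abcdef0"       # placeholder peering connection
PEER_VPC_CIDR = "10.20.0.0/16"             # the peer VPC's actual CIDR block

# Remove the catch-all route that points at the peering connection ...
ec2.delete_route(RouteTableId=ROUTE_TABLE_ID, DestinationCidrBlock="0.0.0.0/0")

# ... and add a least-access route scoped to the peer VPC's CIDR.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=PEER_VPC_CIDR,
    VpcPeeringConnectionId=PEERING_ID,
)
```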
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = ""policies[*].policyAttributeDescriptions[?(@.attributeName=='DHE-RSA-AES128-SHA'|| @.attributeName=='DHE-DSS-AES128-SHA' || @.attributeName=='CAMELLIA128-SHA' || @.attributeName=='EDH-RSA-DES-CBC3-SHA' || @.attributeName=='DES-CBC3-SHA' || @.attributeName=='ECDHE-RSA-RC4-SHA' || @.attributeName=='RC4-SHA' || @.attributeName=='ECDHE-ECDSA-RC4-SHA' || @.attributeName=='DHE-DSS-AES256-GCM-SHA384' || @.attributeName=='DHE-RSA-AES256-GCM-SHA384' || @.attributeName=='DHE-RSA-AES256-SHA256' || @.attributeName=='DHE-DSS-AES256-SHA256' || @.attributeName=='DHE-RSA-AES256-SHA' || @.attributeName=='DHE-DSS-AES256-SHA' || @.attributeName=='DHE-RSA-CAMELLIA256-SHA' || @.attributeName=='DHE-DSS-CAMELLIA256-SHA' || @.attributeName=='CAMELLIA256-SHA' || @.attributeName=='EDH-DSS-DES-CBC3-SHA' || @.attributeName=='DHE-DSS-AES128-GCM-SHA256' || @.attributeName=='DHE-RSA-AES128-GCM-SHA256' || @.attributeName=='DHE-RSA-AES128-SHA256' || @.attributeName=='DHE-DSS-AES128-SHA256' || @.attributeName=='DHE-RSA-CAMELLIA128-SHA' || @.attributeName=='DHE-DSS-CAMELLIA128-SHA' || @.attributeName=='ADH-AES128-GCM-SHA256' || @.attributeName=='ADH-AES128-SHA' || @.attributeName=='ADH-AES128-SHA256' || @.attributeName=='ADH-AES256-GCM-SHA384' || @.attributeName=='ADH-AES256-SHA' || @.attributeName=='ADH-AES256-SHA256' || @.attributeName=='ADH-CAMELLIA128-SHA' || @.attributeName=='ADH-CAMELLIA256-SHA' || @.attributeName=='ADH-DES-CBC3-SHA' || @.attributeName=='ADH-DES-CBC-SHA' || @.attributeName=='ADH-RC4-MD5' || @.attributeName=='ADH-SEED-SHA' || @.attributeName=='DES-CBC-SHA' || @.attributeName=='DHE-DSS-SEED-SHA' || @.attributeName=='DHE-RSA-SEED-SHA' || @.attributeName=='EDH-DSS-DES-CBC-SHA' || @.attributeName=='EDH-RSA-DES-CBC-SHA' || @.attributeName=='IDEA-CBC-SHA' || @.attributeName=='RC4-MD5' || @.attributeName=='SEED-SHA' || @.attributeName=='DES-CBC3-MD5' || @.attributeName=='DES-CBC-MD5' || @.attributeName=='RC2-CBC-MD5' || @.attributeName=='PSK-AES256-CBC-SHA' || @.attributeName=='PSK-3DES-EDE-CBC-SHA' || @.attributeName=='KRB5-DES-CBC3-SHA' || @.attributeName=='KRB5-DES-CBC3-MD5' || @.attributeName=='PSK-AES128-CBC-SHA' || @.attributeName=='PSK-RC4-SHA' || @.attributeName=='KRB5-RC4-SHA' || @.attributeName=='KRB5-RC4-MD5' || @.attributeName=='KRB5-DES-CBC-SHA' || @.attributeName=='KRB5-DES-CBC-MD5' || @.attributeName=='EXP-EDH-RSA-DES-CBC-SHA' || @.attributeName=='EXP-EDH-DSS-DES-CBC-SHA' || @.attributeName=='EXP-ADH-DES-CBC-SHA' || @.attributeName=='EXP-DES-CBC-SHA' || @.attributeName=='EXP-RC2-CBC-MD5' || @.attributeName=='EXP-KRB5-RC2-CBC-SHA' || @.attributeName=='EXP-KRB5-DES-CBC-SHA' || @.attributeName=='EXP-KRB5-RC2-CBC-MD5' || @.attributeName=='EXP-KRB5-DES-CBC-MD5' || @.attributeName=='EXP-ADH-RC4-MD5' || @.attributeName=='EXP-RC4-MD5' || @.attributeName=='EXP-KRB5-RC4-SHA' || @.attributeName=='EXP-KRB5-RC4-MD5')].attributeValue equals true""```","AWS Elastic Load Balancer (Classic) SSL negotiation policy configured with insecure ciphers This policy identifies Elastic Load Balancers (Classic) which are configured with SSL negotiation policy containing insecure ciphers. An SSL cipher is an encryption algorithm that uses encryption keys to create a coded message. SSL protocols use several SSL ciphers to encrypt data over the Internet. 
As many of the other ciphers are not secure, it is recommended to use only the ciphers recommended in the following AWS link: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-ssl-security-policy.html. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n4. Click on the reported Load Balancer\n5. On the 'Listeners' tab, change the cipher for the 'HTTPS/SSL' rule\nFor a 'Predefined Security Policy', change 'Cipher' to 'ELBSecurityPolicy-TLS-1-2-2017-01' or latest\nFor a 'Custom Security Policy', select from the secure ciphers as recommended in the below AWS link:\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-ssl-security-policy.html\n6. 'Save' your changes." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-bigquery-dataset-list' AND json.rule = defaultEncryptionConfiguration.kmsKeyName does not exist```,"GCP BigQuery Dataset not configured with default CMEK This policy identifies BigQuery Datasets that are not configured with default CMEK. Setting a Default Customer-managed encryption key (CMEK) for a data set ensures any tables created in the future will use the specified CMEK if none other is provided. It is recommended to configure all BigQuery Datasets with default CMEK. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a default Customer-managed encryption key (CMEK), use the following command with the ""bq"" utility\nbq update --default_kms_key= \n\nPlease refer to the URL mentioned below for more details on the bq update command:\nhttps://cloud.google.com/bigquery/docs/reference/bq-cli-reference#bq_update." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(445,445) or destinationPortRanges[*] contains _Port.inRange(445,445) ))] exists```","Azure Network Security Group allows all traffic on Windows SMB (TCP Port 445) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Windows SMB TCP port 445. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SMB access solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. 
Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = 'policy.Statement[?any((Condition.StringNotEquals contains aws:SourceVpce and Effect equals Deny and (Action contains s3:* or Action[*] contains s3:*)) or (Condition.StringEquals contains aws:SourceVpce and Effect equals Allow and (Action contains s3:* or Action[*] contains s3:*)))] exists'```,"AWS S3 bucket having policy overly permissive to VPC endpoints This policy identifies S3 buckets that have the bucket policy overly permissive to VPC endpoints. It is recommended to follow the principle of least privileges ensuring that the VPC endpoints have only necessary permissions instead of full permission on S3 operations. NOTE: When applying the Amazon S3 bucket policies for VPC endpoints described in this section, you might block your access to the bucket without intending to do so. Bucket permissions that are intended to specifically limit bucket access to connections originating from your VPC endpoint can block all connections to the bucket. The policy might disable console access to the specified bucket because console requests don't originate from the specified VPC endpoint. So remediation should be done very carefully. For details refer https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to the S3 dashboard\n3. Choose the reported S3 bucket\n4. In the 'Permissions' tab, click on the 'Bucket Policy'\n5. Update the S3 bucket policy for the VPC endpoint so that it has only required permissions instead of full S3 permission.\nRefer for example: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and policies.default.Statement[?any(Principal.AWS contains * and Effect equal ignore case allow and Condition does not exist)] exists```,"AWS KMS Key policy overly permissive This policy identifies KMS Keys that have a key policy overly permissive. Key policies are the primary way to control access to customer master keys (CMKs) in AWS KMS. It is recommended to follow the principle of least privilege ensuring that KMS key policy does not have all the permissions to be able to complete a malicious action. For more details: https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html#overview-policy-elements This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop-down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS)\n4. Click on 'Customer managed keys' (Left Panel)\n5. 
Select reported KMS Customer managed key\n6. Click on the 'Key policy' tab\n7. Click on 'Edit',\nReplace the 'Everyone' grantee (i.e. '*') from the Principal element value with an AWS account ID or an AWS account ARN.\nOR\nAdd a Condition clause to the existing policy statement so that the KMS key is restricted.\n8. Click on 'Save Changes'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'config.isDotnetcoreVersionLatest exists and config.isDotnetcoreVersionLatest equals false'```,"Azure App Service Web app doesn't use latest .Net Core version This policy identifies App Service Web apps that are not configured with latest .Net Core version. Periodically, newer versions are released for .Net Core software either due to security flaws or to include additional functionality. It is recommended to use the latest .Net version for web apps in order to take advantage of security fixes, if any. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'App Services' dashboard\n3. Select the reported web app service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Click on 'General settings' tab, Ensure that Stack is set to .NET and Minor version is set to latest version.\n6. Click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equals CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals ""null"") and keyMetadata.customerMasterKeySpec equals SYMMETRIC_DEFAULT```","AWS Customer Master Key (CMK) rotation is not enabled This policy identifies Customer Master Keys (CMKs) that are not enabled with key rotation. AWS KMS (Key Management Service) allows customers to create master keys to encrypt sensitive data in different services. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop-down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS)\n4. Click on 'Customer managed keys' (Left Panel)\n5. Select reported KMS Customer managed key\n6. Under the 'Key Rotation' tab, Enable 'Automatically rotate this CMK every year'\n7. Click on Save." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-database-maria-db-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"Azure Database for MariaDB not configured with private endpoint This policy identifies Azure MariaDB database servers that are not configured with private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. 
Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses which includes IP addresses within Azure. It is recommended to create private endpoint for secure communication for your Azure MariaDB database. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure private endpoint for MariaDB, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/mariadb/howto-configure-privatelink-portal." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case Ready and ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and ['sqlServer'].['properties.privateEndpointConnections'] is empty and firewallRules[*] is empty```,"Azure SQL server public network access setting is enabled This policy identifies Azure SQL servers which have public network access setting enabled. Publicly accessible SQL servers are vulnerable to external threats with risk of unauthorized access or may remotely exploit any vulnerabilities. It is recommended to configure the SQL servers with IP-based strict server-level firewall rules or virtual-network rules or private endpoints so that servers are accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure IP-based strict server-level firewall rules on your SQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/azure-sql/database/firewall-create-server-level-portal-quickstart\n\nTo configure virtual-network rules on your SQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/azure-sql/database/vnet-service-endpoint-rule-overview\n\nTo configure private endpoints on your SQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/azure-sql/database/private-endpoint-overview\n\nNOTE: These settings take effect immediately after they're applied. You might experience connection loss if you don't meet the requirements for each setting.." "```config from cloud.resource where api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equal ignore case ""ACTIVE"" and backendSets.*.backends is empty OR backendSets.*.backends equals ""[]""```","OCI Load Balancer not configured with backend set This policy identifies OCI Load Balancers that have no backend set configured. A backend set is a crucial component of a Load Balancer, comprising a load balancing policy, a health check policy, and a list of backend servers. Without a backend set, the Load Balancer lacks the necessary configuration to distribute incoming traffic and monitor the health of backend servers. As best practice, it is recommended to properly configure the backend set for the Load Balancer to function effectively, distribute incoming data, and maintain the reliability of backend services. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To configure the OCI Load Balancers with backend sets, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingbackendsets_topic-Creating_Backend_Sets.htm#top." ```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals Tcp and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists```,"Azure overly permissive HTTP(S) access This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = (serverSecurityAlertPolicy.properties.state equal ignore case Disabled) or (serverSecurityAlertPolicy.properties.state equal ignore case Enabled and vulnerabilityAssessments[*].type does not exist)```,"Azure SQL Server ADS Vulnerability Assessment is disabled This policy identifies Azure SQL Server which has ADS Vulnerability Assessment setting disabled. Advanced Data Security - Vulnerability Assessment service scans SQL databases for known security vulnerabilities and highlight deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data. It is recommended to enable ADS - VA service. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on 'Enable Microsoft Defender for SQL' if Azure Defender is not enabled for SQL already\n5. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n6. Ensure that 'MICROSOFT DEFENDER FOR SQL' status is 'ON'\n7. 'Save' your changes." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.accountId) and (Action contains ""s3:Put*"" or Action contains ""s3:Delete*"" or Action equals ""*"" or Action contains ""s3:*"" or Action is member of ('s3:DeleteBucketPolicy','s3:PutBucketAcl','s3:PutBucketPolicy','s3:PutEncryptionConfiguration','s3:PutObjectAcl') ))] exists```","AWS S3 bucket with cross-account access This policy identifies the AWS S3 bucket policy allows one or more of the actions (s3:DeleteBucketPolicy, s3:PutBucketAcl, s3:PutBucketPolicy, s3:PutEncryptionConfiguration, s3:PutObjectAcl) for a principal in another AWS account. An S3 bucket policy that defines permissions and conditions for accessing an Amazon S3 bucket and its objects. Granting permissions like s3:DeleteBucketPolicy, s3:PutBucketAcl, s3:PutBucketPolicy, s3:PutEncryptionConfiguration, and s3:PutObjectAcl to other AWS accounts can lead to unauthorized access and potential data breaches. 
It is recommended to review and remove permissions from the S3 bucket policy by deleting statements that grant access to restricted actions for other AWS accounts. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Choose Permissions, and then choose Bucket Policy.\n5. In the Bucket policy editor text box, do one of the following:\n 5a. Remove the statements that grant access to denied actions to other AWS accounts\n or\n 5b. Remove the permitted denied actions from the statements\n6. Choose Save.." "```config from cloud.resource where cloud.type = 'ibm' and api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id is member of (""crn:v1:bluemix:public:cloud-object-storage::::serviceRole:ObjectReader"",""crn:v1:bluemix:public:cloud-object-storage::::serviceRole:ContentReader"") )] exists and resources[?any( attributes[?any( name equal ignore case ""resourceType"" and value equal ignore case ""bucket"" and operator is member of (""stringEquals"", ""stringMatch"") )] exists )] exists and subjects[?any( attributes[?any( name contains ""access_group_id"" and value contains ""AccessGroupId-PublicAccess"")] exists )] exists as X; config from cloud.resource where api.name = 'ibm-object-storage-bucket' as Y; filter ' $.X.resources[*].attributes[*].value intersects $.Y.name and $.X.resources[*].attributes[*].value intersects $.Y.service_instance_id '; show Y;```","IBM Cloud Object Storage bucket is publicly readable through an access group This policy identifies an IBM Cloud Object Storage bucket that is publicly readable by 'ObjectReader' or 'ContentReader' roles via the public access group. IBM Public Access Group is a predefined group that manages public permissions and access control for resources and services. Assigning an access policy to the public access group with a resource, provides access to the resource to anyone, whether they're a member of your account or not, because authentication is no longer required. With this configuration, you may risk compromising critical data by leaving the IBM Cloud Object Storage public. As a best security practice, avoid adding policies to the public access group to make sure buckets are not publicly accessible. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To remove the public access policy for a bucket,\n\n1. Log in to the IBM Cloud console\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Access groups' in the left panel\n3. Click on the 'Public Access' access group\n4. Click on the three dots in the right corner of a row for the policy that has the reported resource or bucket name in the Resources section\n5. Click on 'Remove' to delete the public access policy in the reported resource\n6. Review the policy details that you're about to remove, and confirm by clicking 'Remove'." 
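For the 'AWS S3 bucket with cross-account access' policy above, the statement clean-up described in the mitigation can be prototyped with boto3. This is a hedged sketch, not the policy's official remediation: the bucket name and account ID are placeholders, the matching logic only approximates the policy's RQL conditions, and the edited policy should be reviewed before it is applied.

```python
# Hedged sketch: remove Allow statements that grant sensitive S3 actions to
# principals outside the bucket owner's account. Placeholders throughout.
import json
import boto3

BUCKET = "my-bucket"             # placeholder bucket name
OWN_ACCOUNT_ID = "111122223333"  # placeholder account ID
SENSITIVE_ACTIONS = {
    "s3:DeleteBucketPolicy", "s3:PutBucketAcl", "s3:PutBucketPolicy",
    "s3:PutEncryptionConfiguration", "s3:PutObjectAcl", "s3:*", "*",
}

s3 = boto3.client("s3")
policy = json.loads(s3.get_bucket_policy(Bucket=BUCKET)["Policy"])

def grants_cross_account(stmt):
    """Approximate the policy's check: Allow + foreign-account ARN + sensitive action."""
    if stmt.get("Effect") != "Allow":
        return False
    principal = stmt.get("Principal", {})
    if not isinstance(principal, dict):
        return False  # bare "*" principals are covered by other public-access checks
    aws_principals = principal.get("AWS", [])
    if isinstance(aws_principals, str):
        aws_principals = [aws_principals]
    actions = stmt.get("Action", [])
    if isinstance(actions, str):
        actions = [actions]
    foreign = any("arn:" in p and OWN_ACCOUNT_ID not in p for p in aws_principals)
    risky = any(a in SENSITIVE_ACTIONS or a.startswith(("s3:Put", "s3:Delete"))
                for a in actions)
    return foreign and risky

policy["Statement"] = [s for s in policy["Statement"] if not grants_cross_account(s)]
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```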
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-appsync-graphql-api' AND json.rule = authenticationType equals ""API_KEY"" or additionalAuthenticationProviders[?any( authenticationType equals ""API_KEY"" )] exists```","AWS AppSync GraphQL API is authenticated with API key This policy identifies the AWS AppSync Graphql API using API key for primary or additional authentication methods. AWS AppSync GraphQL API is a fully managed service by Amazon Web Services for building scalable and secure GraphQL APIs. An API key is a hard-coded value in your application generated by the AWS AppSync service when you create an unauthenticated GraphQL endpoint. Using API keys for authentication can pose security risks such as exposure to unauthorized access and limited control over access privileges, potentially compromising sensitive data and system integrity. It is recommended to use authentication methods other than API Keys like IAM, Amazon Cognito User Pools, or OpenID Connect providers for securing AWS AppSync GraphQL APIs, to ensure enhanced security and access control. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Changing the API authorization mode from API key to other methods could cause potential disruptions to existing clients or applications relying on API key authentication. It may require updates to client configurations and authentication workflows for your applications.\n\nTo update the Primary authorization mode option for your AWS AppSync GraphQL API, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Front-end Web & Mobile', select 'AWS AppSync'\n4. Under the 'APIs' section, select the AppSync API that is reported\n5. Navigate to the 'Settings page' from the left panel, Click 'Edit' on the 'Primary authorization mode' section\n6. In the 'Primary authorization mode' window, change the 'Authorization mode' from 'API key' to other authentication methods and configure it according to your business requirements\n7. Click 'Save'\n\nTo update the Additional authorization modes for your AWS AppSync GraphQL API, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Front-end Web & Mobile', select 'AWS AppSync'\n4. Under the 'APIs' section, select the AppSync API that is reported\n5. Navigate to the 'Settings page' from the left panel, and click 'Add' in the 'Additional authorization modes' section.\n6. In the 'Additional authorization mode' window, select any 'Authorization mode' except 'API key' and configure according to your business requirements, and click 'Add'\n7. Navigate to the 'Settings page' from the left panel, select the 'API key' in the 'Authorization mode' column from the 'Additional authorization modes' section, and click 'Delete' to remove the API key authorization mode." 
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```,"again test perf of AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = 'customerSecretKeys[?any(lifecycleState equals ACTIVE and (_DateTime.ageInDays(timeCreated) > 90))] exists'```,"OCI users customer secret keys have aged more than 90 days without being rotated This policy identifies all of your IAM User customer secret keys which have not been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect OCI customer secret keys access directly or via SDKs or OCI CLI. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Select Identity & Security from the Services menu.\n3. Select Users from the Identity menu.\n4. Click on an individual user under the Name heading.\n5. Click on Customer Secret Keys in the lower left-hand corner of the page.\n6. Delete any Access Keys with a date of 90 days or older under the Created column of\nthe Customer Secret Keys.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = settings[?any( name equals MCAS and properties.enabled is false )] exists ```,"Azure Microsoft Defender for Cloud MCAS integration Disabled This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has Microsoft Defender for Cloud Apps (MCAS) integration disabled. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for MCAS. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. 
Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Integrations'\n6. Check/Enable option 'Allow Microsoft Defender for Cloud Apps to access my data'\n7. Select 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-fsx-file-system' AND json.rule = FileSystemType equals ""WINDOWS"" and ( WindowsConfiguration.AuditLogConfiguration.FileAccessAuditLogLevel equals ""DISABLED"" AND WindowsConfiguration.AuditLogConfiguration.FileShareAccessAuditLogLevel equals ""DISABLED"")```","AWS FSX Windows filesystem is not configured with file access auditing This policy identifies the AWS FSX Windows filesystem that lacks configuration for FileAccessAuditLogLevel and FileShareAccessAuditLogLevel. Amazon FSx for Windows File Server offers the capability to audit user access to files, folders, and file shares. The settings for FileAccessAuditLogLevel and FileShareAccessAuditLogLevel can be adjusted to record successful access attempts, failed attempts, both, or none, based on your auditing needs. Failing to configure these audit logs may result in unrecognized unauthorized access and potential non-compliance with security standards. It is advisable to set up logging for both file and folder access as well as file share access in alignment with your business needs. This ensures thorough logging, enhances visibility and accountability, supports compliance, and facilitates effective monitoring and incident response. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the file access auditing configuration, Perform the following actions:\n\n1. Sign into the AWS console and Open the Amazon FSx console at https://console.aws.amazon.com/fsx/\n2. Navigate to 'File systems', and choose the Windows file system that is reported\n3. Choose the 'Administration' tab\n4. On the 'File Access Auditing' panel, choose 'Manage'\n5. On the 'Manage file access auditing settings dialog', change the desired settings\n 5a. For 'Log access to files and folders', select the 'Log successful attempts' and/or 'Log failed attempts'\n \n or\n\n 5b. For 'Log access to file shares', select the 'Log successful attempts' and/or 'Log failed attempts'\n6. For 'Choose an audit event log destination', choose 'CloudWatch Logs' or 'Kinesis Data Firehose'. Then choose an existing log or delivery stream or create a new one\n7. Choose 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-mwaa-environment' AND json.rule = EnvironmentClass contains ""foo"" ```","bobby run build informational This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-bedrock-custom-model' as Y; filter ' $.Y.outputDataConfig.bucketName equals $.X.bucketName'; show X;```,"AWS S3 bucket used for storing AWS Bedrock Custom model training artifacts This policy identifies the AWS S3 bucket used for storing AWS Bedrock Custom model training job output. S3 buckets hold the results and artifacts generated from training models in AWS Bedrock. 
Ensuring proper configuration and access control is crucial to maintaining the security and integrity of the training output. Improperly secured S3 buckets used for storing AWS Bedrock training output can lead to unauthorized access and potential exposure of model information. It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Bedrock training job output and ensure compliance. NOTE: This policy is designed to identify the S3 buckets utilized for storing results and storing artifacts generated from training custom models in AWS Bedrock. It does not signify any detected misconfiguration or security risk. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the AWS Bedrock Custom model training results data, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.systemUpdatesMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with ""ASC Default""))'```","Azure Microsoft Defender for Cloud system updates monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have system updates monitoring set to disabled. It retrieves a daily list of available security and critical updates from Windows Update or Windows Server Update Services. The retrieved list depends on the service that's configured for that virtual machine and recommends that the missing updates be applied. For Linux systems, the policy uses the distro-provided package management system to determine packages that have available updates. It also checks for security and critical updates from Azure Cloud Services virtual machines. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'System updates should be installed on your machines' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'." "```config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = _AWSCloudAccount.orgHierarchyNames() does not intersect (""all-accounts"")```","jashah_ms_does_not_intersect This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = lifecycleState equal ignore case ACTIVE and capabilities.canUseConsolePassword is true and isMfaActivated is false```,"OCI MFA is disabled for IAM users This policy identifies Identity and Access Management (IAM) users for whom Multi Factor Authentication (MFA) is disabled. As a best practice, enable MFA to add an extra layer of protection for increased security of your OCI user’s identity and complete the sign-in process. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from Services menu\n3. Select Users from Identity menu.\n4. Click on each non-compliant user.\n5. Click on Enable Multi-Factor Authentication.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.." ```config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-aiplatform-model' as Y; filter ' $.Y.artifactUri contains $.X.id '; show X;```,"GCP Storage Bucket storing Vertex AI model This policy identifies publicly exposed GCS buckets that are used to store the GCP Vertex AI model. GCP Vertex AI models (except AutoML Models) are stored in the Storage bucket. Vertex AI model is considered sensitive and confidential intellectual property and its storage location should be checked regularly. The storage location should be as per your organization's security and compliance requirements. It is recommended to monitor, identify, and evaluate storage location for GCP Vertex AI model regularly to prevent unauthorized access and AI model thefts. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Review and validate the Vertex AI models are stored in the right Storage buckets. Move and/or delete the model and other related artifacts if they are found in an unexpected location. Review how the model was uploaded to an unauthorised/unapproved storage bucket.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_without_asset_type_finding_2 Description-3accdba0-4ab9-4751-8797-ed0c62c25bfb This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(27017,27017) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on MongoDB port (27017) This policy identifies GCP Firewall rules which allow all inbound traffic on MongoDB port (27017).
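As a hedged sketch of the remediation described later in this entry, the Compute Engine API can narrow the rule's source ranges; the project ID, firewall rule name, and CIDR below are hypothetical.
```python
from googleapiclient import discovery

# Uses Application Default Credentials for authentication.
compute = discovery.build("compute", "v1")

project = "example-project"      # hypothetical project ID
firewall_rule = "allow-mongodb"  # hypothetical firewall rule name

# Replace the 0.0.0.0/0 source range with a specific, trusted CIDR block.
body = {"sourceRanges": ["203.0.113.0/24"]}

compute.firewalls().patch(project=project, firewall=firewall_rule, body=body).execute()
```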
Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access to the MongoDB port (27017) be restricted to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the reported Firewall rule should indeed restrict traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"Azure Database for MySQL server not configured with private endpoint This policy identifies Azure MySQL database servers that are not configured with private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses which includes IP addresses within Azure. It is recommended to create a private endpoint for secure communication with your Azure MySQL database. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for MySQL servers'\n3. Click on the reported MySQL server instance you want to modify \n4. Select 'Networking' under 'Settings' from left panel \n5. Under 'Private endpoint', click on 'Add private endpoint' to add a private endpoint\n\nRefer to below link for step by step process:\nhttps://learn.microsoft.com/en-us/azure/mysql/single-server/how-to-configure-private-link-cli." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-group-settings' and json.rule = values[?any(name equals LockoutDurationInSeconds and (value less than 60 or value does not exist))] exists```,"Azure Microsoft Entra ID account lockout duration less than 60 seconds This policy identifies if the account lockout duration for Microsoft Entra ID (formerly Azure AD) accounts is configured to be less than 60 seconds. The lockout duration determines how long the account remains locked after exceeding the lockout threshold. A lockout duration of less than 60 seconds increases the risk of brute-force or password spray attacks. Malicious actors can exploit a short lockout period to attempt multiple logins more frequently, increasing the likelihood of gaining unauthorized access. Configuring the lockout duration to be at least 60 seconds helps reduce the frequency of repeated login attempts during a brute-force attack, improving protection against such attacks while ensuring a reasonable delay for legitimate users after exceeding the threshold. As a security best practice, it is recommended to configure the account lockout duration to greater than or equal to 60 seconds. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under Manage, select Security\n4. Under Manage, select Authentication methods\n5. Under Manage, select Password protection\n6. Set the 'Lockout duration in seconds' to 60 or higher\n7. Click 'Save'." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.AuthenticationConfiguration.KerberosConfiguration does not exist)' ; show X;```,"AWS EMR cluster is not configured with Kerberos Authentication This policy identifies EMR clusters which are not configured with Kerberos Authentication. Kerberos uses secret-key cryptography to provide strong authentication so that passwords or other credentials aren't sent over the network in an unencrypted format. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration\n7. Under the section 'Enable Kerberos authentication' select the check box\n8. Follow below link for configuration steps,\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-kerberos.html\n9. Click on 'Create' button\n10. On the left menu of EMR dashboard Click 'Clusters'\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'\n15. Once the new cluster is set up and verified to be working, terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted\n17. Click on the 'Terminate' button from the top menu\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.." ```config from cloud.resource where api.name = 'azure-container-registry' AND json.rule = (skuName contains Standard or skuName contains Premium) and properties.provisioningState equal ignore case Succeeded and properties.anonymousPullEnabled is true```,"Azure Container Registry with anonymous authentication enabled This policy identifies Azure Container Registries with anonymous authentication enabled, allowing unauthenticated access to the registry. Allowing anonymous pull or access to container registries poses a significant security risk, exposing them to unauthorized users who may retrieve or manipulate container images. To enhance security, disable anonymous access and require authentication through Azure Active Directory (Azure AD). 
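A possible programmatic remediation, sketched with the Azure management SDK and assuming the anonymous_pull_enabled property is exposed by the SDK/API version in use; the subscription ID, resource group, and registry names are hypothetical.
```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient

# Hypothetical identifiers; authentication uses DefaultAzureCredential.
client = ContainerRegistryManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

# Turn off anonymous pull on the registry (assumes the property is supported by the API version).
poller = client.registries.begin_update(
    "example-rg", "exampleregistry", {"anonymous_pull_enabled": False}
)
poller.result()
```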
Additionally, local authentication methods such as admin user, repository-scoped access tokens, and anonymous pull should be turned off to ensure authentication relies solely on Azure AD, providing improved control and accountability. As a security best practice, it is recommended to disable anonymous authentication for Azure Container Registries. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Currently, the Azure UI does not support disabling anonymous authentication for Azure Container Registries. To disable anonymous authentication, refer to the following link:\nhttps://learn.microsoft.com/en-us/azure/container-registry/anonymous-pull-access#disable-anonymous-pull-access." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/securityRules/write"" as X; count(X) less than 1```","Azure Activity log alert for Create or update network security group rule does not exist This policy identifies the Azure accounts in which activity log alert for Create or update network security group rule does not exist. Creating an activity log alert for Create or update network security group rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Security Rule (Microsoft.Network/networkSecurityGroups/securityRules)' and Other fields you can set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where api.name = 'aws-rds-db-cluster' as X; config from cloud.resource where api.name = 'aws-rds-db-cluster-parameter-group' AND json.rule = (((DBParameterGroupFamily starts with ""postgres"" or DBParameterGroupFamily starts with ""aurora-postgresql"") and (['parameters'].['rds.force_ssl'].['ParameterValue'] does not equal 1 or ['parameters'].['rds.force_ssl'].['ParameterValue'] does not exist)) or ((DBParameterGroupFamily starts with ""aurora-mysql"" or DBParameterGroupFamily starts with ""mysql"") and (parameters.require_secure_transport.ParameterValue is not member of (""ON"", ""1"") or parameters.require_secure_transport.ParameterValue does not exist))) as Y; filter '$.X.dBclusterParameterGroupArn equals $.Y.DBClusterParameterGroupArn' ; show X;```","AWS RDS cluster encryption in transit is not configured This policy identifies AWS RDS database clusters that are not configured with encryption in transit. This covers MySQL, PostgreSQL, and Aurora clusters. Enabling encryption is crucial to protect data as it moves through the network and enhances the security between clients and storage servers. Without encryption, sensitive data transmitted between your application and the database is vulnerable to interception by malicious actors. 
This could lead to unauthorized access, data breaches, and potential compromises of confidential information. It is recommended that data be encrypted while in transit to ensure its security and reduce the risk of unauthorized access or data breaches. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable the in-transit encryption feature for your Amazon RDS cluster, perform the following actions:\nDefault cluster parameter groups for RDS DB clusters cannot be modified. Therefore, you must create a custom parameter group, modify it, and then attach it to your RDS for Cluster. Changes to parameters in a customer-created DB cluster parameter group are applied to all DB clusters that are associated with the DB cluster parameter group.\nFollow the below links to create and associate a DB parameter group with a DB cluster,\nTo Create a DB cluster parameter group, refer to the below link\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBClusterParamGroups.html#USER_WorkingWithParamGroups.CreatingCluster\nTo Modifying parameters in a DB cluster parameter group,\n1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.\n2. In the navigation pane, choose 'Parameter Groups'.\n3. In the list, choose the parameter group that is associated with the reported RDS DB Cluster.\n4. For Parameter group actions, choose 'Edit'.\n5. Change the values of the parameters that you want to modify. You can scroll through the parameters using the arrow keys at the top right of the dialog box.\n6. In the 'Modifiable parameters' section, enter 'rds.force_ssl' in the Filter Parameters search box for PostgreSQL and Aurora PostgreSQL databases, and type 'require_secure_transport' in the search box for MySQL and Aurora MySQL databases.\n a. For the 'rds.force_ssl' database parameter, enter '1' in the Value configuration box to enable the Transport Encryption feature. \n or\n b. For the 'require_secure_transport' parameter, enter '1' for MySQL Databases or 'ON' for Aurora MySQL databases based on allowed values in the Value configuration box to enable the Transport Encryption feature.\n7. Choose Save changes.\n8. Reboot the primary (writer) DB instance in the cluster to apply the changes to it.\n9. Then reboot the reader DB instances to apply the changes to them.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-spanner-database' AND json.rule = state equal ignore case ready AND encryptionConfig.kmsKeyNames does not exist```,"GCP Spanner Databases not encrypted with CMEK This policy identifies GCP Spanner databases that are not encrypted with a Customer-Managed Encryption Key (CMEK). Google Cloud Spanner is a scalable, globally distributed, and strongly consistent database service. By using CMEK with Spanner, you retain complete control over the encryption keys protecting your sensitive data, ensuring that only authorized users with access to these keys can decrypt and access the information. Without CMEK, data is encrypted with Google-managed keys, which may not provide the level of control required for handling sensitive data in certain industries. It is recommended to encrypt Spanner database data using a Customer-Managed Encryption Key (CMEK). This is applicable to gcp cloud and is considered a low severity issue. 
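Because encryption can only be chosen at database creation (as the mitigation below also notes), a hedged sketch using the Spanner database admin client is shown here; the project, instance, database, and KMS key names are all hypothetical placeholders.
```python
from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient

client = DatabaseAdminClient()

# Create a new database protected by a customer-managed KMS key (all names are placeholders).
operation = client.create_database(
    request={
        "parent": "projects/example-project/instances/example-instance",
        "create_statement": "CREATE DATABASE `example-db`",
        "encryption_config": {
            "kms_key_name": (
                "projects/example-project/locations/us-central1/"
                "keyRings/example-ring/cryptoKeys/example-key"
            )
        },
    }
)
operation.result()  # waits for the long-running create operation to finish
```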
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Encryption configuration can only be updated during spanner database creation. Follow the below steps to create a new spanner database with a customer-managed encryption key:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the Cloud Spanner page\n2. Under instances, select the instance under which the reported database exists\n3. Under databases, select the 'CREATE DATABASE' option\n4. Under the create database page, under the 'SHOW ENCRYPTION OPTIONS' section, select 'Cloud KMS Key'\n5. Select the KMS key you prefer\n6. Click on 'CREATE'.\n\nNote: It is recommended to migrate data from an old database to a new database created.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudformation-describe-stacks' AND json.rule = ""(($.stackResources[?( @.resourceType == 'AWS::EC2::SecurityGroup' || @.resourceType == 'AWS::EC2::SecurityGroupIngress' || @.resourceType == 'AWS::EC2::NetworkAclEntry')].resourceStatus any equal CREATE_COMPLETE) or ($.stackResources[?( @.resourceType == 'AWS::EC2::SecurityGroup' || @.resourceType == 'AWS::EC2::SecurityGroupIngress' || @.resourceType == 'AWS::EC2::NetworkAclEntry')].resourceStatus any equal UPDATE_COMPLETE)) and (($.cloudFormationTemplate.Resources.{}.SecurityGroupIngress[*].CidrIp any equal 0.0.0.0/0 or $.cloudFormationTemplate.Resources.{}.SecurityGroupIngress[*].CidrIpv6 any equal ::/0 or $.cloudFormationTemplate.Resources.{}.Properties.CidrIp any equal 0.0.0.0/0 or $.cloudFormationTemplate.Resources.{}.Properties.CidrIpv6 any equal ::/0) or ($.cloudFormationTemplate.Resources.{}.Properties.CidrBlock any equal 0.0.0.0/0 or $.cloudFormationTemplate.Resources.{}.Properties.Ipv6CidrBlock any equal ::/0 or $.cloudFormationTemplate.Resources.{}.Properties.Protocol any equal -1))""```","AWS CloudFormation template contains globally open resources This alert triggers if a CloudFormation template that when launched will result in resources allowing global network access. Below are three common causes: - Security Group with a {0.0.0.0/0, ::/0} rule - Network Access Control List with a {0.0.0.0/0, ::/0} rule - Network Access Control List with -1 IpProtocol This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Prisma Cloud encourages you to review the template and ensure this is the intended behavior.\n\n1. Goto the AWS CloudFormation dashboard.\n2. Click on the Stack you want to modify.\n3. Select the Template tab and then View in Designer.\n4. Make your template modifications.\n5. Check for syntax errors in your template by choosing Validate template near the top of the page.\n6. Select Save from the file (icon) menu.\n7. Choose Amazon S3 bucket, name your template and Save.\n8. Copy the bucket URL and click OK.\n9. Select Close to close Designer. \n10. Click on the Stack you want to modify.\n11. From the Actions pull down menu, select Update stack\n12. Choose Replace current template and paste the URL from Designer into the Amazon S3 URL field. Then click on Next.\n13. Specify stack details, then click on Next.\n14. Configure stack options, then click on Next.\n15. Review, then select Update stack near the bottom of the page.." 
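For the CloudFormation policy above, a hedged boto3 sketch of reproducing the check outside Prisma Cloud is shown below; the stack name is hypothetical and the template parsing is deliberately simplified (JSON templates only), so treat it purely as an illustration.
```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Fetch the deployed template for a stack (the stack name is a placeholder).
template = cfn.get_template(StackName="example-stack")["TemplateBody"]
if not isinstance(template, dict):
    template = json.loads(template)  # assumes a JSON template; YAML would need a YAML parser

open_cidrs = {"0.0.0.0/0", "::/0"}
for name, resource in template.get("Resources", {}).items():
    props = resource.get("Properties", {})
    rules = list(props.get("SecurityGroupIngress", []))
    if "CidrIp" in props or "CidrIpv6" in props:  # standalone ingress / NACL entry resources
        rules.append(props)
    for rule in rules:
        if rule.get("CidrIp") in open_cidrs or rule.get("CidrIpv6") in open_cidrs:
            print(f"{name}: ingress rule open to the world")
```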
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Authorization/policyAssignments/write"" as X; count(X) less than 1```","Azure Activity log alert for Create policy assignment does not exist This policy identifies the Azure accounts in which activity log alert for Create policy assignment does not exist. Creating an activity log alert for Create policy assignment gives insight into changes done in azure policy - assignments and may reduce the time it takes to detect unsolicited changes. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create policy assignment (Microsoft.Authorization/policyAssignments)' and Other fields you can set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = enabled is true and origins.items[*] contains customOriginConfig and origins.items[?any(customOriginConfig.originProtocolPolicy does not contain https-only and ( domainName contains "".data.mediastore."" or domainName contains "".mediapackage."" or domainName contains "".elb."" ))] exists```","AWS CloudFront origin protocol policy does not enforce HTTPS-only This policy identifies AWS CloudFront which has an origin protocol policy that does not enforce HTTPS-only. Enforcing HTTPS protocol policy between origin and CloudFront will encrypt all communication and will be more secure. As a security best practice, enforce HTTPS-only traffic between a CloudFront distribution and the origin. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Communication between CloudFront and your Custom Origin should enforce HTTPS-only traffic. Modify the CloudFront Origin's Origin Protocol Policy to HTTPS only.\n\n1. Go to the AWS console CloudFront dashboard.\n2. Select your distribution Id.\n3. Select the 'Origins' tab.\n4. Check the origin you want to modify then select Edit.\n5. Change the Origin Protocol Policy to 'https-only.'\n6. Select 'Yes, Edit.'." "```config from cloud.resource where api.name = 'oci-networking-loadbalancer' AND json.rule = listeners.*.protocol equals HTTP and lifecycleState equals ACTIVE and isPrivate is false as X; config from cloud.resource where api.name = 'oci-loadbalancer-waf' AND json.rule = lifecycleState equal ignore case ACTIVE and (webAppFirewallPolicyId exists and webAppFirewallPolicyId does not equal ""null"") as Y; filter 'not ($.X.id equals $.Y.loadBalancerId) '; show X;```","OCI Load balancer not configured with Web application firewall (WAF) This policy identifies OCI Load balancers that are not configured with a Web application firewall (WAF). 
A Web Application Firewall (WAF) helps protect web applications by filtering and monitoring HTTP traffic between a web application and the Internet. Without WAF, load balancers are vulnerable to various web-based attacks, including SQL injection, cross-site scripting (XSS), and other common exploits. This can lead to unauthorized access, data breaches, and other security incidents. As a best practice, it is recommended to configure Web Application Firewall (WAF) for OCI Load Balancers to enhance security. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure an OCI Load Balancer with a Web Application Firewall (WAF), refer to the following documentation:\nhttps://docs.oracle.com/en/learn/oci-waf-flex-lbaas/index.html#introduction." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-emr-describe-cluster' AND json.rule = status.state does not contain TERMINATING and terminationProtected is false```,"AWS EMR cluster is not enabled with termination protection This policy identifies the AWS EMR Cluster that is not enabled with termination protection. Termination protection serves as a safeguard against unintentional termination of your clusters. When this feature is enabled, any efforts to terminate the cluster via the AWS Management Console, CLI, or API will be prevented unless the protection is deliberately disabled beforehand. This feature is particularly beneficial for long-running or essential clusters, as accidental termination could lead to data loss or considerable downtime. It is advisable to activate termination protection on AWS EMR clusters to prevent accidental terminations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To turn termination protection on for the AWS EMR cluster with the console, Perform the following actions:\n\n1. Sign in to the AWS Management Console, and open the Amazon EMR console at https://console.aws.amazon.com/emr\n2. Under EMR on EC2 in the left navigation pane, choose 'Clusters'\n3. Click on the cluster that is reported\n4. On the 'Properties' tab on the cluster details page, Under 'Cluster termination and node replacement' section click 'Edit'\n5. Select to use 'Termination protection' check box to turn the feature on or off\n6. Select 'Save changes' to confirm." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case Ready and firewallRules[*] is empty and properties.network.publicNetworkAccess equal ignore case Enabled```,"Azure Database for MySQL flexible server public network access setting is enabled This policy identifies Azure Database for MySQL flexible servers which have public network access setting enabled. Publicly accessible MySQL servers are vulnerable to external threats with risk of unauthorized access or may remotely exploit any vulnerabilities. As a best security practice, it is recommended to configure the MySQL servers with IP-based strict server-level firewall rules or virtual-network rules or private endpoints so that servers are accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
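As one hedged illustration of the firewall-rule approach described in the mitigation that follows, the Azure SDK for Python (azure-mgmt-rdbms) can create a restricted server-level rule; the subscription ID, resource group, server name, and IP range below are hypothetical.
```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.mysql_flexibleservers import MySQLManagementClient

# Hypothetical identifiers; authentication uses DefaultAzureCredential.
client = MySQLManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

# Create (or replace) a firewall rule limited to a known address range
# instead of 0.0.0.0 - 255.255.255.255.
poller = client.firewall_rules.begin_create_or_update(
    resource_group_name="example-rg",
    server_name="example-mysql-flex",
    firewall_rule_name="office-range",
    parameters={"start_ip_address": "203.0.113.0", "end_ip_address": "203.0.113.255"},
)
poller.result()
```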
Mitigation of this issue can be done as follows: To configure IP-based strict server-level firewall rules on your MySQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/mysql/flexible-server/how-to-manage-firewall-portal\n\nTo configure virtual-network rules on your MySQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/mysql/flexible-server/how-to-manage-virtual-network-portal\n\nTo configure private endpoints on your MySQL server, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/mysql/flexible-server/how-to-networking-private-link-portal\n\nNOTE: These settings take effect immediately after they're applied. You might experience connection loss if you don't meet the requirements for each setting.." ```config from cloud.resource where resource.status = Active AND api.name = 'oci-compute-instance' AND json.rule = lifecycleState exists```,"OCI Hosts test - Ali This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(25,25) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on SMTP port (25) This policy identifies GCP Firewall rules which allow all inbound traffic on SMTP port (25). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the SMTP port (25) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = secrets[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is true```,"Azure Key Vault secret has no expiration date (RBAC Key vault) This policy identifies Azure Key Vault secrets that do not have an expiry date for the RBAC Key vaults. As a best practice, set an expiration date for each secret and rotate the secret regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].id' | xargs -I {} az role assignment create --assignee """" --role ""Key Vault Reader"" --scope {} This is applicable to azure cloud and is considered a informational severity issue. 
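For the secret-expiry policy above, a hedged sketch using the azure-keyvault-secrets client is shown here as a programmatic alternative to the portal steps that follow; the vault URL and secret name are hypothetical.
```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault URL; authentication uses DefaultAzureCredential.
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Set an expiration date roughly one year out on an existing secret (placeholder name).
client.update_secret_properties(
    "example-secret", expires_on=datetime.now(timezone.utc) + timedelta(days=365)
)
```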
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Select 'All services' > 'Key vaults'.\n3. Select the Key vault instance where the secrets are stored.\n4. Select 'Secrets', and select the secret that you need to modify.\n5. Select the current version.\n6. Set the expiration date.\n7. 'Save' your changes.." ```config from cloud.resource where api.name = 'aws-code-build-project' AND json.rule = environment.privilegedMode exists and environment.privilegedMode is true```,"AWS CodeBuild project environment privileged mode is enabled This policy identifies the CodeBuild projects where the privileged mode is enabled. Privileged mode grants unrestricted access to all devices and runs the Docker daemon inside the container. It is recommended to enable this mode only for building Docker images. It is recommended to disable the privileged mode to prevent unintended access to Docker APIs and container hardware, reducing the risk of potential tampering or critical resource deletion. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable the Privileged mode for the CodeBuild project:\n\n1. Log in to the AWS Management Console.\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Navigate to 'Developer Tools' from the 'Services' dropdown and select the 'CodeBuild'.\n4. In the navigation pane, choose 'Build projects'.\n5. Select the reported Build project and choose Edit, then click 'Environment'.\n6. On the Edit Environment page, expand the configuration by clicking the 'Override image' button.\n7. Uncheck the checkbox 'Enable this flag if you want to build Docker images or want your builds to get elevated privileges.' under the 'Privileged' section.\n8. When you have finished changing your CodeBuild environment configuration, click 'Update environment'.." ```config from cloud.resource where api.name='gcloud-sql-instances-list' AND json.rule='$.settings.backupConfiguration.binaryLogEnabled is false and $.databaseVersion contains MYSQL'```,"GCP SQL MySQL DB instance point-in-time recovery backup (Binary logs) is not enabled This policy identifies Cloud SQL MySQL DB instances whose point-in-time recovery backup is not enabled. In case of an error, point-in-time recovery helps you recover an instance to a specific point in time. It is recommended to enable automated backups with point-in-time recovery to prevent any data loss in case of an unwanted scenario. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable point-in-time recovery backup (Binary logs) for the reported MySQL instance:\n\nhttps://cloud.google.com/sql/docs/mysql/backup-recovery/pitr." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-elastic-address' AND json.rule = associationId does not exist```,"AWS Elastic IP not in use This policy identifies unused Elastic IP (EIP) addresses in your AWS account. An Elastic IP adds charges to your monthly bill even when it is not associated with any resource. 
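A hedged boto3 sketch of listing and releasing unassociated Elastic IPs is shown below; releasing an address is irreversible for that allocation, so confirm each address is genuinely unneeded before running anything like it.
```python
import boto3

ec2 = boto3.client("ec2")

# List Elastic IPs in the region and release those not associated with any resource.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print(f"Releasing unused EIP {address['PublicIp']}")
        ec2.release_address(AllocationId=address["AllocationId"])
```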
As a best practice, it is recommended to associate/remove Elastic IPs that are not associated with any resources, it will also help you avoid unexpected charges on your bill. For more details: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-eips-associating This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the VPC dashboard \n4. Go to 'Elastic IPs', from the left panel\n5. Select the reported Elastic IP\n- If Elastic IP is not required; release IP by selecting 'Release Elastic IP address' from the 'Actions' dropdown.\n- If Elastic IP is required; associate IP by selecting 'Associate Elastic IP address' from the 'Actions' dropdown.." "```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = metadataOptions.httpEndpoint exists and metadataOptions.httpEndpoint equals ""enabled"" and metadataOptions.httpPutResponseHopLimit greater than 1 as X; config from cloud.resource where api.name = 'aws-describe-auto-scaling-groups' as Y; filter ' $.X.launchConfigurationName equal ignore case $.Y.launchConfigurationName'; show X;```","AWS Auto Scaling group launch configuration configured with Instance Metadata Service hop count greater than 1 This policy identifies the autoscaling group launch configuration where the Instance Metadata Service network hops count is set to greater than 1. A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. With the metadata response hop limit count for the IMDS greater than 1, the PUT response that contains the secret token can travel outside the EC2 instance. Only metadata with a limited hop count for all your EC2 instances is recommended. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You cannot modify a launch configuration after you create it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration with IMDS with a hop count equal to 1.\n\nTo update the Auto Scaling group to use the new launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration, choose Actions, then click 'Copy launch configuration'. This sets up a new launch configuration with the same options as the original, but with 'Copy' added to the name.\n4. On the 'Create launch configuration' page, expand 'Advanced details' under 'Additional Configuration - optional'.\n5. Under the 'Advanced details', go to the 'Metadata response hop limit' section.\n6. Edit the text box and set the value to 1.\n7. When you have finished, click on the 'Create launch configuration' button at the bottom of the page.\n8. On the navigation pane, under Auto Scaling, choose Auto Scaling Groups.\n9. Select the check box next to the Auto Scaling group.\n10. 
A split pane opens up at the bottom part of the page, showing information about the group that's selected.\n11. On the Details tab, click on the 'Edit' button adjacent to the 'Launch configuration' option.\n12. Under the 'Launch configuration' dropdown, select the newly created launch configuration.\n13. When you have finished changing your launch configuration, click on the 'Update' button at the bottom of the page.\n\nAfter you change the launch configuration for an Auto Scaling group, any new instances are launched with the new configuration options. Existing instances are not affected. To update existing instances, \n\n1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Refer 'Configure instance metadata options for existing instances' section from the following URL:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-existing-instances.html\n\nTo delete the reported Auto Scaling group launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration, choose Actions, then click 'Delete launch configuration'.\n4. Click on the 'Delete' button to delete the autoscaling group launch configuration.." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains AuthorizeSecurityGroupIngress and $.X.filterPattern contains AuthorizeSecurityGroupEgress and $.X.filterPattern contains RevokeSecurityGroupIngress and $.X.filterPattern contains RevokeSecurityGroupEgress and $.X.filterPattern contains CreateSecurityGroup and $.X.filterPattern contains DeleteSecurityGroup) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Security group changes are not monitored This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_parser_stats or settings.databaseFlags[?any(name contains log_parser_stats and value contains on)] exists)""```","GCP PostgreSQL instance database flag log_parser_stats is not set to off This policy identifies PostgreSQL database instances in which database flag log_parser_stats is not set to off. The PostgreSQL planner/optimizer is responsible to parse and verify the syntax of each query received by the server. The log_parser_stats flag enables a crude profiling method for logging parser performance statistics. 
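As a hedged sketch of the flag change described in the remediation later in this entry, the Cloud SQL Admin API can patch the instance settings; the project and instance names are hypothetical, and patching databaseFlags replaces the entire flag list, so any existing flags must be included.
```python
from googleapiclient import discovery

# Uses Application Default Credentials; project and instance names are placeholders.
sqladmin = discovery.build("sqladmin", "v1")

# NOTE: this replaces the full databaseFlags list; include any other existing flags as well.
body = {"settings": {"databaseFlags": [{"name": "log_parser_stats", "value": "off"}]}}

sqladmin.instances().patch(
    project="example-project", instance="example-postgres", body=body
).execute()
```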
Even though it can be useful for troubleshooting, it may increase the number of logs significantly and have performance overhead. It is recommended to set log_parser_stats as off. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_parser_stats' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_parser_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and config.http20Enabled is false```,"Azure Logic App does not utilize HTTP 2.0 version This policy identifies Azure Logic apps that are not utilizing HTTP 2.0 version. Azure Logic app using HTTP 1.0 for its connection is considered not secure as HTTP 2.0 version has additional performance improvements on the head-of-line blocking problem of old HTTP version, header compression, and prioritisation of requests. HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming. As a security best practice, it is recommended to configure HTTP 2.0 version for Logic apps connections. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under 'Setting' section, click on 'Configuration'\n5. Under 'General settings' tab, Set 'HTTP version' to '2.0'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_system_policy_as_child_policies_ss_finding_1 Description-d2b8d109-2e3d-4743-8da0-41e105b5cecc This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and (copyTagsToSnapshot is false or copyTagsToSnapshot does not exist) and engine does not contain aurora and engine does not contain docdb and engine does not contain neptune```,"AWS RDS instance with copy tags to snapshots disabled This policy identifies RDS instances that have copy tags to snapshots disabled. Copy tags to snapshots copies all the user-defined tags from the DB instance to snapshots. Copying tags allow you to add metadata and apply access policies to your Amazon RDS resources. NOTE: Setting Copy tags to snapshots for an Aurora DB instance has no effect on the DB setting. So Aurora DB instances are excluded from policy check. This is applicable to aws cloud and is considered a informational severity issue. 
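A hedged boto3 sketch of the setting change described in the remediation below (the instance identifier is hypothetical):
```python
import boto3

rds = boto3.client("rds")

# Enable copying of user-defined tags to snapshots for a DB instance (placeholder name).
rds.modify_db_instance(
    DBInstanceIdentifier="example-db-instance",
    CopyTagsToSnapshot=True,
    ApplyImmediately=True,
)
```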
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS console\n4. Choose DB Instances, and then select the reported DB instance\n5. Click on 'Modify'\n6. In 'Additional Configuration' section, In 'Backup' sub-section select the 'Copy tags to snapshots'\n7. Click on 'Continue'\n8. On the 'Summary of Modifications' panel, review the configuration changes. From 'Scheduling of Modifications' section, select whether changes to 'Apply immediately' or 'Apply during the next scheduled maintenance window'.\n9. On the confirmation page, Review the changes and Click on 'Modify DB Instance' to save your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = state.code contains active and listeners[?any(protocol equals HTTP and defaultActions[?any(type equals redirect and redirectConfig.protocol equals HTTPS)] does not exist )] exists```,"AWS Elastic Load Balancer v2 (ELBv2) listener that allow connection requests over HTTP This policy identifies Elastic Load Balancers v2 (ELBv2) listener that are configured to accept connection requests over HTTP instead of HTTPS. As a best practice, use the HTTPS protocol to encrypt the communication between the application clients and the application load balancer. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. Click on 'Listeners' tab\n7.'Edit' the 'Listener ID' rule that uses HTTP\n8. Select 'HTTPS' in the 'Protocol : port' section, Choose appropriate Default action, Security policy and Default SSL certificate parameters as per your requirement.\n9. Click on 'Update'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case Ready and properties.network.publicNetworkAccess equal ignore case Enabled and firewallRules[?any(properties.startIpAddress equals 0.0.0.0 and properties.endIpAddress equals 255.255.255.255)] exists```,"Azure Database for MySQL flexible server firewall rule allow access to all IPv4 address This policy identifies Azure Database for MySQL flexible servers which have firewall rule allowing access to all IPV4 address. MySQL server having a firewall rule with start IP being 0.0.0.0 and end IP being 255.255.255.255 (i.e. all IPv4 addresses) would allow access to server from any host on the internet. Allowing access to all IPv4 addresses expands the potential attack surface and exposes the MySQL server to increased threats. As a best security practice, it is recommended to configure the MySQL servers with restricted IP-based server-level firewall rules so that servers are accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. 
Navigate to Azure Database for MySQL flexible servers dashboard\n3. Click on reported MySQL server\n4. Under 'Settings', click on 'Networking'.\n5. Under 'Firewall rules' section, delete the rule which has 'Start IP' as 0.0.0.0 and 'End IP' as 255.255.255.255. Add specific IPs as per your business requirement.\n6. Click on 'Save'\n\nNOTE: These settings take effect immediately after they're applied. You might experience connection loss if you don't meet the requirements for each setting.." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case kubernetes and state equal ignore case normal and features.pullSecretApplied is false```,"IBM Cloud Kubernetes cluster has Image pull secrets disabled This policy identifies IBM Cloud Kubernetes clusters with the image pull secrets feature disabled. Image pull secrets store the registry credentials used to connect to the container registry. It is recommended to enable the image pull secrets feature so that registry credentials are properly protected. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To enable image pull secrets feature on a Kubernetes cluster, refer to the following URLs:\nhttps://cloud.ibm.com/docs/containers?topic=containers-registry#imagePullSecret_migrate_api_key\nhttps://cloud.ibm.com/docs/containers?topic=containers-registry#update-pull-secret." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-rds-db-cluster' AND json.rule = status contains available and (engine contains postgres or engine contains mysql) and iamdatabaseAuthenticationEnabled is false```,"AWS RDS cluster not configured with IAM authentication This policy identifies RDS clusters that are not configured with IAM authentication. If you enable IAM authentication you don't need to store user credentials in the database, because authentication is managed externally using IAM. IAM database authentication ensures that network traffic to and from database clusters is encrypted using Secure Sockets Layer (SSL), lets you centrally manage access to your database resources, and uses profile credentials instead of a password, for greater security. For details: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable IAM authentication on your RDS cluster follow the below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Enabling.html." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded and properties.privateEndpointConnections[*] does not exist```,"Azure Cosmos DB Private Endpoint Connection is not configured This policy identifies Cosmos DBs that are not configured with a private endpoint connection. Azure Cosmos DB private endpoints can be configured using Azure Private Link. Private Link allows users to access an Azure Cosmos account from within the virtual network or from any peered virtual network. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. It is recommended to configure Private Endpoint Connection to Cosmos DB. 
This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL to configure Private endpoints on your Cosmos DB:\nhttps://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-user' AND json.rule = userType equals Guest```,"Azure Active Directory Guest users found This policy identifies Azure Active Directory Guest users. Azure Active Directory allows B2B collaboration which lets you invite people from outside your organisation to be guest users in your cloud account. Avoid creating guest user in your cloud account unless you have business need. Guest users are usually added for users outside your employee on-boarding/off-boarding process and could potentially be overlooked leading to a potential vulnerability. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to 'Azure Active Directory' (Left Panel)\n3. Click on 'Users' under 'Manage'\n4. Search for reported user in search pane\n5. Select on check box for the reported user\n6. Click on 'Delete user' in top pane\n7. Select 'OK' to confirm\n\nNote: Verify impact caused by deleting Guest user." "```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = name does not start with ""gke-"" and status equals RUNNING as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' as Y; filter '($.X.serviceAccounts[*].email equals $.Y.user) and not($.Y.roles[*] contains projects or $.Y.roles[*] all equal roles/viewer)'; show X;```","GCP VM instances with excessive service account permissions This policy identifies VM instances with service account which have excessive permissions other than viewer/reader access. It is recommended that each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job. In practice, this means you should configure service accounts for your instances with the following process: - Create a new service account rather than using the Compute Engine default service account. - Grant IAM roles to that service account for only the resources that it needs. - Configure the instance to run as that service account. - Configure VM instance least permissive service account with only viewer/reader role until it is necessary to have more access. Avoid granting more access than necessary and regularly check your service account permissions to make sure they are up-to-date. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Note: To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance.\n\nTo change service account of the stopped instance:\n1. Login to GCP portal \n2. Go to Compute Engine\n3. Choose VM instances\n4. Click on the reported VM instance for which you want to change the service account\n5. 
If the instance is not stopped, click the Stop button. Wait for the instance to be stopped\n6. Next, click the Edit button\n7. Scroll down to the Service Account section, From the drop-down menu, select the desired service account\nNote: To fix this alert either you have to associate service account which has only viewer access or if VM has desired service account and access then dismiss the alert for particular VM instance.\n8. Click the Save button." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_ss_update_child_policy_finding_1 Description-81f1240b-8ec0-4626-86af-79a0b93913f4 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-queue-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;```,"Azure Storage account diagnostic setting for queue is disabled This policy identifies Azure Storage account queues that have diagnostic logging disabled. By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account queues. These logs provide valuable insights into the operations, performance, and security of the storage account queues. As a best practice, it is recommended to enable diagnostic logs on all storage account queues. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the queue resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isSpecialCharactersRequired isFalse'```,"OCI IAM password policy for local (non-federated) users does not have a symbol This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a symbol in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. 
Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 SPECIAL CHARACTER.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.." "```config from cloud.resource where api.name = 'gcloud-compute-external-backend-service' AND json.rule = iap does not exist or iap.enabled equals ""false""```","GCP Identity-Aware Proxy (IAP) not enabled for External HTTP(s) Load Balancer This policy identifies GCP External HTTP(s) Load Balancers for which Identity-Aware Proxy(IAP) is disabled. IAP is used to enforce access control policies for applications and resources. It works with signed headers or the App Engine standard environment Users API to secure connections to External HTTP(s) Load Balancers. It is recommended to enable Identity-Aware Proxy for securing the External HTTP(s) Load Balancers. Reference: https://cloud.google.com/iap/docs/concepts-overview This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable IAP on the external HTTP(S) load balancer:\n\nhttps://cloud.google.com/iap/docs/load-balancer-howto#enable-iap." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = ""not ( diagnosticSettings.value[*].properties.logs[*].enabled any equal true and diagnosticSettings.value[*].properties.logs[*].enabled size greater than 0 )""```","Azure Key Vault audit logging is disabled This policy identifies Azure Key Vault instances for which audit logging is disabled. As a best practice, enable audit event logging for Key Vault instances to monitor how and when your key vaults are accessed, and by whom. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Select 'Key vaults'\n3. Select the key vault instance to modify\n4. Select 'Diagnostic settings' under 'Monitoring'\n5. Click on '+Add diagnostic setting'\n6. Specify a 'Diagnostic settings name',\n7. Under 'Category details' section, Under Log, select 'AuditEvent'\n8. Under section 'Destination details',\na. If you select 'Send to Log Analytics workspace', set the 'Subscription' and 'Log Analytics workspace'\nb. If you select 'Archive to storage account', set the 'Subscription', 'Storage account' and 'Retention (days)'\nc. If you select 'Stream to an event hub', set the 'Subscription', 'Event hub namespace', 'Event hub name' and 'Event hub policy name'\nd. If you select 'Send to partner solution', set the 'Subscription' and 'Destination'\n9. Click on 'Save'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-aiplatform-endpoint' AND json.rule = encryptionSpec.kmsKeyName does not exist```,"GCP Vertex AI Endpoint not encrypted with CMEK This policy identifies GCP Vertex AI Endpoints that are not encrypted with CMEK. Customer Managed Encryption Keys (CMEK) for a Vertex AI Endpoint provide control over the encryption of data at rest. Encrypting GCP Vertex AI Endpoints with CMEK enhances security by giving you full control over encryption keys. This ensures data protection, especially for sensitive models and predictions. 
CMEK allows key rotation and revocation, aligning with compliance requirements and offering better data privacy management. It is recommended to use CMEK for Vertex AI Endpoint encryption. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Vertex AI Endpoint encryption cannot be changed after creation. To make use of CMEK a new Endpoint can be created.\n\nTo create a new Vertex AI Endpoint, please follow the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'DEPLOY AND USE', go to 'Online prediction'\n4. Select 'ENDPOINTS' tab\n5. Click 'CREATE'\n6. Configure the endpoint name and access as required\n7. Click on 'ADVANCED OPTIONS', and then select 'Cloud KMS key'\n8. Select the appropriate 'Key type' and then select the required CMEK\n9. Click 'CONTINUE'\n10. Configure the Model settings as required, click 'CONTINUE'\n11. Configure the Model monitoring as required, click 'CONTINUE'\n12. Click 'CREATE'\n\nTo delete an existing Vertex AI Endpoint, please follow the steps below:\n\n1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'DEPLOY AND USE', go to 'Online prediction'\n4. Select 'ENDPOINTS' tab\n5. Click on the alerting endpoint\n6. Click on 'View More' bottom (three dots) for any model from the list.\n7. Click 'Undeploy model from endpoint'\n8. In the Undeploy model from endpoint dialog, click 'Undeploy'\n9. Repeat step 6-8 for all models listed\n10. Go back to 'Online prediction' page\n11. Select the alerting endpoint checkbox\n12. Click 'DELETE'." "```config from cloud.resource where api.name = 'aws-glue-job' as X; config from cloud.resource where api.name = 'aws-glue-security-configuration' as Y; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Z; filter '$.X.SecurityConfiguration does not exist or ( $.X.SecurityConfiguration equals $.Y.name and ($.Y.encryptionConfiguration.s3Encryption[*].s3EncryptionMode does not equal ""SSE-KMS"" or ($.Y.encryptionConfiguration.s3Encryption[*].kmsKeyArn exists and $.Y.encryptionConfiguration.s3Encryption[*].kmsKeyArn equals $.Z.keyMetadata.arn)))' ; show X;```","AWS Glue Job not encrypted by Customer Managed Key (CMK) This policy identifies AWS Glue jobs that are encrypted using the default KMS key instead of CMK (Customer Managed Key) or using the CMK that is disabled. AWS Glue allows you to specify whether the data processed by the job should be encrypted when stored in data storage locations such as Amazon S3. To protect sensitive data from unauthorized access, users can specify CMK to get enhanced security, and control over the encryption key and also comply with any regulatory requirements. It is recommended to use a CMK to encrypt the AWS Glue job data as it provides complete control over the encrypted data. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To encrypt data processed by AWS Glue jobs, configure encryption settings within the security configuration of the Glue job. 
Security configurations cannot be edited from the console, so we need to create a new security configuration with the necessary settings and attach it to the existing Glue job.\n\nTo add a security configuration using the AWS Glue console,\n\n1. Sign in to the AWS Management Console: Go to the AWS Management Console at https://console.aws.amazon.com/.\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to AWS Glue: In the ""Find Services"" search box, type ""Glue"" and select ""AWS Glue"" from the search results.\n4. To add a security configuration using the AWS Glue console, choose 'Security Configurations' in the navigation pane.\n5. Choose 'Add security configuration'.\n6. on the Security configuration properties, Enter a unique security configuration name in the name text box.\n7. To Enable S3 encryption, select the checkbox under the 'Enable S3 encryption' section.\n8. Select the 'SSE-KMS' option in the 'Encryption mode' and choose an AWS KMS CMK key, or choose Enter a key ARN of the CMK and provide the ARN for the key that you are managing according to your business requirements.\n9. Click 'Create' to create a security configuration.\n\n\nTo add a security configuration to the existing glue job.\n\n1. Sign in to the AWS Management Console: Go to the AWS Management Console at https://console.aws.amazon.com/.\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to AWS Glue: In the ""Find Services"" search box, type ""Glue"" and select ""AWS Glue"" from the search results.\n4. Choose the ETL jobs in the navigation pane.\n5. select the reported job under the Your Jobs section.\n6. select the Job details tab.\n7. select the newly created security configuration from the dropdown in the 'Security configuration' section under the 'Advance properties' dropdown.\n8. Click 'Save'.\n\nTo enable the KMS CMK key, please refer to the below link.\nhttps://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html#enabling-keys-console." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-compute' AND json.rule = properties.properties.state equal ignore case running and (properties.computeType equal ignore case ComputeInstance or properties.computeType equal ignore case AmlCompute ) and properties.disableLocalAuth is false```,"Azure Machine Learning compute instance with local authentication enabled This policy identifies Azure Machine Learning compute instances that are using local authentication. Disabling local authentication improves security by mandating the use of Microsoft Entra ID for authentication. Local authentication can lead to security risks and unauthorized access. Using Microsoft Entra ID ensures a more secure and compliant authentication process. As a security best practice, it is recommended to disable local authentication and use Microsoft Entra ID for authentication. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Disabling local authentication on an existing Azure Machine Learning compute instance without deleting and recreating it is not supported. The recommended approach to secure your compute instance is to set it up without local authentication from the beginning.\n\nTo create a new compute instance without local authentication:\n1. 
Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported compute instance is associated with\n4. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n5. A new tab will open for Azure ML Studio\n6. In the left panel, under 'Manage' section, click on the 'Compute'\n7. Click 'New' to create a new compute instance\n8. In the 'Security' tab, under the 'Enable SSH access' section, leave the option disabled to turn off local authentication\n9. Select 'Review + Create' to create the compute instance." "```config from cloud.resource where api.name = 'oci-networking-networkloadbalancer' AND json.rule = lifecycleState equal ignore case ""ACTIVE"" and backendSets.*.backends is empty OR backendSets.*.backends equals ""[]""```","OCI Network Load Balancer not configured with backend set This policy identifies OCI Network Load Balancers that have no backend set configured. A backend set is a crucial component of a Network Load Balancer, comprising a load balancing policy, a health check policy, and a list of backend servers. Without a backend set, the Network Load Balancer lacks the necessary configuration to distribute incoming traffic and monitor the health of backend servers. As best practice, it is recommended to properly configure the backend set for the Network Load Balancer to function effectively, distribute incoming data, and maintain the reliability of backend services. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Network Load Balancers with backend sets, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingbackendsets_topic-Creating_Backend_Sets.htm#top." ```config from cloud.resource where api.name = 'aws-redshift-describe-clusters' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.encrypted is true and $.X.kmsKeyId equals $.Y.key.keyArn and $.Y.keyMetadata.keyManager contains AWS'; show X;```,"AWS Redshift Cluster not encrypted using Customer Managed Key This policy identifies Redshift Clusters which are encrypted with default KMS keys and not with Keys managed by Customer. It is a best practice to use customer managed KMS Keys to encrypt your Redshift databases data. Customer-managed CMKs give you more flexibility, including the ability to create, rotate, disable, define access control for, and audit the encryption keys used to help protect your data. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption with Customer Managed Key on your Redshift cluster follow the steps mentioned in below URL:\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html." ```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-storage-buckets-list' AND json.rule = logging does not exist```,"GCP Storage Bucket does not have Access and Storage Logging enabled This policy identifies storage buckets that do not have Access and Storage Logging enabled. 
By enabling access and storage logs on target Storage buckets, it is possible to capture all events which may affect objects within target buckets. It is recommended that storage Access Logs and Storage logs are enabled for every Storage Bucket. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the steps mentioned in the below link to enable Access and Storage logs using GSUTIL or JSON API.\nReference : https://cloud.google.com/storage/docs/access-logs." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'status contains PENDING_VALIDATION'```,"AWS Certificate Manager (ACM) contains certificate pending validation This policy identifies invalid certificates which are in AWS Certificate Manager. When your Amazon ACM certificates are not validated within 72 hours after the request is made, those certificates become invalid and you will have to request new certificates, which could cause interruption to your applications or services. Though AWS Certificate Manager automatically renews certificates issued by the service that is used with other AWS resources. However, the ACM service does not automatically renew certificates that are not currently in use or not associated anymore with other AWS resources. So the renewal process including validation must be done manually before these certificates become invalid. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To validate Certificates: \n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Validate your certificate for your domain using either Email or DNS validation, depending upon your certificate validation method.\n\nOR\n\nIf the certificate is not required you can delete that certificate. To delete invalid Certificates:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Under 'Actions' drop-down click on 'Delete'\n\nNote: This alert will get auto-resolved, as the certificate becomes invalid in 72 hours. It is recommended to either delete or validate the certificate within the timeframe.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'accessKeys[*] size > 0 and accessKeys[*].status any equal Active and loginProfile[*] is not empty'```,"Alibaba Cloud RAM user has both console access and access keys This policy identifies Resource Access Management (RAM) users who have both console access and access keys. When a RAM user is created, the Administrator can assign either console access or access keys or both. As a best practice, it is recommended to assign console access to users and access keys for system / API applications, but not both to the same RAM user. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. 
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Click on reported user\n5. Based on the requirement and company policy, either delete the access keys or Remove Logon Settings for the reported RAM user.." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""secrets-manager"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""resourceType"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for Secrets Manager service This policy identifies IBM Cloud Service ID, which has policy with administrator role permission for the Secrets Manager service. A Service ID with admin access will be able to perform all platform tasks for Secrets Manager, including the creation, modification, and deletion of Secrets Manager service instances, as well as the assignment of access policies to other users. On Secret Manager, there is a chance that sensitive data might be exposed in the underlying service if a Service ID with administrative rights is compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Service IDs' in the left panel.\n3. Select the Service ID which is reported and you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section > Click on three dots on the right corner of a row for the policy, which has administrator permission on 'Secrets Manager' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Sql/servers/firewallRules/delete"" as X; count(X) less than 1```","Azure Activity log alert for Delete SQL server firewall rule does not exist This policy identifies the Azure accounts in which activity log alert for Delete SQL server firewall rule does not exist. Creating an activity log alert for Delete SQL server firewall rule gives insight into SQL server firewall rule access changes and may reduce the time it takes to detect suspicious activity. 
This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete server firewall rule (Microsoft.Sql/servers/firewallRules)' and Other fields you can set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( ( remote.cidr_block equals ""0.0.0.0/0"" or remote.name equals $.name ) and direction equals ""inbound"" )] exists as X; config from cloud.resource where api.name = 'ibm-vpc' as Y; filter ' $.X.id equals $.Y.default_security_group.id '; show X;```","IBM Cloud Default Security Group allow ingress rule from 0.0.0.0/0 This policy identifies IBM Cloud Default Security Groups which has ingress rules that allow traffic from 0.0.0.0/0. A VPC comes with a default security group whose initial configuration allows access from all members that are attached to this security group. If you do not specify a security group when you launch a Virtual Server, the Virtual Server is automatically assigned to this default security group. As a result, the Virtual Server will be having risk of uncontrolled connectivity. It is recommended that Default Security Group allows network ports, protocols, and services listening on a system with validated business needs that are running on each system. This is applicable to ibm cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Source type' as 'Any' or 'Source' as Security Groups name\n6. Click on 'Delete'." "```config from cloud.resource where cloud.account = 'Bikram-Personal-AWS Account' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = versioningConfiguration.status contains ""Off"" ```","bikram-test-policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.enablePurgeProtection is false```,"Azure Key Vault Purge protection is not enabled This policy identifies Azure Key Vault which has Purge protection disabled. Enabling Azure Key Vault Purge protection feature prevents malicious deletion of a key vault which can lead to permanent data loss. It is recommended to enable Purge protection for Azure Key Vault which protects by enforcing a mandatory retention period for soft deleted key vaults. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Key vaults', and select the reported key vault from the list\n3. Under 'Settings' select 'Properties'\n4. For 'Purge protection' click on 'Enable Purge protection (enforce a mandatory retention period for deleted vaults and vault objects)'\n5. Click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter ""($.Y.bucketName==$.X.s3BucketName) and ($.Y.acl.grants[*].grantee contains AllUsers or $.Y.acl.grants[*].permission contains FullControl) and ($.Y.policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Action contains s3:* or $.Y.policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Action contains s3:*)"" ; show Y;```","AWS S3 Bucket Policy allows public access to CloudTrail logs This policy scans your bucket policy that is applied to the S3 bucket to prevent public access to the CloudTrail logs. CloudTrail logs a record of every API call made in your AWS account. These logs file are stored in an S3 bucket. Bucket policy or the access control list (ACL) applied to the S3 bucket does not prevent public access to the CloudTrail logs. It is recommended that the bucket policy or access control list (ACL) applied to the S3 bucket that stores CloudTrail logs prevents public access. Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account's use or configuration. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Goto S3\n3. Choose the reported S3 bucket and click Properties\n4. In the Properties pane, click the Permissions tab.\n5. If the Edit bucket policy button is present, select it.\n6. Remove any statement having an effect Set to 'Allow' and a principal set to '*'.\nNote: We recommend that you do not configure CloudTrail to write into an S3 bucket that resides in a different AWS account.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_add_remove_child_policy_hyperion_policy_ss_finding_1 Description-e12f27fd-c82b-4362-8105-60994fe17eec This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'oci-block-storage-boot-volume' AND json.rule = lifecycleState equal ignore case ""AVAILABLE"" AND kmsKeyId is member of (""null"")```","OCI boot volume is not encrypted with Customer Managed Key (CMK) This policy identifies OCI boot volumes that are not encrypted with a Customer Managed Key (CMK). Encrypting boot volumes with a CMK enhances data security by providing an additional layer of protection. Effective management of encryption keys is crucial for safeguarding and accessing sensitive data. Customers should review boot volumes encrypted with Oracle service managed keys to determine if they prefer managing keys for specific volumes and implement their own key lifecycle management accordingly. 
As a best practice, it is recommended to encrypt OCI boot volumes using a Customer Managed Key (CMK) to strengthen data security measures. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the OCI Console.\n2. Switch to the Region of the reported resource from the Region drop-down in the top-right corner.\n3. Type the reported boot volume name into the Search box at the top of the Console.\n4. Click on the reported boot volume from the search results.\n5. Next to ""Encryption Key"", click on ""Assign"".\n6. Choose the Vault Compartment, Vault, Master Encryption Key Compartment and Master Encryption Key.\n7. Click Assign.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and firewallRules[?any(startIpAddress equals ""0.0.0.0"" and endIpAddress equals ""0.0.0.0"")] exists```","Copy of Azure SQL Server allow access to any Azure internal resources This policy identifies SQL Servers that are configured to allow access to any Azure internal resources. Firewall settings with start IP and end IP both set to ‘0.0.0.0’ represent access to the entire Azure internal network. When this setting is enabled, the SQL server will accept connections from all Azure resources, including resources in other subscriptions. It is recommended to use firewall rules or VNET rules to allow access from specific network ranges or virtual networks. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the 'SQL servers' dashboard\n3. Click on the reported SQL server\n4. Click on 'Networking' under Security\n5. Unselect 'Allow Azure services and resources to access this server' under Exceptions if selected.\n6. Remove any firewall rule that allows access to 0.0.0.0 in startIpAddress and endIpAddress, if any.\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = logging.clusterLogging[*].types[*] all empty or logging.clusterLogging[*].enabled is false```,"AWS EKS control plane logging disabled Amazon EKS control plane logging provides audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account. These logs make it easy for you to secure and run your clusters. You can select the exact log types you need, and logs are sent as log streams to a group for each Amazon EKS cluster in CloudWatch. This policy generates an alert if control plane logging is disabled. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable control plane logs:\n\n1. Log in to the AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Logging, choose 'Manage logging'\n5. For each individual log type, choose Enabled\n6. Click on 'Save changes'." 
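For reference, the EKS console steps above can also be scripted. The following is a minimal sketch using boto3; the cluster name and region are placeholders rather than values taken from the policy data.

```python
# Hedged sketch: enable all five EKS control plane log types for a cluster.
# "my-cluster" and the region are placeholders; substitute the reported cluster.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.update_cluster_config(
    name="my-cluster",  # placeholder cluster name
    logging={
        "clusterLogging": [
            {
                "types": [
                    "api",
                    "audit",
                    "authenticator",
                    "controllerManager",
                    "scheduler",
                ],
                "enabled": True,
            }
        ]
    },
)
```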
"```config from cloud.resource where api.name = 'aws-lambda-list-functions' as X; config from cloud.resource where api.name = 'aws-iam-list-roles' as Y; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action equals ""*"" or Action contains :* or Action[*] contains :*) and (Resource equals ""*"" or Resource[*] anyStartWith ""*"") and Condition does not exist)] exists as Z; filter '$.X.role equals $.Y.role.arn and $.Y.attachedPolicies[*].policyName equals $.Z.policyName'; show Z;```","AWS IAM policy attached to AWS Lambda execution role is overly permissive This policy identifies Lambda Functions execution role having overly permissive IAM policy attached to it. Lambda functions having overly permissive policy could lead to lateral movement in account or privilege being escalated when compromised. It is highly recommended to have the least privileged access policy to protect the Lambda Functions from unauthorized access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Refer to the following URL to give fine-grained and restrictive permissions to IAM Policy:\nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html#edit-managed-policy-console." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isNumericCharactersRequired isFalse'```,"OCI IAM password policy for local (non-federated) users does not have a number This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a number in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 NUMERIC CHARACTER.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-rds-describe-db-instances' AND json.rule = 'publiclyAccessible is true'```,"AWS RDS database instance is publicly accessible This policy identifies RDS database instances which are publicly accessible. DB instances should not be publicly accessible to protect the integrity of data. Public accessibility of DB instances can be modified by turning on or off the Public accessibility parameter. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the 'RDS' service.\n4. Select the RDS instance reported in the alert, Click on 'Modify' \n5. 
Under 'Network and Security', update the value of 'public accessibility' to 'No' and Click on 'Continue'\n6. Select the required 'Scheduling of modifications' option and click on 'Modify DB Instance'." "```config from cloud.resource where cloud.type = 'aws' and api.name= 'aws-rds-db-cluster-snapshots' AND json.rule = dbclusterSnapshotAttributes[?any( attributeName equals restore and attributeValues[*] contains ""all"" )] exists```","AWS RDS Cluster snapshot is accessible to public This policy identifies AWS RDS Cluster snapshots that are accessible to the public. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up and manage databases. If RDS Cluster snapshots are inadvertently shared publicly, any unauthorized user with AWS console access can gain access to the snapshots and to sensitive data. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the 'RDS' service\n4. Click on 'Snapshots'\n5. Under 'Manual' tab select the reported RDS Cluster\n6. Click on 'Actions' and select 'Share snapshot'\n7. Under 'DB snapshot visibility' select 'Private'\n8. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-ec2-ebs-encryption' AND cloud.region IN ( 'AWS Ohio' ) AND json.rule = ebsEncryptionByDefault is false```,"Roman - AWS EBS volume region with encryption is disabled - Revised for This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-method' AND json.rule = authorizationType contains NONE```,"AWS API gateway request authorisation is not set This policy identifies AWS API Gateways of protocol type REST for which the request authorisation is not set. The method request for API gateways takes the client input that is passed to the back end through the integration request. It is recommended to add an authorization type to each of the methods to add a layer of protection. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS management console\n2. Navigate to 'API Gateway' service\n3. Select the region for which the API gateway is reported.\n4. Find the alerted API by the API gateway ID, which is the first part of the reported resource, and click on it\n5. Navigate to the reported method\n6. Click on the clickable link of 'Method Request'\n7. Under section 'Settings', click on the pencil symbol for 'Authorization' field\n8. From the dropdown, select the type of Authorization as per the requirement \n9. Click on the tick symbol next to it to save the changes." 
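The API Gateway change above can also be made programmatically. The sketch below uses boto3 to switch a REST method's authorization type from NONE to AWS_IAM and then redeploy the stage; the API ID, resource ID, HTTP method, and stage name are placeholders for the reported resource.

```python
# Hedged sketch: set an authorization type on an API Gateway REST method.
# All identifiers below are placeholders for the reported API and method.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

apigw.update_method(
    restApiId="a1b2c3d4e5",   # placeholder REST API ID
    resourceId="abc123",      # placeholder resource ID
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "AWS_IAM"},
    ],
)

# Redeploy so the updated method configuration takes effect on the stage.
apigw.create_deployment(restApiId="a1b2c3d4e5", stageName="prod")
```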
"```config from cloud.resource where api.name = 'azure-dns-recordsets' AND json.rule = type contains CNAME and properties.CNAMERecord.cname contains ""azurewebsites.net"" as X; config from cloud.resource where api.name = 'azure-app-service' as Y; filter 'not ($.Y.properties.hostNames contains $.X.properties.CNAMERecord.cname) '; show X;```","Azure DNS Zone having dangling DNS Record vulnerable to subdomain takeover associated with Web App Service This policy identifies DNS records within an Azure DNS zone that point to Azure Web App Services that no longer exist. A dangling DNS attack happens when a DNS record points to a cloud resource that has been deleted or is inactive, making the subdomain vulnerable to takeover. An attacker can exploit this by creating a new resource with the same name and taking control of the subdomain to serve malicious content. This allows attackers to host harmful content under your subdomain, which could lead to phishing attacks, data breaches, and damage to your reputation. The risk arises because the DNS record still references a non-existent resource, which unauthorized individuals can re-associate with their own resources. As a security best practice, it is recommended to routinely audit DNS zones and remove or update DNS records pointing to non-existing Web App Services. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal and search for 'DNS zones'\n2. Select 'DNS zones' from the search results\n3. Select the DNS zone associated with the reported DNS record\n4. On the left-hand menu, under 'DNS Management,' select 'Recordsets'\n5. Locate and select the reported DNS record\n6. Update or remove the DNS Record if no longer necessary." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING and (masterAuth.clientKey exists or masterAuth.clientCertificate exists)```,"GCP Kubernetes Engine Cluster Client Certificate is not disabled This policy identifies Kubernetes Engine clusters that have enabled Client Certificate authentication. A client certificate is a base64-encoded public certificate used by clients to authenticate to the cluster endpoint. GKE manages authentication via gcloud using the OpenID Connect token method, setting up the Kubernetes configuration, getting an access token, and keeping it up to date. So it is recommended not to enable Client Certificate authentication, to avoid additional management overhead of key management and rotation. Note: For GKE Autopilot clusters, legacy authentication methods cannot be used. Basic authentication is deprecated and has been removed in GKE 1.19 and later. Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication#legacy-auth This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes Clusters Client Certificate can be disabled only at the time of the creation of clusters. So to fix this alert, create a new cluster with Client Certificate disabled and then migrate all required cluster data or containers from the reported cluster to this new cluster.\n\nTo create the cluster with Client Certificate disabled, perform the following steps:\n1. Login to GCP Portal\n2. 
Go to Kubernetes Engine (Left Panel)\n3. Click on 'Clusters' (Left Panel)\n4. On page 'Kubernetes clusters', click on 'CREATE'\n5. Select the type of cluster by clicking on the 'CONFIGURE' button\n6. Select ‘Security’ tab (Left Panel)\n7. Under the 'Legacy security options' section, ensure 'Issue a client certificate' is not set\n8. Provide all required cluster data or containers from the reported cluster to this new cluster\n9. Click on 'CREATE' to create a new cluster\n10. Once the cluster is created, delete the alerted cluster to resolve the alert." ```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-file-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;```,"Azure Storage account diagnostic setting for file is disabled This policy identifies Azure Storage account files that have diagnostic logging disabled. By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account files. These logs provide valuable insights into the operations, performance, and security of the storage account files. As a best practice, it is recommended to enable diagnostic logs on all storage account files. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the file resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'." "```config from cloud.resource where cloud.accountgroup = 'Flowlog-sol' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains ""sol-test"" ```","Copy of Sol-test config policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = objectEventsEnabled is false```,"OCI Object Storage bucket does not emit object events This policy identifies the OCI Object Storage buckets that are disabled with object events emission. Monitoring and alerting on object events of bucket objects will help in identifying changes bucket objects. It is recommended that buckets should be enabled to emit object events. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Next to Emit Object Events, click Edit.\n5. In the dialog box, select EMIT OBJECT EVENTS (to enable).\n6. Click Save Changes.." 
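The OCI Object Storage remediation above can also be performed with the OCI Python SDK. The sketch below is a hedged example, assuming a configured ~/.oci/config profile and that UpdateBucketDetails exposes the object_events_enabled flag checked by this policy; the bucket name is a placeholder.

```python
# Hedged sketch: enable object event emission on a bucket via the OCI Python SDK.
# Assumes a configured ~/.oci/config; "my-bucket" is a placeholder name.
import oci

config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

# Object Storage namespace for the tenancy.
namespace = object_storage.get_namespace().data

object_storage.update_bucket(
    namespace_name=namespace,
    bucket_name="my-bucket",
    update_bucket_details=oci.object_storage.models.UpdateBucketDetails(
        object_events_enabled=True
    ),
)
```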
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals does not equal Microsoft.Network/publicIPAddresses/delete and properties.condition.allOf[?(@.field=='category')].['equals'] contains Administrative"" as X; count(X) less than 1```","Azure Activity Log Alert does not exist for Delete Public IP Address rule This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' as Y; filter '$.X.description.securityGroups[*] contains $.Y.groupId and $.Y.ipPermissions[*] is empty'; show X;```,"AWS Elastic Load Balancer (ELB) has security group with no inbound rules This policy identifies Elastic Load Balancers (ELB) which have security group with no inbound rules. A security group with no inbound rule will deny all incoming requests. ELB security groups should have at least one inbound rule, ELB with no inbound permissions will deny all traffic incoming to ELB; in other words, the ELB is useless without inbound permissions. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on the security group, it will open Security Group properties in a new tab in your browser\n6. Click on the 'Inbound Rules'\n7. If there are no rules, click on 'Edit rules', add an inbound rule according to your ELB functional requirement\n8. Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case ""PowerState/running"" and ['properties.securityProfile'].['securityType'] equal ignore case ""TrustedLaunch"" and ['properties.securityProfile'].['uefiSettings'].['vTpmEnabled'] is false```","Azure Virtual Machine vTPM feature is disabled This policy identifies Virtual Machines that have Virtual Trusted Platform Module (vTPM) feature disabled. Virtual Trusted Platform Module (vTPM) provide enhanced security to the guest operating system. It is recommended to enable virtual TPM device on supported virtual machines to facilitate measured Boot and other OS security features that require a TPM. NOTE: This assessment only applies to trusted launch enabled virtual machines. You can't enable trusted launch on existing virtual machines that were initially created without it. To know more, refer https://docs.microsoft.com/azure/virtual-machines/trusted-launch?WT.mc_id=Portal-Microsoft_Azure_Security This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Virtual machines dashboard\n3. 
Click on the reported Virtual machine\n4. Select 'Configuration' under 'Settings' from left panel \nNOTE: Enabling vTPM will trigger an immediate SYSTEM REBOOT.\n5. On the 'Configuration' page, check 'vTPM' under 'Security type' section\n6. Click 'Save'." "```config from cloud.resource where api.name = 'gcloud-compute-target-ssl-proxy' as X; config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' as Y; filter ""$.X.sslPolicy does not exist or ($.Y.profile equals COMPATIBLE and $.Y.selfLink contains $.X.sslPolicy) or ( ($.Y.profile equals MODERN or $.Y.profile equals CUSTOM) and $.Y.minTlsVersion does not equal TLS_1_2 and $.Y.selfLink contains $.X.sslPolicy ) or ( $.Y.profile equals CUSTOM and ( $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_GCM_SHA256 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_GCM_SHA384 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_3DES_EDE_CBC_SHA ) and $.Y.selfLink contains $.X.sslPolicy ) ""; show X;```","GCP Load Balancer SSL proxy permits SSL policies with weak cipher suites This policy identifies GCP SSL Load Balancers that permit SSL policies with weak cipher suites. GCP default SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the widest range of insecure cipher suites. To prevent usage of insecure features, SSL policies should use at least TLS 1.2 with the MODERN profile; or the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or a CUSTOM profile that does not support any of the following features: TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the target SSL Proxy Load Balancer does not have any SSL policy configured, updating the proxy with either a new or an existing secured SSL policy is recommended.\n\nThe 'GCP default' SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the broadest range of insecure cipher suites and is not modifiable. If this SSL policy is attached to the target SSL Proxy Load Balancer, updating the proxy with a more secured SSL policy is recommended.\n\nTo create a new SSL policy, refer to the following URL:\nhttps://cloud.google.com/load-balancing/docs/use-ssl-policies#creating_ssl_policies\n\nTo modify the existing insecure SSL policy attached to the Target SSL Proxy:\n1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'load balancing components view' hyperlink at bottom of page to view target proxies\n5. Go to 'TARGET PROXIES' tab and Click on the reported SSL target proxy\n6. Note the 'Backend service' name.\n7. Click on the hyperlink under 'In use by'\n8. Note the 'External IP address'\n9. Select Load Balancing (Left Panel) and click on the SSL load balancer with same name as previously noted 'Backend service' name.\n10. In frontend section, consider the rule where 'IP:Port' matches the previously noted 'External IP address'.\n11. Click on the 'SSL Policy' of the rule. This will take you to the alert causing SSL policy.\n12. Click on 'EDIT'\n13. 
Set 'Minimum TLS Version' to TLS 1.2 and set 'Profile' to Modern or Restricted.\n14. Alternatively, if you use the profile 'Custom', make sure that the following features are disabled:\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n15. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.minimumPasswordLength less than 14'```,"OCI IAM password policy for local (non-federated) users does not have minimum 14 characters This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a minimum of 14 characters in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Type the number in range 14-100 into the box below the text: MINIMUM PASSWORD LENGTH (IN CHARACTERS).\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(20,20) or destinationPortRanges[*] contains _Port.inRange(20,20) ))] exists```","Azure Network Security Group allows all traffic on FTP-Data (TCP Port 20) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on FTP-Data (TCP Port 20). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict FTP-Data solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." 
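Where the FTP-Data rule above should be kept but tightened rather than deleted, the change can also be scripted. The following is a hedged sketch using the azure-mgmt-network SDK to rewrite an inbound rule so its source is a known address range instead of Internet or any; the subscription ID, resource group, NSG name, rule name, source range, and priority are all placeholders, and the exact model import path may vary by SDK version.

```python
# Hedged sketch: restrict an NSG inbound rule for FTP-Data (TCP/20) to a known source.
# Every name, ID, and address range below is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    name="allow-ftp-data-known-hosts",
    protocol="Tcp",
    source_address_prefix="203.0.113.0/24",  # known source range instead of '*'/Internet
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="20",
    access="Allow",
    direction="Inbound",
    priority=310,
)

# Create or update the rule on the reported network security group.
poller = client.security_rules.begin_create_or_update(
    "my-resource-group", "my-nsg", "allow-ftp-data-known-hosts", rule
)
poller.result()
```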
"```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ( $.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="" ) and ( $.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="" ) and $.X.filter contains ""gce_route"" and ( $.X.filter contains ""protoPayload.methodName="" or $.X.filter contains ""protoPayload.methodName ="" ) and ( $.X.filter does not contain ""protoPayload.methodName!="" and $.X.filter does not contain ""protoPayload.methodName !="" ) and $.X.filter contains ""beta.compute.routes.patch"" and $.X.filter contains ""beta.compute.routes.insert""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for VPC network route patch and insert This policy identifies GCP accounts which do not have a log metric filter and alert for VPC network route patch and insert events. Monitoring network routes patching and insertion activities will help in identifying VPC traffic flows through an expected path. It is recommended to create a metric filter and alarm to detect activities related to the patch and insertion of VPC network routes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""gce_route"" AND protoPayload.methodName=""beta.compute.routes.patch"" OR protoPayload.methodName=""beta.compute.routes.insert""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equals ""Microsoft.Storage""```","Azure Storage account Encryption Customer Managed Keys Disabled This policy identifies Azure Storage account which has Encryption with Customer Managed Keys Disabled. By default all data at rest in Azure Storage account is encrypted using Microsoft Managed Keys. It is recommended to use Customer Managed Keys to encrypt data in Azure Storage accounts for better control on Storage account data. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and Click on reported storage account\n3. Under the Settings menu, click on Encryption\n4. 
Select Customer Managed Keys\n- Choose 'Enter key URI' and Enter 'Key URI'\nOR\n- Choose 'Select from Key Vault', Enter 'Key Vault' and 'Encryption Key'\n5. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and autoMinorVersionUpgrade is false and engine does not contain docdb and engine does not contain neptune```,"AWS RDS minor upgrades not enabled When Amazon Relational Database Service (Amazon RDS) supports a new version of a database engine, you can upgrade your DB instances to the new version. There are two kinds of upgrades: major version upgrades and minor version upgrades. Minor upgrades helps maintain a secure and stable RDS with minimal impact on the application. For this reason, we recommend that your automatic minor upgrade is enabled. Minor version upgrades only occur automatically if a minor upgrade replaces an unsafe version, such as a minor upgrade that contains bug fixes for a previous version. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Enable RDS auto minor version upgrades.\n\n1. Go to the AWS console RDS dashboard.\n2. In the navigation pane, choose Instances.\n3. Select the database instance you wish to configure.\n4. From the 'Instance actions' menu, select Modify.\n5. Under the Maintenance section, choose Yes for Auto minor version upgrade.\n6. Select Continue and then Modify DB Instance.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-events-eventbus' AND json.rule = Policy does not exist```,"AWS EventBridge event bus with no resource-based policy attached This policy identifies AWS EventBridge event buses with no resource-based policy attached. AWS EventBridge is a serverless event bus service that enables businesses to quickly and easily integrate applications, services, and data across multiple cloud environments. By default, an EventBridge custom event bus lacks a resource-based policy associated with it, which allows principals in the account to access the event bus.  It is recommended to attach a resource based policy to the event bus to limit access scope to fewer entities. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To attach a resource based policy to the event bus, please follow the below steps:\n\n1. Log into the AWS console and navigate to the EventBridge dashboard\n2. In the left navigation pane, choose 'Event buses'\n3. Select the event bus reported\n4. Under the 'Permissions' tab, click on 'Manage permissions'\n5. Add the resource based policy JSON with permissions to grant on the event bus\n6. Click on 'Update'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case ""/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace""```","bboiko test 03 - policy This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'azure-event-hub-namespace' AND json.rule = properties.disableLocalAuth is false as X; config from cloud.resource where api.name = 'azure-event-hub' AND json.rule = properties.status equal ignore case ACTIVE and authorizationRules[*] is empty as Y; filter '$.Y.id contains $.X.name'; show Y;```,"Azure Event Hub Instance not defined with authorization rule This policy identifies Azure Event Hub Instances that are not defined with authorization rules. If the Azure Event Hub Instance authorization rule is not defined, there is a heightened risk of unauthorized access to the event hub data and resources. This could potentially lead to unauthorized data retrieval, tampering, or disruption of the event hub operations. Defining proper authorization rules helps mitigate these risks by controlling and restricting access to the event hub resources. As a best practice, it is recommended to define the least privilege security model access policies at Event Hub Instance. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Event Hubs'\n3. Select the Event Hubs Namespace from the list which has the reported Event Hub instance.\n4. Click on 'Event Hubs' under 'Entities' section\n5. Click on the reported Event Hub instance\n6. Select 'Shared access policies' under 'Settings' section\n7. Click on '+Add'\n8. Enter 'Policy name' and the required access\n9. Click on 'Create'." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""containers-kubernetes"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""resourceType"",""serviceInstance"",""namespace""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Kubernetes Service This policy identifies IBM Cloud users with overly permissive Kubernetes Administrative role. When a user having policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. 
Go to 'Access' tab and under the 'Access policies' section, click on three dots on the right corner of a row for the policy which is having Administrator permission on 'Kubernetes Service'.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = 'iam.bindings[*] size greater than 0 and iam.bindings[*].members[*] any equal allAuthenticatedUsers'```,"GCP Storage buckets are publicly accessible to all authenticated users This policy identifies the buckets which are publicly accessible to all authenticated users. Enabling public access to Storage Buckets enables anybody with a web association to access sensitive information that is critical to business. Access over a whole bucket is controlled by IAM. Access to individual objects within the bucket is controlled by its ACLs. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Storage (Left Panel)\n3. Click Browse\n4. Choose the identified Storage bucket whose ACL needs to be modified\n5. Click on SHOW INFO PANEL button\n6. Check all the ACL groups and make sure that the none of them are set to 'allAuthenticatedUsers'." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc' AND json.rule = classic_access is true```,"IBM Cloud Virtual Private Cloud (VPC) classic access is enabled This policy identifies IBM Virtual Private Cloud where access to classic resources are enabled. If the classic access is enabled one can access & communicate IBM Cloud classic infrastructure & network from the VPC. Classic access should be disabled initially. This is applicable to ibm cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Refer to https://cloud.ibm.com/docs/vpc?topic=vpc-deleting-vpc-resources&interface=ui to safely delete the affected VPC. Note- A VPC must be set up for classic access when it is created & it cannot be updated to add or remove classic access.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-vnet-list' AND json.rule = ['properties.provisioningState'] equals Succeeded and (['properties.ddosProtectionPlan'].['id'] does not exist or ['properties.enableDdosProtection'] is false)```,"Azure Virtual network not protected by DDoS Protection Standard This policy identifies Virtual networks not protected by DDoS Protection Standard. Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns exhausting an application's resources, making the application unavailable to legitimate users. Azure DDoS Protection Standard provides enhanced DDoS mitigation features to defend against DDoS attacks. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Virtual networks dashboard \n3. Click on the reported Virtual network\n4. 
Under the 'Settings', click on 'DDoS protection'\nNOTE: Before enabling DDoS Protection, if no DDoS protection plan exists for your organization yet, configure one by following the instructions at the URL below:\nhttps://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection#create-a-ddos-protection-plan\n5. Select 'Enable' for 'DDoS Protection Standard' and choose 'DDoS protection plan' from the dropdown or enter the DDoS protection plan resource ID.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-cache-clusters' AND json.rule = engine contains redis and autoMinorVersionUpgrade is false```,"AWS ElastiCache Redis cluster automatic version upgrade disabled This policy identifies the ElastiCache Redis clusters that do not have the auto minor version upgrade feature enabled. An ElastiCache Redis cluster is a fully managed in-memory data store used to cache frequently accessed data, reducing latency and improving application performance. Failure to enable automatic minor upgrades can leave your cache clusters vulnerable to security risks stemming from outdated software. It is recommended to enable automatic minor version upgrades on ElastiCache Redis clusters to receive timely patches and updates, reduce the risk of security vulnerabilities, and improve overall performance and stability. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console. Navigate to the ElastiCache Dashboard\n2. Click on 'Redis caches' under the 'Resources' section\n3. Select the reported Redis cluster\n4. Click on the 'Modify' button\n5. In the 'Modify' page, under the 'Maintenance' section\n6. Find the 'Auto upgrade minor versions' setting and click on 'Enable'\n7. Click on 'Preview changes'. Under 'Apply immediately', select 'Yes'\n8. Click on 'Modify'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-appsync-graphql-api' AND json.rule = logConfig.fieldLogLevel is not member of ('ERROR','ALL')```","AWS AppSync has field-level logging disabled This policy identifies an AWS AppSync GraphQL API not configured with field-level logging set to either 'ERROR' or 'ALL'. AWS AppSync is a managed GraphQL service that simplifies the development of scalable APIs. Field-level logging in AWS AppSync lets you capture detailed logs for specific fields in your GraphQL API. Without enabling field-level logging, the security monitoring and debugging capabilities may be compromised, increasing the risk of undetected threats and vulnerabilities. It is recommended to enable field-level logging to ensure granular visibility into API requests, aiding in security and compliance with regulatory requirements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To turn on field-level logging on an AWS AppSync GraphQL API,\n\n1. Sign in to the AWS Management Console.\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. In the navigation pane, choose 'AWS AppSync' under the 'Front-end Web & Mobile' section.\n4. On the APIs page, choose the name of a reported GraphQL API.\n5. 
On your API's homepage, in the navigation pane, choose Settings.\n6. Under Logging, Turn on Enable Logs.\n7. Under Field resolver log level, choose your preferred field-level logging level Error or All according to your business requirements.\n8. Under Create or use an existing role, choose New role to create a new AWS Identity and Access Management (IAM) that allows AWS AppSync to write logs to CloudWatch. Or, choose the Existing role to select the Amazon Resource Name (ARN) of an existing IAM role in your AWS account.\n9. Choose Save.." ```config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = webACLId is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-global-web-acl-resource' AND json.rule =(webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.webACLId'; show X;```,"AWS CloudFront attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS CloudFront attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, CloudFront attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer below URL, https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. Note down the associated AWS WAF web ACL\n6. Go to the noted WAF web ACL in AWS WAF & Shield Service\n7. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n8. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n9. Click on 'Add rules'." "```config from cloud.resource where cloud.type = 'aws' AND cloud.service = 'Amazon EC2' AND api.name = 'aws-ec2-describe-instances' AND json.rule = securityGroups[*].groupName equals ""default"" as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = groupName equals ""default"" as Y; filter '$.X.securityGroups[*].groupId equals $.Y.groupId';show Y;```","Naveed instance-with-default-security-group This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
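A minimal Python sketch of the CloudFront-to-WAFv2 correlation behind the Log4j AMR policy above, assuming distributions and web ACLs are plain dicts with the field names referenced in the query (webACLId, webACL.arn, rules[*].statement.managedRuleGroupStatement.name, postProcessFirewallManagerRuleGroups); the function names are illustrative:

```
# Illustrative sketch: flags CloudFront distributions whose attached WAFv2 web ACL
# is missing either of the two AWS Managed Rules named in the policy above.

REQUIRED_AMR = {"AWSManagedRulesAnonymousIpList", "AWSManagedRulesKnownBadInputsRuleSet"}


def amr_names(web_acl: dict) -> set:
    """Collect managed rule group names from regular rules and Firewall Manager rule groups."""
    names = set()
    for rule in web_acl.get("rules", []):
        stmt = rule.get("statement", {}).get("managedRuleGroupStatement", {})
        if stmt.get("name"):
            names.add(stmt["name"])
    for group in web_acl.get("postProcessFirewallManagerRuleGroups", []):
        fm = group.get("firewallManagerStatement", {})
        if fm.get("name"):
            names.add(fm["name"])
    return names


def distributions_missing_log4j_amr(distributions: list, web_acls: list) -> list:
    """Return distributions whose attached web ACL lacks one of the required rule groups."""
    acls_by_arn = {acl["webACL"]["arn"]: acl["webACL"] for acl in web_acls}
    flagged = []
    for dist in distributions:
        acl = acls_by_arn.get(dist.get("webACLId"))
        if acl is not None and not REQUIRED_AMR.issubset(amr_names(acl)):
            flagged.append(dist)
    return flagged
```

The dictionary lookup reproduces the RQL join, which keys the web ACL's ARN against the distribution's webACLId.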
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-run-services-list' AND json.rule = status.conditions[?any(type equals Ready and status equals True)] exists and status.conditions[?any(type equals RoutesReady and status equals True)] exists and iamPolicy.bindings[?any(role equals roles/run.invoker and members is member of (allUsers, allAuthenticatedUsers))] exists```","GCP Cloud Run service is publicly accessible This policy identifies GCP Cloud Run services that are publicly accessible. Granting Cloud Run Invoker permission to 'allUsers' or 'allAuthenticatedUsers' allows anyone to access the Cloud Run service over internet. Such access might not be desirable if sensitive data is stored at the location. As security best practice it is recommended to remove public access and assign the least privileges to the GCP Cloud Run service according to requirements. Note: For public API/website Cloud Run service will permit 'Cloud Run Invoker' to 'allUsers'. Refer to the following link for common use cases of authentication to the Cloud Run service. Link: https://cloud.google.com/run/docs/authenticating/overview This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', refer to the following URL:\nhttps://cloud.google.com/run/docs/securing/managing-access#remove-principals." "```config from cloud.resource where api.name= 'aws-cloudtrail-describe-trails' AND json.rule = 'isMultiRegionTrail is true and includeGlobalServiceEvents is true' as X; config from cloud.resource where api.name= 'aws-cloudtrail-get-trail-status' AND json.rule = 'status.isLogging equals true' as Y; config from cloud.resource where api.name= 'aws-cloudtrail-get-event-selectors' AND json.rule = eventSelectors[?any( dataResources[?any( type contains ""AWS::S3::Object"" and values contains ""arn:aws:s3"")] exists and readWriteType is member of (""All"",""Writeonly"") and includeManagementEvents is true)] exists as Z; filter '($.X.trailARN equals $.Z.trailARN) and ($.X.name equals $.Y.trail)'; show X; count(X) less than 1```","AWS S3 Buckets with Object-level logging for write events not enabled This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_duration or settings.databaseFlags[?any(name contains log_duration and value contains off)] exists)""```","GCP PostgreSQL instance database flag log_duration is not set to on This policy identifies PostgreSQL database instances in which database flag log_duration is not set to on. Enabling the log_duration setting causes the duration of each completed statement to be logged. Monitoring the time taken to execute the queries can be crucial in identifying any resource-hogging queries and assessing the performance of the server. Further steps such as load balancing and the use of optimized queries can be taken to ensure the performance and stability of the server. It is recommended to set log_duration as on. 
This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_duration' from the drop-down menu and set the value as 'on'\nOR\nIf the flag has been set to other than on, Under 'Customize your instance', In 'Flags' section choose the flag 'log_duration' and set the value as 'on'\n6. Click on 'DONE' and then 'SAVE'." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case kubernetes and state equal ignore case ""normal"" and features.keyProtectEnabled is false```","IBM Cloud Kubernetes secrets data is not encrypted with bring your own key This policy identifies IBM Cloud kubernetes clusters for which secrets data have encryption using key protect disabled. Kubernetes Secret data is encoded in the base64 format and stored as plain text in etcd. Etcd is a key-value store used as a backing store for Kubernetes cluster state and configuration data. Storing Secrets as plain text in etcd is risky, as they can be easily compromised by attackers and used to access systems. It is recommended that secrets data is encrypted for better security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to your IBM Cloud console.\n2. To view the list of services that are available on IBM Cloud, click 'Catalog'.\n3. From the 'All Categories' navigation pane, click the 'Security' category.\n4. From the list of services, click the Key Protect tile.\n5. Select a service plan, and click Create to provision an instance of Key Protect in the\naccount, region, and resource group where you are logged in.\n6. To view a list of your resources, go to 'Menu > Resource List'.\n7. From your IBM Cloud resource list, select your provisioned instance of Key Protect.\n8. To create a new key, click 'Add +' and select the 'Create a key' window. Specify the\nkey's name and key type.\n9. When you are finished filling out the key's details, click 'Add key' to confirm.\n10. From the Clusters console, select the cluster that you want to enable encryption for.\n11. From the 'Overview' tab, in the 'Integrations > Key management service' section, click\n'Enable'.\n12. Select the 'Key management service instance' and 'Root key' that you want to use\nfor the encryption.\n13. Click 'Enable'.\n14. Verify that the KMS enablement process is finished. From the 'Summary > Master\nstatus' section, you can check the progress.\n15. After the KMS provider is enabled in the cluster, data and new secrets that\nare created in the cluster are automatically encrypted by using your root key.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals ""ACTIVE"" and ( gceSetup.metadata.proxy-mode equals ""mail"" or gceSetup.metadata.proxy-user-mail exists )```","GCP Vertex AI Workbench Instance JupyterLab interface access mode set to single user This policy identifies GCP Vertex AI Workbench Instances with JupyterLab interface access mode set to single user. 
Vertex AI Workbench Instance can be accessed using the web-based JupyterLab interface. Access mode controls the control access to this interface. Allowing access to only a single user could limit collaboration, increase chances of credential sharing, and hinder security audits and reviews of the resource. It is recommended to avoid single user access and make use of the service account access mode for workbench instances. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Access mode cannot be changed for existing Vertex AI Workbench Instances. A new Vertex AI Workbench instance should be created.\n\nTo create a new Vertex AI Workbench instance with access mode set to service account, please refer to the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Select 'INSTANCES' tab\n5. Click 'CREATE NEW'\n6. Click 'ADVANCED OPTIONS'\n7. Configure the instance as required\n8. Go to 'IAM and security' tab\n9. Select 'Service account'\n10. Click 'CREATE'." ```config from cloud.resource where api.name = 'aws-elasticache-cache-clusters' as X; config from cloud.resource where api.name = 'aws-cache-engine-versions' as Y; filter 'not( $.X.engine equals $.Y.engine and $.Y.cacheEngineVersionDescription contains $.X.engineVersion)'; show X;```,"AWS ElastiCache cluster not using supported engine version This policy identifies AWS Elastic Redis or Memcache cluster not using the supported engine version. AWS ElastiCache simplifies deploying, operating, and scaling Redis and Memcached in-memory caches in the cloud. An ElastiCache cluster not using a supported engine version runs on outdated Redis or Memcached versions. These versions may be end-of-life (EOL) or lack current updates and patches from AWS. This exposes the cluster to unpatched vulnerabilities, compliance risks, and potential service instability. It is recommended to regularly update your ElastiCache clusters to the latest supported engine versions as recommended by AWS. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To Upgrade the AWS ElastiCache cluster perform the following actions:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on 'Redis caches' under the 'Resources' section\n5. Select reported Redis cluster\n6. Click on 'Modify' button\n7. In the 'Modify Cluster' dialog box, Under the 'Cluster settings' section \n8. Select 'Engine version' from the drop down according to your requirements.\n9. select 'Parameter groups' family that is compatible with the new engine version.\n10. Click on 'Preview Changes'\n11. Select Yes checkbox under 'Apply Immediately' , to apply the configuration changes immediately. If Apply Immediately is not selected, the changes will be processed during the next maintenance window.\n12. Click on 'Modify'." 
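A minimal Python sketch of the join behind the ElastiCache supported-engine-version policy above, assuming clusters and supported engine versions are plain dicts with the field names referenced in the query (engine, engineVersion, cacheEngineVersionDescription); the function and sample names are illustrative:

```
# Illustrative sketch: flags ElastiCache clusters whose (engine, engineVersion) has no
# match among the supported engine version descriptions, mirroring the policy's filter.

def unsupported_clusters(clusters: list, supported_versions: list) -> list:
    """Return clusters with no matching (engine, version) entry in the supported list."""
    flagged = []
    for cluster in clusters:
        supported = any(
            cluster.get("engine") == v.get("engine")
            and cluster.get("engineVersion", "") in v.get("cacheEngineVersionDescription", "")
            for v in supported_versions
        )
        if not supported:
            flagged.append(cluster)
    return flagged


# Example: a redis 5.0.6 cluster is flagged when only redis 7.x descriptions remain supported.
clusters = [{"engine": "redis", "engineVersion": "5.0.6"}]
versions = [{"engine": "redis", "cacheEngineVersionDescription": "redis 7.1.0"}]
print(unsupported_clusters(clusters, versions))  # [{'engine': 'redis', 'engineVersion': '5.0.6'}]
```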
```config from cloud.resource where api.name = 'azure-spring-cloud-service' AND json.rule = properties.powerState equals Running as X; config from cloud.resource where api.name = 'azure-spring-cloud-app' AND json.rule = properties.provisioningState equals Succeeded and identity does not exist as Y; filter '$.X.name equals $.Y.serviceName'; show Y;```,"Azure Spring Cloud App system-assigned managed identity is disabled This policy identifies Azure Spring Cloud apps in which system-assigned managed identity is disabled. System-assigned managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in a code increases the threat surface in case of exploitation and also managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to have the system-assigned managed identity to your Spring Cloud apps. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable system-assigned managed identity on an existing Azure Spring Cloud app, follow the below URL:\nhttps://docs.microsoft.com/en-in/azure/spring-cloud/how-to-enable-system-assigned-managed-identity." ```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = hostedZone.config.privateZone is false and resourceRecordSet[?any( type equals CNAME and resourceRecords[*].value contains s3-website )] exists as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter 'not ($.X.resourceRecordSet[*].name intersects $.Y.bucketName)'; show X;```,"AWS Route53 Hosted Zone having dangling DNS record with subdomain takeover risk associated with AWS S3 Bucket This policy identifies AWS Route53 Hosted Zones which have dangling DNS records with subdomain takeover risk associated with AWS S3 Bucket. A Route53 Hosted Zone having a CNAME entry pointing to a non-existing S3 bucket will have a risk of these dangling domain entries being taken over by an attacker by creating a similar S3 bucket in any AWS account which the attacker owns / controls. Attackers can use this domain to do phishing attacks, spread malware and other illegal activities. As a best practice, it is recommended to delete dangling DNS records entry from your AWS Route 53 hosted zones. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Identify DNS record entry pointing to a non-existing S3 bucket resource.\n\nTo remove DNS record entry, follow steps given in following URL:\nhttps://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-deleting.html." 
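A minimal Python sketch of the dangling-DNS check in the Route53/S3 policy above, assuming hosted zones and bucket ACL entries are plain dicts with the field names referenced in the query (resourceRecordSet, resourceRecords[*].value, bucketName); the function name is illustrative:

```
# Illustrative sketch: finds CNAME records in public hosted zones that point at S3
# website endpoints but have no bucket of the same name in the account, i.e. records
# at risk of subdomain takeover as described in the policy above.

def dangling_s3_records(hosted_zones: list, buckets: list) -> list:
    """Return (zone name, record name) pairs whose s3-website CNAME has no backing bucket."""
    bucket_names = {b.get("bucketName", "").rstrip(".") for b in buckets}
    dangling = []
    for zone in hosted_zones:
        if zone.get("hostedZone", {}).get("config", {}).get("privateZone"):
            continue  # only public zones are in scope
        for record in zone.get("resourceRecordSet", []):
            values = [v.get("value", "") for v in record.get("resourceRecords", [])]
            points_at_s3 = record.get("type") == "CNAME" and any("s3-website" in v for v in values)
            if points_at_s3 and record.get("name", "").rstrip(".") not in bucket_names:
                dangling.append((zone.get("hostedZone", {}).get("name"), record.get("name")))
    return dangling
```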
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-network-acls' AND json.rule = ""entries[?any(egress equals false and ((protocol equals 6 and ((portRange.to equals 22 or portRange.to equals 3389 or portRange.from equals 22 or portRange.from equals 3389) or (portRange.to > 22 and portRange.from < 22) or (portRange.to > 3389 and portRange.from < 3389))) or protocol equals -1) and (cidrBlock equals 0.0.0.0/0 or ipv6CidrBlock equals ::/0) and ruleAction equals allow)] exists""```","AWS Network ACLs that allow ingress from 0.0.0.0/0 to remote server administration ports This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'websiteConfiguration exists'```,"AWS S3 buckets with configurations set to host websites This policy identifies AWS S3 buckets that are configured to host websites. To host a website on AWS S3 you should configure a bucket as a website. By frequently surveying these S3 buckets, you can ensure that only authorized buckets are enabled to host websites. Make sure to disable static website hosting for unauthorized S3 buckets. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console\n2. Goto S3 under Services\n3. Choose the reported bucket\n4. Goto Properties tab\n5. Click on Static website hosting\n6. Click on Disable website hosting\n7. Click on Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```,"test perf of AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"BikramTest-AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. 
S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = 'attributes.accessLog.enabled is false'```,"AWS Elastic Load Balancer (Classic) with access log disabled This policy identifies Classic Elastic Load Balancers which have access log disabled. When Access log enabled, Classic load balancer captures detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable access logging for Elastic Load Balancer (Classic), follow below mentioned URL:\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration equals $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is false)'; show X;```,"AWS EMR cluster is not enabled with data encryption at rest This policy identifies AWS EMR clusters for which data encryption at rest is not enabled. Encryption of data at rest is required to prevent unauthorized users from accessing the sensitive information available on your EMR clusters and associated storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown.\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. For encryption At Rest select the required encryption type ('S3 encryption'/'Local disk encryption'/both) and follow below link for enabling the same.\n8. https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n\n9. Click on 'Create' button.\n10. On the left menu of EMR dashboard Click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. 
On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop-down, select the name of the security configuration created in steps 4 to 8, then click 'Create Cluster'.\n15. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of the EMR dashboard, click 'Clusters', and from the list of clusters select the source cluster which is alerted.\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.." "```config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' AND json.rule = (profile equals MODERN or profile equals CUSTOM) and minTlsVersion does not equal ""TLS_1_2"" as X; config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' AND json.rule = sslPolicy exists as Y; filter ""$.X.selfLink contains $.Y.sslPolicy""; show Y;```","GCP HTTPS Load balancer is configured with SSL policy having TLS version 1.1 or lower This policy identifies HTTPS Load balancers that are configured with an SSL policy having TLS version 1.1 or lower. As a best security practice, use TLS 1.2 as the minimum TLS version in your load balancers' SSL security policies. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'load balancing components view' hyperlink at the bottom of the page to view target proxies\n5. Click on 'TARGET PROXIES' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'Load balancer'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. Select an SSL policy that uses TLS 1.2 or create a new SSL policy with TLS 1.2 as the Minimum TLS version from the dropdown for 'SSL policy'\n11. Click on 'DONE'\n12. Click on 'UPDATE'." ```config from cloud.resource where api.name = 'aws-sagemaker-notebook-instance' AND json.rule = notebookInstanceStatus equals InService and kmsKeyId exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.kmsKeyId equals $.Y.key.keyArn and $.Y.keyMetadata.keyManager contains AWS'; show X;```,"AWS SageMaker notebook instance not encrypted using Customer Managed Key This policy identifies SageMaker notebook instances that are not encrypted using Customer Managed Key. SageMaker notebook instances should be encrypted with Amazon KMS Customer Master Keys (CMKs) instead of AWS managed-keys in order to have more granular control over the data-at-rest encryption/decryption process and meet compliance requirements. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS SageMaker notebook instance encryption cannot be modified once it is created. You need to create a new notebook instance with encryption using a custom KMS key; migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a new AWS SageMaker notebook instance,\n1. 
Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, within the 'Permissions and encryption' section,\nFrom the 'Encryption key - optional' dropdown list, choose a custom KMS key for the new SageMaker notebook instance.\n5. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and Choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu, select the 'Stop' option, and when instance stops, select the 'Delete' option.\n5. Within Delete dialog box, click the Delete button to confirm the action.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = ((purpose does not equal ENCRYPT_DECRYPT) or (purpose equals ENCRYPT_DECRYPT and primary.state equals ENABLED)) and iamPolicy.bindings[*].members contains allUsers or iamPolicy.bindings[*].members contains allAuthenticatedUsers```,"GCP KMS crypto key is anonymously accessible This policy identifies GCP KMS crypto keys that are anonymously accessible. Granting permissions to 'allUsers' or 'allAuthenticatedUsers' allows anyone to access the KMS key. As a security best practice, it is recommended not to bind such members to KMS IAM policy. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Granting/revoking access for the KMS key is only supported by CLI. To remediate run the below CLI command. \n\n1. List all the cryptokeys which has overly permissive IAM bindings,\n\ngcloud asset search-all-iam-policies --asset-types=cloudkms.googleapis.com/CryptoKey --query=""policy:(allUsers OR allAuthenticatedUsers)"" \n\n2. Remove IAM policy binding for a KMS key to remove access to allUsers and allAuthenticatedUsers using the below command.\n\ngcloud kms keys remove-iam-policy-binding [key_name] --keyring='[key_ring_name]' --location='[location]' --member='[allUsers/allAuthenticatedUsers]' --role='[role]'\n\nRefer to the following URL for more information on “remove-iam-policy-binding” command.\nhttps://cloud.google.com/sdk/gcloud/reference/projects/remove-iam-policy-binding." ```config from cloud.resource where api.name = 'aws-rds-db-cluster-parameter-group' AND json.rule = parameters.log_min_duration_statement.ParameterValue does not exist or parameters.log_min_duration_statement.ParameterValue equals -1 as X; config from cloud.resource where api.name= 'aws-rds-db-cluster' AND json.rule = status contains available and engine contains postgres as Y; filter '$.X.DBClusterParameterGroupName equals $.Y.dbclusterParameterGroup'; show Y;```,"AWS RDS Postgres Cluster does not have query logging enabled This policy identifies RDS Postgres clusters with query logging disabled. In AWS RDS PostgreSQL, by default, the logging level captures login failures, fatal server errors, deadlocks, and query failures. To log data changes, we recommend enabling cluster logging for monitoring and troubleshooting. To obtain adequate logs, an RDS cluster should have log_statement and log_min_duration_statement parameters configured. 
It is a best practice to enable additional RDS cluster logging, which will help in data change monitoring and troubleshooting. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify the custom DB cluster parameter group to enable query logging, follow the below steps:\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console.\n2. In the navigation pane, choose 'Parameter groups'.\n3. In the list, choose the above-created parameter group that you want to modify.\n4. For Parameter group actions, choose 'Edit'.\n5. Change the value of the 'log_min_duration_statement parameter' to any value other than -1 you want to modify.\n6. Change the value of 'log_statement' according to the requirements.\n7. Choose 'Save Changes'.\n8. Reboot the primary (writer) DB instance in the cluster to apply the changes to it.\n9. Then reboot the reader DB instances to apply the changes to them.\n\nPlease create a custom parameter group if the cluster has only the default parameter group using the following steps:\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console.\n2. In the navigation pane, choose 'Parameter groups'.\n3. Choose 'Create parameter group'. The Create parameter group window appears.\n4. In the Parameter group family list, select a 'DB parameter group family'.\n5. In the Type list, select 'DB cluster parameter group'.\n6. In the Group name box, enter the name of the new DB cluster parameter group.\n7. In the Description box, enter a description for the new DB cluster parameter group.\n8. Choose 'Create'.\n\nTo modify an RDS cluster to use the custom parameter group, follow the below steps:\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console.\n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify.\n3. Choose 'Modify'. The Modify DB instance page appears.\n4. Under 'Additional Configuration', select the above-created cluster parameter group from the DB parameter group dropdown.\n5. Choose 'Continue' and check the summary of modifications.\n6. (Optional) Choose 'Apply immediately' to apply the changes immediately. Choosing this option can cause downtime in some cases.\n7. On the confirmation page, review your changes. If they are correct, choose 'Modify DB instance' to save your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and config.siteAuthEnabled equals false'```,"Azure App Service Web app authentication is off Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app. If an anonymous request is received from a browser, App Service will redirect to a logon page. To handle the logon process, a choice from a set of identity providers can be made, or a custom authentication mechanism can be implemented. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under the Setting section, Click on 'Authentication / Authorization'\n a. 
In case the Identity Provider is not configured: https://learn.microsoft.com/en-gb/azure/app-service/overview-authentication-authorization#identity-providers \n b. In case the identity Provider is configured and disabled:\n i. Edit Authentication Settings\n ii. Set 'App Service Authentication' to 'Enabled'\n iii. Choose other parameters as per your requirement and Click on 'Save'." ```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```,"tbsjmfcdgf_ui_auto_policies_tests_name rjyyqylxvc_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-ec2-describe-snapshots' AND json.rule='createVolumePermissions[*].group contains all' ```,"PCSUP-22910-Policy This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id is member of (""crn:v1:bluemix:public:iam::::role:Administrator"",""crn:v1:bluemix:public:iam::::serviceRole:Manager"") )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""cloud-object-storage"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and (attributes[?any( name is member of (""resource"",""resourceGroupId"",""serviceInstance"",""prefix""))] does not exist or attributes[?any( name equal ignore case ""resourceType"" and value equal ignore case ""bucket"" )] exists ) )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for Cloud object storage buckets This policy identifies IBM Cloud Service ID, which has policy with administrator role permission for cloud object storage service. IBM Cloud Object Storage is a highly scalable, resilient, and secure managed data storage service on the IBM Cloud platform that offers an alternative to traditional block and file storage solutions. When a Service ID having a policy with admin rights on object storage gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the Service ID that is reported and that you want to edit access to.\n4. Under the 'Access' tab, go to the 'Access policies' section and click on the three dots on the right corner of a row for the policy that has administrator permission on the 'IBM Cloud Object Storage' service.\n5. 
Click on Remove or Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to edit or remove, and confirm by clicking Save or Remove.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.createdrg and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletedrg and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatedrg and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createdrgattachment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletedrgattachment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatedrgattachment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.changeinternetgatewaycompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createinternetgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deleteinternetgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updateinternetgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.changelocalpeeringgatewaycompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createlocalpeeringgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletelocalpeeringgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatelocalpeeringgateway and condition.eventType[*] contains com.oraclecloud.natgateway.changenatgatewaycompartment and condition.eventType[*] contains com.oraclecloud.natgateway.createnatgateway and condition.eventType[*] contains com.oraclecloud.natgateway.deletenatgateway and condition.eventType[*] contains com.oraclecloud.natgateway.updatenatgateway and condition.eventType[*] contains com.oraclecloud.servicegateway.attachserviceid and condition.eventType[*] contains com.oraclecloud.servicegateway.changeservicegatewaycompartment and condition.eventType[*] contains com.oraclecloud.servicegateway.createservicegateway and condition.eventType[*] contains com.oraclecloud.servicegateway.deleteservicegateway.begin and condition.eventType[*] contains com.oraclecloud.servicegateway.deleteservicegateway.end and condition.eventType[*] contains com.oraclecloud.servicegateway.detachserviceid and condition.eventType[*] contains com.oraclecloud.servicegateway.updateservicegateway ) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for network gateways changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Network Gateways changes. This policy includes Internet Gateways, Dynamic Routing Gateways, Service Gateways, Local Peering Gateways, and NAT Gateways. Monitoring and alerting on changes to Network Gateways will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Network Gateways. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. 
This policy will not trigger an alert if you have at least one qualifying Event Rule and Notification configured, regardless of whether your OCI tenancy has a single compartment or multiple compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting DRG – Create, DRG – Delete, DRG – Update, DRG Attachment – Create, DRG Attachment – Delete, DRG Attachment – Update, Internet Gateway – Create, Internet Gateway – Delete, Internet Gateway – Update, Internet Gateway – Change Compartment, Local Peering Gateway – Create, Local Peering Gateway – Delete, Local Peering Gateway – Update, Local Peering Gateway – Change Compartment, NAT Gateway – Create, NAT Gateway – Delete, NAT Gateway – Update, NAT Gateway – Change Compartment, Service Gateway – Create, Service Gateway – Delete Begin, Service Gateway – Delete End, Service Gateway – Update, Service Gateway – Attach Service, Service Gateway – Detach Service, Service Gateway – Change Compartment\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-automation-account' AND json.rule = variable[?any(properties.isEncrypted is false)] exists```,"Azure Automation account variables are not encrypted This policy identifies Automation account variables that are not encrypted. Variable assets are values that are available to all runbooks and DSC configurations in your Automation account. When a variable is created, you can specify that it be stored encrypted. Azure Automation stores each encrypted variable securely. It is recommended to enable encryption of Automation account variable assets when storing sensitive data. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Automation Accounts'\n3. Click on the reported Automation Account\n4. Select 'Variables' under 'Shared Resources' from the left panel \nNOTE: If you have Automation account variables storing sensitive data that are not already encrypted, then you will need to delete them and recreate them as encrypted variables.\n5. Delete the unencrypted variables and recreate them by setting the option 'Encrypted' as 'Yes'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ssm-document' AND json.rule = accountSharingInfoList[*].accountId equal ignore case ""all""```","AWS SSM documents are public This policy identifies SSM documents that are public and might allow unintended access. A public SSM document can expose valuable information about your account, resources, and internal processes. It is recommended to share SSM documents only with a limited set of private AWS accounts, based on the requirement. This is applicable to aws cloud and is considered a medium severity issue. 
Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To make an SSM document private follow the steps mentioned in below URL:\n1.Go to the AWS console Systems Manager Dashboard.\n2.If the AWS Systems Manager home page opens first, choose the menu icon to open the navigation pane, and then choose Documents in the navigation pane.\n\n3.In the documents list, choose the document you want to stop sharing, and then choose details. On the Permissions tab, verify that you're the document owner. Only a document owner can stop sharing a document.\n4.Choose Edit.\n5.Select Private option, and enter AWS accountId only with which this document can be shared(leave it blank if not willing to share now). \n6.Choose Save." "```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-sagemaker-endpoint-config' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; config from cloud.resource where api.name = 'aws-sagemaker-endpoint' AND json.rule = endpointStatus does not equal ""Failed"" as Z; filter '($.X.KmsKeyId does not exist or (($.X.KmsKeyId exists and $.Y.keyMetadata.keyState equals Disabled) and $.X.KmsKeyId equals $.Y.keyMetadata.arn)) and ($.X.EndpointConfigName equals $.Z.endpointConfigName)' ; show X;```","AWS SageMaker endpoint data encryption at rest not configured with CMK This policy identifies AWS SageMaker Endpoints not configured with data encryption at rest. AWS SageMaker Endpoint configuration defines the resources and settings for deploying machine learning models to SageMaker endpoints. By default, SageMaker encryption uses transient keys if a KMS key is not specified, which does not provide the control and management benefits of AWS Customer Managed KMS Key. Enabling the encryption helps protect the integrity and confidentiality of the data on the storage volume attached to the ML compute instance that hosts the endpoint. It is recommended to set encryption at rest to mitigate the risk of unauthorized access and potential data breaches. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To ensure that SageMaker endpoint configuration with data encryption using the KMS key, you must create a new EndpointConfig by cloning the existing endpoint configuration used by the endpoint and update it with the required changes.\n\n1. Sign in to the AWS Management Console.\n2. Go to the SageMaker service dashboard at https://console.aws.amazon.com/sagemaker/.\n3. In the navigation panel, under Inference, choose Endpoint configurations.\n4. Select the SageMaker endpoint that is reported, Click on clone on top right corner.\n5. Give a name to the Endpoint configuration and choose the Encryption key. For AWS Managed Keys, enter a KMS key ARN. For customer-managed keys, choose one from the drop-down.\n6. Click Create endpoint configuration.\n\nTo update the endpoint using the endpoint configuration:\n\n1. Sign in to the AWS Management Console.\n2. Go to the SageMaker service dashboard at https://console.aws.amazon.com/sagemaker/.\n3. In the navigation panel, under Inference, choose Endpoints.\n4. Select the SageMaker endpoint that you want to examine, then click on it to access the resource configuration details under the ""settings"" tab.\n5. Scroll down to Endpoint Configuration Settings and click Change.\n6. 
choose to ""use an existing endpoint configuration"" and select the Endpoint configuration which is created earlier with encryption key specified.\n7. Click ""Select endpoint configuration"" and click ""Update Endpoint"" for changes to propagate.." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(1433,1433)""```","Alibaba Cloud Security group allow internet traffic to MS SQL port (1433) This policy identifies Security groups that allow inbound traffic on MS SQL port (1433) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 1433, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[?any(properties.email is empty)] exists) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```,"Azure Microsoft Defender for Cloud security contact additional email is not set This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has not set security contact additional email addresses. Microsoft Defender for Cloud emails the subscription owners whenever a high-severity alert is triggered for their subscription. Providing a security contact email address as an additional email address ensures that the proper people are aware of any potential compromise in order to mitigate the risk in a timely fashion. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Click on 'Email notifications'\n6. Enter a valid security contact email address (or multiple addresses separated by commas) in the 'Additional email addresses' field\n7. Select 'Save'." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles[*] contains ""roles/editor"" or roles[*] contains ""roles/owner"" as X; config from cloud.resource where api.name = 'gcloud-cloud-function' as Y; filter '$.Y.serviceAccountEmail equals $.X.user'; show Y;```","GCP Cloud Function has risky basic role assigned This policy identifies GCP Cloud Functions configured with the risky basic role. Basic roles are highly permissive roles that existed prior to the introduction of IAM and grant wide access over project to the grantee. 
To reduce the blast radius and defend against privilege escalations if the Cloud Function is compromised, it is recommended to follow the principle of least privilege and avoid the use of basic roles. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: It is recommended to follow the principle of least privilege when granting access.\n\nTo assign the desired service account to the Cloud Function, please refer to the URL given below:\nhttps://cloud.google.com/functions/docs/securing/function-identity#individual\n\nTo update privileges granted to a service account, please refer to the URL given below:\nhttps://cloud.google.com/iam/docs/granting-changing-revoking-access." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = domainProcessingStatus equal ignore case active and (logPublishingOptions does not exist or logPublishingOptions.AUDIT_LOGS.enabled is false)```,"AWS Opensearch domain audit logging disabled This policy identifies AWS Opensearch domains with audit logging disabled. Opensearch audit logs enable you to monitor user activity on your Elasticsearch clusters, such as authentication successes and failures, OpenSearch requests, index updates, and incoming search queries. It is recommended to enable audit logging for an Elasticsearch domain to audit activity in the domain. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable audit logs on the AWS Opensearch domain:\n\n1. Sign into the AWS console and navigate to the Opensearch Service Dashboard\n2. In the navigation pane, under 'Managed Clusters', select 'Domains'\n3. Choose the reported Elasticsearch domain\n4. On the Logs tab, select 'Audit logs' and choose 'Enable'.\n5. In the 'Set up audit logs' section, in the 'Select log group from CloudWatch logs' setting, create or use an existing CloudWatch Logs log group as per your requirement\n6. In 'Specify CloudWatch access policy', create a new policy or select an existing policy as per your requirement\n7. Click on 'Enable'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'state equals RUNNABLE and databaseVersion contains SQLSERVER and settings.databaseFlags[*].name does not contain ""user connections""'```","GCP SQL server instance database flag user connections is not set This policy identifies GCP SQL server instances where the database flag 'user connections' is not set. The user connections option specifies the maximum number of simultaneous user connections (value varies in range 10-32,767) that are allowed on an instance of SQL Server. The default is 0, which means that the maximum (32,767) user connections are allowed. It is recommended to set the 'user connections' database flag for SQL Server instances according to an organization-defined value. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported SQL server instance\n4. Click on EDIT\n5. 
If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', click on 'ADD FLAG' in the 'New database flag' section, choose the flag 'user connections' from the drop-down menu, and set the value an appropriate value(10-32,767)\n6. Click on DONE\n7. Click on SAVE \n8. If 'Changes requires restart' pop-up appears, click on 'SAVE AND RESTART'\n." ```config from cloud.resource where cloud.type = 'azure' aND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].osDisk.vhd.uri exists```,"mkurter-testing--0002 This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND cloud.accountgroup NOT IN ( 'AWS' ) AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].osDisk.vhd.uri exists```,"mkurter-testing-pcf-azure This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' AND json.rule = ""_DateTime.ageInDays(createTime) > 90""```","GCP API key not rotating in every 90 days This policy identifies GCP API keys for which the creation date is aged more than 90 days. Google recommends using the standard authentication flow instead of API Keys. However, there are limited cases where API keys are more appropriate. API keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to google cloud console\n2. Navigate to 'Credentials', Under service 'APIs & Services'\n3. In the section 'API Keys', Click on the reported 'API Key Name'\n4. Click on 'REGENERATE KEY' to rotate the API key\n5. On the pop-up window click on 'REPLACE KEY'\n6. Validate the Creation date once it is updated.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-cluster' AND json.rule = status equals ACTIVE and activeServicesCount equals 0```,"AWS ECS cluster not configured with active services This policy identifies ECS clusters that are not configured with active services. ECS service enables you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. It is recommended to remove Idle ECS clusters to reduce the container attack surface or create new services for the reported ECS cluster. For details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete the reported idle ECS Cluster follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/delete_cluster.html\n\nTo create new container services follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html." 
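As a supplement to the AWS ECS idle-cluster policy above, the following is a minimal boto3 sketch that lists ACTIVE ECS clusters with zero active services, mirroring the detection logic of the RQL query. The region is an assumption and deletion is left commented out; review each cluster before removing it.
```python
# Minimal boto3 sketch: find ACTIVE ECS clusters with zero active services.
# Assumes AWS credentials are configured in the environment; the region is an assumption.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

cluster_arns = []
paginator = ecs.get_paginator("list_clusters")
for page in paginator.paginate():
    cluster_arns.extend(page["clusterArns"])

for i in range(0, len(cluster_arns), 100):  # describe_clusters accepts up to 100 ARNs per call
    resp = ecs.describe_clusters(clusters=cluster_arns[i:i + 100])
    for cluster in resp["clusters"]:
        if cluster["status"] == "ACTIVE" and cluster["activeServicesCount"] == 0:
            print(f"Idle cluster: {cluster['clusterName']}")
            # After review, an idle cluster can be removed with:
            # ecs.delete_cluster(cluster=cluster["clusterArn"])
```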
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway' AND json.rule = (['properties.sslPolicy'] does not exist and ['properties.defaultPredefinedSslPolicy'] does not equal ignore case AppGwSslPolicy20220101) or (['properties.sslPolicy'].['policyType'] equal ignore case Predefined and (['properties.sslPolicy'].['policyName'] equal ignore case AppGwSslPolicy20150501 or ['properties.sslPolicy'].['policyName'] equal ignore case AppGwSslPolicy20170401)) or (['properties.sslPolicy'].['policyType'] equal ignore case Custom and (['properties.sslPolicy'].['minProtocolVersion'] equal ignore case TLSv1_0 or ['properties.sslPolicy'].['minProtocolVersion'] equal ignore case TLSv1_1))```,"Azure Application Gateway is configured with SSL policy having TLS version 1.1 or lower This policy identifies Application Gateway instances that are configured to use TLS version 1.1 or lower as the minimum protocol version. The Application Gateway supports SSL encryption using multiple TLS versions and by default, it supports TLS version 1.0 as the minimum version. As a best practice, set the minimum protocol version to TLSv1.2 or higher (if you use a custom SSL policy), or use a predefined policy that supports TLSv1.2 or higher. For more details: https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-ssl-policy-overview This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set the SSL policy to TLSv1.2 or higher, refer to the URL below:\nhttps://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-configure-listener-specific-ssl-policy." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains functionapp and kind does not contain workflowapp and kind does not equal app and config.siteAuthEnabled is false```,"Azure Function App authentication is off This policy identifies Azure Function Apps that have authentication set to off. Azure Function App Authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app. If an anonymous request is received from a browser, Function App will redirect to a logon page. To handle the logon process, a choice from a set of identity providers can be made, or a custom authentication mechanism can be implemented. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under the Settings section, click on 'Authentication'\n5. Click on 'Add identity provider'\n6. Select an identity provider from the dropdown and choose other parameters as per your requirement\n7. Click on 'Add'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (encryptionAtRestOptions.enabled is false or encryptionAtRestOptions does not exist)'```,"AWS Elasticsearch domain Encryption for data at rest is disabled This policy identifies Elasticsearch domains for which encryption is disabled. 
Encryption of data at rest is required to prevent unauthorized users from accessing the sensitive information available on your Elasticsearch domain components. This may include all data of file systems, primary and replica indices, log files, memory swap files and automated snapshots. Elasticsearch uses the AWS KMS service to store and manage the encryption keys. It is highly recommended to implement encryption at rest when you are working with production data that contains sensitive information, to protect it from unauthorized access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Enabling the encryption feature on existing domains requires Elasticsearch 6.7 or later. If your Elasticsearch version is 6.7 or later, follow the steps below to enable encryption on the existing Elasticsearch domain:\n1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to the Elasticsearch Service Dashboard\n4. Choose the reported Elasticsearch domain\n5. Click on the 'Actions' button, and from the drop-down select 'Modify encryptions'\n6. On the Modify encryptions page, select the 'Enable encryption of data at rest' checkbox and choose a KMS key as per your requirement. It is recommended to choose KMS CMKs instead of the default KMS key [Default(aws/es)] to get more granular control on your Elasticsearch domain data.\n7. Click on 'Submit'.\n\nIf your Elasticsearch version is lower than 6.7, AWS Elasticsearch domain encryption can be set only at the time of domain creation. So to fix this alert, create a new domain with encryption using KMS Keys and then migrate all required Elasticsearch domain data from the reported Elasticsearch domain to this newly created domain.\nTo set up the new Elasticsearch domain with encryption using a KMS Key, refer to the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html\n\nTo delete the reported ES domain, refer to the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-deleting.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```,"Copy of AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Log in to the AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. 
Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = keys[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is true```,"Azure Key Vault Key has no expiration date (RBAC Key vault) This policy identifies Azure Key Vault keys that do not have an expiration date for the RBAC Key vaults. As a best practice, set an expiration date for each key and rotate your keys regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].id' | xargs -I {} az role assignment create --assignee """" --role ""Key Vault Reader"" --scope {} This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Select 'All services' > 'Key vaults'.\n3. Select the Key vault where the key is stored.\n4. Select 'Keys', and select the key that you need to modify.\n5. Select the current version.\n6. Set the expiration date.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ""$.serverBlobAuditingPolicy.properties.retentionDays does not exist or $.serverBlobAuditingPolicy.properties.state equals Disabled""```","Azure SQL Server auditing is disabled Audit logs can help you find suspicious events, unusual activity, and trends to analyze database events. Auditing the SQL Server, at the server-level, enables you to track all new and existing databases on the server. This policy identifies SQL servers do not have auditing enabled. As a best practice, enable auditing on each SQL server so that the database are audited, regardless of the database auditing settings. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal.\n2. Select 'SQL servers', and select the SQL server instance you want to modify.\n3. Select 'Auditing', and set the status to 'On'.\n4. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-securityhub-hub' AND json.rule = SubscribedAt exists as X; count(X) less than 1```,"AWS Security Hub is not enabled This policy identifies the AWS Security Hub that is not enabled in specific regions. AWS Security Hub is a centralized security management service by Amazon Web Services, providing a comprehensive view of your security posture and automating security checks across AWS accounts. Failure to enable AWS Security Hub in all regions may lead to limited visibility and compromised threat detection across your AWS environment. It is recommended to enable AWS Security Hub in all regions for consistent visibility and enhanced threat detection across your AWS environment. 
This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the AWS Security Hub, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Security, Identity, & Compliance', select 'Security Hub'\n4. When you open the Security Hub console for the first time, choose 'Go to Security Hub'\n5. On the welcome page, the 'Security standards' section lists the security standards that Security Hub supports\n6. Select the check box for a standard to enable it\n8. Choose 'Enable Security Hub'." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.encryption.status equal ignore case disabled```,"test c p This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-list' as X; config from cloud.resource where api.name = 'gcloud-dns-policy' as Y; filter 'not($.Y.networks[*].networkUrl contains $.X.name and $.Y.enableLogging is true)'; show X;```,"GCP VPC network not configured with DNS policy with logging enabled This policy identifies the GCP VPC networks which are not configured with DNS policy with logging enabled. Monitoring of Cloud DNS logs provides visibility to DNS names requested by the clients within the VPC. These logs can be monitored for anomalous domain names and evaluated against threat intelligence. It is recommended to enable DNS logging for all the VPC networks. Note: For full capture of DNS, firewall must block egress UDP/53 (DNS) and TCP/443 (DNSover HTTPS) to prevent client from using external DNS name server for resolution. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To add DNS server policy with logging to a VPC network,\n\n1. Login to GCP console\n2. Navigate to service 'VPC network'(Left Panel)\n3. Click on the alerting VPC network\n4. Click on 'EDIT'\n5. Under 'DNS server policy' dropdown, select an available service policy or 'create a new server policy' as required\nLink: https://cloud.google.com/dns/docs/policies#creating \n6. Click on 'SAVE'\nTo enable logging to a DNS policy that is attached to a VPC follow the below reference,\n\n1. Login to GCP console\n2. Navigate to service 'VPC network'(Left Panel)\n3. Click on the alerting VPC network\n4. Click on the attached 'DNS server policy'\n5. Click on 'EDIT POLICY'\n6. Under section 'Logs' select 'On'\n7. Click on 'SAVE'." 
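For the GCP VPC DNS policy logging check above, the sketch below shows one way to turn on query logging for an existing Cloud DNS server policy using the Cloud DNS API via google-api-python-client. The project and policy names are placeholders, and the patch body relies on the Policy resource's enableLogging field as an assumption; verify against the current Cloud DNS API reference before use.
```python
# Hedged sketch: enable logging on an existing Cloud DNS server policy.
# Requires google-api-python-client and application-default credentials.
from googleapiclient import discovery

PROJECT = "my-project"         # placeholder
POLICY_NAME = "my-dns-policy"  # placeholder

dns = discovery.build("dns", "v1")

# Patch only the enableLogging field of the server policy (field name assumed from the API docs).
request = dns.policies().patch(
    project=PROJECT,
    policy=POLICY_NAME,
    body={"enableLogging": True},
)
response = request.execute()
print(response)
```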
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = (properties.roleDefinition.properties.type equals CustomRole and (properties.roleDefinition.properties.permissions[?any((actions[*] equals Microsoft.Authorization/locks/delete and actions[*] equals Microsoft.Authorization/locks/read and actions[*] equals Microsoft.Authorization/locks/write) or actions[*] equals Microsoft.Authorization/locks/*)] exists) and (properties.roleDefinition.properties.permissions[?any(notActions[*] equals Microsoft.Authorization/locks/delete or notActions[*] equals Microsoft.Authorization/locks/read or notActions[*] equals Microsoft.Authorization/locks/write or notActions[*] equals Microsoft.Authorization/locks/*)] does not exist)) as X; count(X) less than 1```,"liron test custom policy #3 run + build policy This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sqs-get-queue-attributes' AND json.rule = attributes.Policy.Statement[?any(Effect equals Allow and (Action anyStartWith sqs: or Action anyStartWith SQS:) and (Principal.AWS contains * or Principal equals *) and Condition does not exist)] exists```,"AWS SQS queue access policy is overly permissive This policy identifies Simple Queue Service (SQS) queues that have an overly permissive access policy. It is highly recommended to have the least privileged access policy to protect the SQS queue from data leakage and unauthorized access. For more details: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to Simple Queue Service (SQS) dashboard\n4. Choose the reported Simple Queue Service (SQS) and choose 'Edit'\n5. Scroll to the 'Access policy' section\n6. Edit the access policy statements in the input box, Make sure the 'Principal' is not set to '*', which makes your SQS queues accessible to any anonymous users.\n7. When you finish configuring the access policy, choose 'Save'.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'accessKeys[*] size > 1 and accessKeys[*].status all equal Active'```,"Alibaba Cloud RAM user has more than one active access keys This policy identifies Resource Access Management (RAM) users who have more than one active access keys. RAM users having more than one key can lead to increased chances of accidental exposure. As a best security practice, it is recommended to delete unused access keys. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Login to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Select the reported user\n5. In the 'Authentication' tab, under 'User AccessKeys'\n6. 
In the list of access keys, Make a note on the access keys which is not used or not required as per your requirements.\n7. Click on 'Delete'\n8. On the 'Delete AccessKey' popup window, select 'I am aware of the risk and confirm that the deletion' and click on 'Close'.." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(5432,5432)""```","Alibaba Cloud Security group allow internet traffic to PostgreSQL port (5432) This policy identifies Security groups that allow inbound traffic on PostgreSQL port (5432) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 5432, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""containers-kubernetes"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""resourceType"",""serviceInstance"",""namespace""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for Kubernetes Service This policy identifies IBM Cloud Service IDs with overly permissive Kubernetes Administrative role. When a Service ID having a policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the Service ID which is reported and you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section, and click on the three dots on the right corner of a row for the policy which is having Administrator permission on 'Kubernetes Service'.\n5. 
Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." "```config from cloud.resource where api.name = 'gcloud-compute-project-info' AND json.rule = commonInstanceMetadata.kind equals ""compute#metadata"" and commonInstanceMetadata.items[?any(key contains ""enable-oslogin"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] does not exist and commonInstanceMetadata.items[?any(key contains ""ssh-keys"")] exists as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and ( metadata.items[?any(key exists and key contains ""block-project-ssh-keys"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] does not exist and metadata.items[?any(key exists and key contains ""enable-oslogin"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] does not exist and name does not start with ""gke-"") as Y; filter '$.Y.zone contains $.X.name'; show Y;```","GCP VM instances have block project-wide SSH keys feature disabled This policy identifies VM instances which have block project-wide SSH keys feature disabled. Project-wide SSH keys are stored in Compute/Project-metadata. Project-wide SSH keys can be used to login into all the instances within a project. Using project-wide SSH keys eases the SSH key management but if compromised, poses the security risk which can impact all the instances within a project. It is recommended to use Instance specific SSH keys which can limit the attack surface if the SSH keys are compromised. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Computer Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Edit button\n6. Under SSH Keys section, Check 'Block project-wide SSH keys' on the checkbox\n7. Click on Save." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'maxPasswordAge !isType Integer or maxPasswordAge > 90 or maxPasswordAge equals 0'```,"Alibaba Cloud RAM password policy does not expire in 90 days This policy identifies Alibaba Cloud accounts for which do not have password expiration set to 90 days or less. As a best practice, change your password every 90 days or sooner to ensure secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password Validity Period' field, enter 90 or less based on your requirement.\n6. Click on 'OK'\n7. Click on 'Close'." 
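For the Alibaba Cloud RAM password expiration policy above, here is a hedged sketch using the Alibaba Cloud Python SDK's generic CommonRequest to set the password validity period to 90 days. The action and parameter names (SetPasswordPolicy, MaxPasswordAge), the API version, and the credentials/region are assumptions taken from the RAM API documentation; confirm them before running.
```python
# Hedged sketch: set the RAM password validity period to 90 days.
# Assumptions: action name 'SetPasswordPolicy', parameter 'MaxPasswordAge',
# API version '2015-05-01'; credentials and region are placeholders.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

request = CommonRequest()
request.set_domain("ram.aliyuncs.com")
request.set_version("2015-05-01")
request.set_action_name("SetPasswordPolicy")
request.add_query_param("MaxPasswordAge", "90")  # expire passwords after 90 days

response = client.do_action_with_exception(request)
print(response)
```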
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = ""configurations.value[?(@.name=='log_checkpoints')].properties.value equals OFF or configurations.value[?(@.name=='log_checkpoints')].properties.value equals off""```","Azure PostgreSQL database server with log checkpoints parameter disabled This policy identifies PostgreSQL database servers for which server parameter is not set for log checkpoints. Enabling log_checkpoints helps the PostgreSQL Database to Log each checkpoint in turn generates query and error logs. However, access to transaction logs is not supported. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. From the list of parameters find 'log_checkpoints' and set it to on\n6. Click on 'Save' button from top menu to save the change.." "```config from cloud.resource where api.name = 'aws-elasticbeanstalk-environment' AND json.rule = status does not equal ""Terminated"" as X; config from cloud.resource where api.name = 'aws-elasticbeanstalk-configuration-settings' AND json.rule = configurationSettings[*].optionSettings[?any( optionName equals ""StreamLogs"" and value equals ""false"" )] exists as Y; filter ' $.X.environmentName equals $.Y.configurationSettings[*].environmentName and $.X.applicationName equals $.Y.configurationSettings[*].applicationName'; show X;```","AWS Elastic Beanstalk environment logging not configured This policy identifies the Elastic Beanstalk environments not configured to send logs to CloudWatch Logs. An Elastic Beanstalk environment is a configuration of AWS resources where you can deploy your application. The environment logs refer to the logs generated by various components of your application, which can provide valuable insights into any errors or issues that may arise during operation. Failing to enable logging in an Elastic Beanstalk environment reduces visibility, hinders incident detection and response, and increases vulnerability to security breaches. It is recommended to configure AWS Elastic Beanstalk environments to send logs to CloudWatch to ensure security and meet compliance requirements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To stream Elastic Beanstalk environment logs to CloudWatch Logs,\n1. Sign in to the AWS console.\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Go to 'Elastic Beanstalk' service.\n4. In the navigation pane, choose 'Environments', then select the reported environment's name from the list.\n5. In the navigation pane, choose Configuration.\n6. In the 'Updates, monitoring, and logging' configuration category, choose Edit.\n7. Under 'Instance log streaming to CloudWatch Logs', Enable Log streaming by selecting the 'Activated' checkbox.\n8. Set 'Retention' to the number of days to save the logs.\n9. 
Select the 'Lifecycle' setting that determines whether the logs are saved after the environment is terminated according to your business requirements.\n10. To save the changes choose 'Apply' at the bottom of the page.." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals username_password and state_description equal ignore case active and (_DateTime.ageInDays(last_update_date) > 90)'```,"IBM Cloud Secrets Manager user credentials have aged more than 90 days without being rotated This policy identifies IBM Cloud Secrets Manager user credentials that have aged more than 90 days without being rotated. User credentials should be rotated to ensure that data cannot be accessed with an old password which might have been lost, cracked, or stolen. It is recommended that user credentials are regularly rotated. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list'. From the list of resources, select the secret manager instance in which the reported secret resides, under security section.\n3. Select the secret and click on 'Actions' dropdown.\n4. Select 'Rotate' from the dropdown.\n5. In the 'Rotate secret' screen, provide data as required.\n6. Click on 'Rotate'.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains SQLSERVER and settings.databaseFlags[?(@.name=='cross db ownership chaining')].value equals on""```","GCP SQL Server instance database flag 'cross db ownership chaining' is enabled This policy identifies GCP SQL Server instance database flag 'cross db ownership chaining' is enabled. Enabling cross db ownership is not recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you are aware of the security implications of this setting. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the SQL Server instance for which you want to disable the database flag from the list\n4. Click 'Edit'\n5. Go to 'Flags and Parameters' under 'Configuration options' section\n6. Search for the flag 'cross db ownership chaining' and set the value 'off'\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-insights-component' AND json.rule = properties.provisioningState equals Succeeded and (properties.DisableLocalAuth does not exist or properties.DisableLocalAuth is false)```,"Azure Application Insights not configured with Azure Active Directory (Azure AD) authentication This policy identifies Application Insights that are not configured with Azure Active Directory (AAD) authentication and are enabled with local authentication. Disabling local authentication and using AAD-based authentication enhances the security and reliability of the telemetry used to make both critical operational and business decisions. It is recommended to configure the Application Insights with Azure Active Directory (AAD) authentication so that all actions are strongly authenticated. 
This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Azure Active Directory (AAD) authentication and disable local authentication on existing Application Insights, follow the below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/azure-monitor/app/azure-ad-authentication." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = kind equal ignore case OpenAI and properties.provisioningState equal ignore case Succeeded and (properties.restrictOutboundNetworkAccess does not exist or properties.restrictOutboundNetworkAccess is false or (properties.restrictOutboundNetworkAccess is true and properties.allowedFqdnList is empty))```,"Azure Cognitive Services account hosted with OpenAI is not configured with data loss prevention This policy identifies Azure Cognitive Services accounts hosted with OpenAI that are not configured with data loss prevention. Azure AI services offer data loss prevention capabilities that allow customers to configure the list of outbound URLs their Azure AI services resources can access. As a best practice, it is recommended to enable the data loss prevention feature in OpenAI-hosted Azure Cognitive Services accounts to prevent data loss. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable data loss prevention on existing Azure Cognitive Services account hosted with OpenAI, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-data-loss-prevention?tabs=azure-cli#enabling-data-loss-prevention." ```config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-aiplatform-pipeline-job' as Y; filter ' $.Y.runtimeConfig.gcsOutputDirectory contains $.X.id '; show X;```,"GCP Storage Bucket storing GCP Vertex AI pipeline output data This policy identifies publicly exposed GCS buckets that are used to store GCP Vertex AI pipeline output data. GCP Vertex AI pipeline output data is stored in the Storage Bucket. This output data is considered sensitive and confidential intellectual property and its storage location should be checked regularly. The storage location should be as per the organization's security and compliance requirements. It is recommended to monitor, identify, and evaluate storage location for GCP Vertex AI pipeline output data regularly to prevent unauthorized access and AI model thefts. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Review and validate the GCP Vertex AI pipeline output data is stored in the right Storage bucket. Move and/or delete the output data if it is found in an unexpected location. Review how the Vertex AI pipeline was configured to output to an unauthorised/unapproved storage bucket.." 
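To support the review step described in the GCP Vertex AI pipeline output policy above, here is a small google-cloud-storage sketch that checks whether a given output bucket has IAM bindings granting access to allUsers or allAuthenticatedUsers. The bucket name is a placeholder, and this only covers IAM-based public access, not legacy ACLs.
```python
# Hedged sketch: flag public IAM bindings on a Vertex AI pipeline output bucket.
# Requires the google-cloud-storage package and application-default credentials.
from google.cloud import storage

BUCKET_NAME = "my-pipeline-output-bucket"  # placeholder

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)
policy = bucket.get_iam_policy(requested_policy_version=3)

public_members = {"allUsers", "allAuthenticatedUsers"}
for binding in policy.bindings:
    exposed = public_members.intersection(binding.get("members", set()))
    if exposed:
        print(f"Public binding on {BUCKET_NAME}: role={binding['role']} members={sorted(exposed)}")
```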
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-volume' AND json.rule = volume_attachments[*] size greater than 0 and volume_attachments[*].type equals boot and encryption equal ignore case provider_managed```,"IBM Cloud OS disk is not encrypted with customer managed keys This policy identifies IBM Cloud OS disks attached to a virtual server instance which are not encrypted with customer managed keys. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: An OS disk (boot storage volume) can be encrypted with customer managed keys only at the time of creation of the virtual server instance. Please\ncreate a snapshot of the reported OS disk following the URL below:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nThen create a virtual server instance whose OS disk is restored from the snapshot created above, using customer managed encryption:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-restore&interface=ui#snapshots-vpc-restore-vol-ui\n\nOnce the new virtual server instance is created, delete the virtual server instance to which the reported OS disk was attached:\nhttps://cloud.ibm.com/docs/hp-virtual-servers?topic=hp-virtual-servers-remove_vs#delete_vs\n\nNote: Deleting a virtual server instance is irreversible; make sure to back up any required data.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-log-analytics-workspace' AND json.rule = properties.provisioningState equals Succeeded and (properties.publicNetworkAccessForQuery equals Enabled or properties.publicNetworkAccessForIngestion equals Enabled)```,"Azure Log Analytics workspace configured with overly permissive network access This policy identifies Log Analytics workspaces configured with overly permissive network access. The virtual networks access configuration in a Log Analytics workspace allows you to restrict data ingestion and queries coming from the public networks. It is recommended to configure the Log Analytics workspace with the virtual networks access configuration set to restrict, so that the Log Analytics workspace is accessible only through restricted Azure Monitor Private Link Scopes. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to the Log Analytics workspaces dashboard \n3. Click on the reported Log Analytics workspace\n4. Under the 'Settings' menu, click on 'Network Isolation'\n5. Create an Azure Monitor Private Link Scope if one is not already created, by referring to:\nhttps://docs.microsoft.com/en-us/azure/azure-monitor/logs/private-link-configure#create-an-azure-monitor-private-link-scope\n6. After creating it, under 'Virtual networks access configuration', \nset 'Accept data ingestion from public networks not connected through a Private Link Scope' to 'No' and \nset 'Accept queries from public networks not connected through a Private Link Scope' to 'No'\n7. Click on 'Save'." 
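For the Azure Log Analytics workspace network access policy above, the following is a hedged sketch that disables public ingestion and query access via the ARM REST API. The property names come directly from the policy query; the api-version and the subscription, resource group, and workspace names are assumptions/placeholders. Note that the portal remediation also links the workspace to an Azure Monitor Private Link Scope, which this snippet does not do.
```python
# Hedged sketch: disable public ingestion/query access on a Log Analytics workspace.
# Requires the azure-identity and requests packages; IDs and api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
WORKSPACE = "<workspace-name>"
API_VERSION = "2022-10-01"  # assumed; check the Microsoft.OperationalInsights reference

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{WORKSPACE}"
)
body = {
    "properties": {
        "publicNetworkAccessForIngestion": "Disabled",
        "publicNetworkAccessForQuery": "Disabled",
    }
}
resp = requests.patch(
    url,
    params={"api-version": API_VERSION},
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
print(resp.json()["properties"])
```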
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = tags[*] exists```,"Izabella config with tags test 1 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-virtual-desktop-workspace' AND json.rule = diagnostic-settings[?none( properties.logs[?any( enabled is true )] exists )] exists```,"Azure Virtual Desktop workspace diagnostic log is disabled This policy identifies Azure Virtual Desktop workspaces where diagnostic logs are not enabled. Diagnostic logs are vital for monitoring and troubleshooting Azure Virtual Desktop, which offers virtual desktops and remote app services. They help detect and resolve issues, optimize performance, and meet security and compliance standards. Without these logs, it’s difficult to track activities and detect anomalies, potentially jeopardizing security and efficiency. As a best practice, it is recommended to enable diagnostic logs for Azure Virtual Desktop workspaces. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Azure Virtual Desktop'\n2. Select 'Azure Virtual Desktop'\n3. Under 'Manage' select 'Workspaces'\n4. Select the reported Workspace\n5. Under 'Monitoring' select 'Diagnostic settings'\n6. Under Diagnostic settings tab. Click on '+ Add diagnostic setting' to create a new Diagnostic Setting\n7. Specify a 'Diagnostic settings name'\n8. Under section 'Categories', select the type of log that you want to enable\n9. Under section 'Destination details'\n a. If you select 'Send to Log Analytics', select the 'Subscription' and 'Log Analytics workspace'\n b. If you set 'Archive to storage account', select the 'Subscription' and 'Storage account'\n c. If you set 'Stream to an event hub', select the 'Subscription', 'Event hub namespace', 'Event hub name' and set the 'Event hub policy name'\n10. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[?any(properties.email is empty and alertNotifications equal ignore case Off)] exists) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```,"Azure Microsoft Defender for Cloud security alert email notifications is not set This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which have not set security alert email notifications. Enabling security alert emails ensures that security alert emails are received from Microsoft. This ensures that the right people are aware of any potential security issues and are able to mitigate the risk. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Click on 'Email notifications'\n6. 
Under 'Notification types', check the check box next to Notify about alerts with the following severity (or higher): and select High from the drop down menu\n7. Select 'Save'." "```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equal ignore case ""Microsoft.Keyvault"" as X; config from cloud.resource where api.name = 'azure-key-vault-list' and json.rule = keys[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists as Y; filter '$.Y.properties.vaultUri contains $.X.properties.encryption.keyvaultproperties.keyvaulturi'; show X;```","Azure Storage account encryption key is not rotated regularly This policy identifies Azure Storage accounts which are encrypted by an encryption key that is not rotated regularly. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Stroage account encryption key rotation; refer below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-configure-existing-account?tabs=azure-portal#configure-encryption-for-automatic-updating-of-key-versions\n\nNOTE: Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_ss_finding_1 Description-d63012c8-3c89-4ac2-ac4f-6c6523921d5f This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and engine equals aurora-postgresql and engineVersion is member of ('10.11','10.12','10.13','11.6','11.7','11.8')```","AWS Aurora PostgreSQL exposed to local file read vulnerability This policy identifies AWS Aurora PostgreSQL which are exposed to local file read vulnerability. AWS Aurora PostgreSQL installed with vulnerable 'log_fdw' extension is exposed to local file read vulnerability, due to which attacker could gain access to local system files of the database instance within their account, including a file which contained credentials specific to Aurora PostgreSQL. It is highly recommended to upgrade AWS Aurora PostgreSQL to the latest version. For more information, https://aws.amazon.com/security/security-bulletins/AWS-2022-004/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Amazon has deprecated affected versions of Aurora PostgreSQL and customers can no longer create new instances with the affected versions.\n\nTo upgrade the latest version of Amazon Aurora PostgreSQL, please follow below URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html\n." 
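For the Aurora PostgreSQL local file read policy above, the following is a minimal boto3 sketch that lists available aurora-postgresql instances still running one of the engine versions named in the policy. The region and target version are assumptions; engine upgrades are applied at the cluster level and should be validated in a non-production environment first.
```python
# Minimal boto3 sketch: find aurora-postgresql instances on the affected engine versions.
import boto3

AFFECTED_VERSIONS = {"10.11", "10.12", "10.13", "11.6", "11.7", "11.8"}

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

paginator = rds.get_paginator("describe_db_instances")
for page in paginator.paginate():
    for db in page["DBInstances"]:
        if (
            db.get("Engine") == "aurora-postgresql"
            and db.get("DBInstanceStatus") == "available"
            and db.get("EngineVersion") in AFFECTED_VERSIONS
        ):
            print(f"Affected instance: {db['DBInstanceIdentifier']} "
                  f"(cluster: {db.get('DBClusterIdentifier')}, version: {db['EngineVersion']})")
            # Example remediation, run against the cluster after validation:
            # rds.modify_db_cluster(DBClusterIdentifier=db["DBClusterIdentifier"],
            #                       EngineVersion="<target-version>",
            #                       AllowMajorVersionUpgrade=True,
            #                       ApplyImmediately=True)
```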
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kinesis-firehose-delivery-stream' AND json.rule = deliveryStreamEncryptionConfiguration exists and deliveryStreamEncryptionConfiguration.status equals DISABLED```,"AWS Kinesis Firehose with Direct PUT as source has SSE encryption disabled This policy identifies Amazon Kinesis Firehose with Direct PUT as source which has Server-side encryption (SSE) encryption disabled. Enabling Server Side Encryption allows you to meet strict regulatory requirements and enhance the security of your data at rest. As a best practice, enable SSE for the Amazon Kinesis Firehose. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to Amazon Kinesis Service\n3. Click on 'Delivery streams'\n4. Select the reported Kinesis Firehose for the corresponding region\n5. Click on 'Configuration' tab\n6. Under Server-side encryption, Click on Edit\n7. Choose 'Enable server-side encryption for source records in delivery stream'\n8. Under 'Encryption type' select 'Use AWS owned CMK'\n9. Click 'Save changes'." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = lifecycleState equal ignore case running AND (platformConfig does not exist OR platformConfig equal ignore case ""null"" OR platformConfig.isSecureBootEnabled is false)```","OCI Compute Instance with Secure Boot disabled This policy identifies OCI compute instances in which Secure Boot is disabled. Secure Boot serves as a security standard ensuring that a machine exclusively boots using Original Equipment Manufacturer (OEM) trusted software. Without the activation of Secure Boot, a compute instance becomes susceptible to booting unauthorized or malicious software, posing a threat to the integrity and security of the instance. Consequently, this vulnerability can lead to unauthorized access, data breaches, or other malicious activities within the instance. As a security best practice, enabling Secure Boot on all compute instances is strongly recommended to guarantee the exclusive execution of trusted software during the boot process. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Secure Boot can only be enabled during resource creation. To fix this, you must terminate the reported instance and create a new one with Secure Boot enabled.\n\n1. Log in to the OCI Console.\n2. Switch to the Region of the reported resource from the Region drop-down in top-right corner.\n3. Type the reported compute instance name into the Search box at the top of the Console.\n4. Click on the reported compute instance from the search results.\n5. Click 'Terminate' to terminate the instance (decide whether to permanently delete the instance's attached boot volume).\n6. To recreate the compute instance with Secure Boot enabled, navigate to the instance creation page.\n7. Click 'Create Instance'.\n8. In the 'Image and Shape' section, select an Image and Shape that support Shielded Instance configuration, indicated by the shield icon.\n9. In the 'Security' section, click 'Edit'.\n10. Enable 'Shielded Instance', then activate the 'Secure Boot' toggle.\n11. Complete the remaining details as required.\n12. Click 'Create'.." 
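For the Kinesis Firehose SSE finding in the row above, the console steps can also be approximated with the Firehose API. The following is a hedged boto3 sketch; the stream name, region, and the choice of an AWS owned CMK (mirroring step 8 of the console instructions) are assumptions.

```python
# Illustrative only: enable SSE with an AWS owned CMK on a Direct PUT
# delivery stream whose encryption status is currently DISABLED.
import boto3

def enable_firehose_sse(stream_name, region="us-east-1"):
    firehose = boto3.client("firehose", region_name=region)
    desc = firehose.describe_delivery_stream(DeliveryStreamName=stream_name)
    status = (desc["DeliveryStreamDescription"]
              .get("DeliveryStreamEncryptionConfiguration", {})
              .get("Status"))
    if status == "DISABLED":
        firehose.start_delivery_stream_encryption(
            DeliveryStreamName=stream_name,
            DeliveryStreamEncryptionConfigurationInput={"KeyType": "AWS_OWNED_CMK"},
        )
```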
"```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND resource.status = Active AND json.rule = tags[*].key none equal ""application"" AND tags[*].key none equal ""Application""```","pcsup-aws-policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ACTIVE and serviceConfig.vpcConnector does not exist```,"GCP Cloud Function not enabled with VPC connector for network egress This policy identifies GCP Cloud Functions that are not enabled with a VPC connector for network egress. This includes both Cloud Functions v1 and Cloud Functions v2. Using a VPC connector for network egress in GCP Cloud Functions is crucial to prevent security risks such as data interception and unauthorized access. This practice strengthens security by allowing safe communication with private resources, enhancing traffic monitoring, reducing the risk of data leaks, and ensuring compliance with security policies. Note: For a Cloud Function to access public traffic using Serverless VPC Connector, Cloud NAT might be needed. Link: https://cloud.google.com/functions/docs/networking/network-settings#route-egress-to-vpc It is recommended to configure GCP Cloud Functions with a VPC connector. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings’ drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. Under Section 'Egress settings', select a VPC connector from the dropdown\n8. In case VPC connector is not available, either select 'Custom' and provide the name of the VPC Connector manually or click on 'Create a Serverless VPC Connector' and follow the link to create a Serverless VPC connector: https://cloud.google.com/vpc/docs/configure-serverless-vpc-access\n9. Once the Serverless VPC connector is available, select it from the dropdown\n10. Select 'Route only requests to private IPs through the VPC connector' or 'Route all traffic through the VPC connector' as per your organization's policies.\n10. Click on 'NEXT'\n11. Click on 'DEPLOY'." "```config from cloud.resource where api.name = 'alibaba-cloud-ecs-instance' as X; config from cloud.resource where api.name = 'alibaba-cloud-ecs-security-group' as Y; filter ""$.X.publicIpAddress[*] is not empty and $.X.securityGroupIds[*] contains $.Y.securityGroupId and $.Y.permissions[?(@.policy=='Accept' && @.direction=='ingress')].sourceCidrIp contains 0.0.0.0/0""; show X;```","Alibaba Cloud ECS instance that has a public IP address and is attached to a security group with internet access This policy identifies ECS instances that have a public IP address and are attached to security groups with internet access. Because an ECS instance receives a public IP address at the launch, by default, as a best practice ensure that the instance is attached to a security group which is not overly permissive. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Instead of using a public IP address for the ECS instance, either associate an Elastic IP address to it or evaluate the rules for the security groups to ensure restricted access.\n\nTo allocate an Elastic IP address, follow the instructions below:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. Select the reported ECS instance\n4. Choose More > Network and Security Group > Convert to EIP\n5. On 'Convert to EIP' popup window, click on 'OK'\n\nTo restrict Security Groups allowing all traffic, follow the instructions below:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. Click on the reported ECS instance\n4. In the left-side navigation pane, choose Security Groups\n5. Check the rules of each security group by clicking on 'Add Rules' in the Actions column\n6. In Inbound tab, Select the rule having 'Action' as Allow and 'Authorization Object' as 0.0.0.0/0, Click Modify in the Actions column\n7. Replace the value 0.0.0.0/0 with specific IP address range.\n8. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"Chao Copy of Critical - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-credential-report' AND json.rule = 'user equals """" and ( _DateTime.ageInDays(access_key_1_last_used_date) < 14 or _DateTime.ageInDays(access_key_2_last_used_date) < 14 or _DateTime.ageInDays(password_last_used) < 14 )'```","AWS root account activity detected in last 14 days This policy identifies if AWS root account activity was detected within the last 14 days. The AWS root account user is the primary administrative identity associated with an AWS account, providing complete access to all AWS services and resources. Since the root user has complete access to the account, adopting the principle of least privilege is important to lower the risk of unintentional disclosure of highly privileged credentials and inadvertent alterations. It's also advised to remove the root user access keys and restrict the use of the root user, refraining from using them for routine or administrative duties. It is recommended to restrict the use of the AWS root account. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: If any access keys are created for the root account, please delete the keys using the following steps:\n\n1. Sign in to AWS Console as the root user.\n2. 
Click the root account name and on the top right select 'Security Credentials' from the dropdown.\n3. For each key in 'Access Keys', click on 'X' to delete the keys.\n\nLimiting root user console access as much as feasible is advised.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.createvcn and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletevcn and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatevcn) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for VCN changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Virtual Cloud Networks (VCN) changes. Monitoring and alerting on changes to VCN will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Virtual Cloud Networks (VCN). NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event rule at the root compartment level. 2. This policy will not trigger an alert if you have at least one Event Rule and Notification configured, even if OCI has single or multiple compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting VCN – Create, VCN - Delete and VCN – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = state.code equals active and type equals ""network"" and listeners[?any(protocol equals TLS and sslPolicy exists and sslPolicy does not contain ELBSecurityPolicy-TLS13-1-2-2021-06)] exists```","AWS Network Load Balancer (NLB) is not using the latest predefined security policy This policy identifies Network Load Balancers (NLBs) which are not using the latest predefined security policy. A security policy is a combination of protocols and ciphers. The protocol establishes a secure connection between a client and a server and ensures that all data passed between the client and your load balancer is private. A cipher is an encryption algorithm that uses encryption keys to create a coded message. So it is recommended to use the latest predefined security policy, which uses only secure protocols and ciphers. For more details: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#describe-ssl-policies This is applicable to aws cloud and is considered a low severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n4. Click on the reported Load Balancer\n5. On the 'Listeners' tab, Choose the 'TLS' rule\n6. Click on 'Edit', Change 'Security policy' to 'ELBSecurityPolicy-TLS13-1-2-2021-06'\n7. Click on 'Update' to save your changes." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-load-balancer' AND json.rule = 'deleteProtection equals off'```,"Alibaba Cloud SLB delete protection is disabled This policy identifies Server Load Balancers (SLB) for which delete protection is disabled. Enabling delete protection for these SLBs prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Select the reported SLB instance, select More > Manage\n4. In the Instance Details tab, Slide the 'Deletion Protection' button to green.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and backendSets.* is not empty and backendSets.*.sslConfiguration.certificateName is empty```,"OCI Load balancer backend set not configured with SSL certificate This policy identifies Load balancers for which the backend set is not configured with an SSL certificate. Without an SSL certificate, data transferred between the load balancer and backend servers is not encrypted, making it vulnerable to interception and attacks. Proper SSL configuration ensures data integrity and privacy, protecting sensitive information from unauthorized access. As a best practice, it is recommended to implement SSL between the load balancer and your backend servers so that traffic between the load balancer and the backend servers is encrypted. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure SSL for your Load balancer backend set, follow the below URLs:\nFor adding certificate - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingcertificates.htm#configuringSSLhandling\nFor editing backend set - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingbackendsets.htm#UpdateBackendSet." ```config from cloud.resource where api.name = 'aws-ec2-describe-network-acls' AND json.rule = associations[*] size less than 1```,"AWS Network ACL is not in use This policy identifies AWS Network ACLs that are not in use. AWS Network Access Control Lists (NACLs) serve as a firewall mechanism to regulate traffic flow within and outside VPC subnets. A recommended practice is to assign NACLs to specific subnets to effectively manage network traffic. Unassigned NACLs with inadequate rules might inadvertently get linked to subnets, posing a security risk by potentially allowing unauthorized access. It is recommended to regularly review and remove unused and inadequate NACLs to improve security, network performance, and resource management. 
This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To attach an AWS Network Access Control List (NACL) to a subnet, follow these steps: \n\n1. Sign into the AWS console and navigate to the Amazon VPC console. \n2. In the navigation pane, choose 'Network ACLs' under the 'Security' section. \n3. Select the NACL that you want to attach to a subnet. \n4. Choose the 'Actions' button, then select 'Edit subnet associations'. \n5. In the 'Edit subnet associations' dialogue box, select the subnet(s) that you want to associate with the NACL. \n6. Choose 'Save' to apply the changes. \n\nTo delete a non-default AWS Network Access Control List (NACL), follow these steps: \n\n1. Sign into the AWS console and navigate to the Amazon VPC console. \n2. In the navigation pane, choose 'Network ACLs' under the 'Security' section. \n3. Select the NACL that you want to delete. \n4. Choose the 'Actions' button, then select 'Delete network ACL'. \n5. Confirm the deletion when prompted.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-list' AND json.rule = 'name equals default'```,"GCP project is using the default network This policy identifies the projects which have default network configured. It is recommended to use a network configuration based on your security and networking requirements; create your own network and delete the default network. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to VPC network (Left panel)\n3. Click on the reported default network\n4. Click on 'DELETE VPC NETWORK'\n5. Create a new VPC network according to your requirement\nMore info: https://cloud.google.com/vpc/docs/vpc#firewall_rules." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"BikramTest AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." ```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-bedrock-custom-model' as Y; filter ' $.Y.trainingDataConfig.bucketName equals $.X.bucketName'; show X;```,"AWS S3 bucket is utilized for AWS Bedrock Custom model training data This policy identifies the AWS S3 bucket utilized for AWS Bedrock Custom model training job data. S3 buckets store the datasets required for training Custom models in AWS Bedrock. 
Proper configuration and access control are essential to ensure the security and integrity of the training data. Improperly configured S3 buckets used for AWS Bedrock Custom model training data can lead to unauthorized access, data breaches, and potential loss of sensitive information. It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Bedrock Custom model training data and ensure compliance. NOTE: This policy is designed to identify the S3 buckets utilized for training custom models in AWS Bedrock. It does not signify any detected misconfiguration or security risk. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the AWS Bedrock Custom model training job data, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-emr-public-access-block' AND json.rule = blockPublicAccessConfiguration.blockPublicSecurityGroupRules is false```,"AWS EMR Block public access setting disabled This policy identifies AWS EMR which has a disabled block public access setting. AWS EMR block public access prevents a cluster in a public subnet from launching when any security group associated with the cluster has a rule that allows inbound traffic from the internet, unless the port has been specified as an exception. It is recommended to enable AWS EMR Block public access in each AWS Region for your AWS account. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL to configure AWS EMR Block public access:\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-block-public-access.html." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals ""ACTIVE"" and metadata.notebook-upgrade-schedule does not exist```","GCP Vertex AI Workbench user-managed notebook auto-upgrade is disabled This policy identifies GCP Vertex AI Workbench user-managed notebooks that have auto-upgrade disabled. Auto-upgrading Google Cloud Vertex environments ensures timely security updates, bug fixes, and compatibility with APIs and libraries. It reduces security risks associated with outdated software, enhances stability, and enables access to new features and optimizations. It is recommended to enable auto-upgrade to minimize maintenance overhead and mitigate security risks. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Under 'Vertex AI', navigate to the 'Workbench' (Left Panel)\n3. Select 'USER-MANAGED NOTEBOOKS' tab\n4. Click on the reported notebook\n5. Go to 'SYSTEM' tab\n6. Enable 'Environment auto-upgrade'\n7. Configure upgrade schedule as required\n8. Click 'SUBMIT'." 
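The AWS EMR block public access finding in the row above can also be remediated through the EMR API rather than the linked console procedure. A minimal, hedged boto3 sketch follows; keeping SSH (port 22) as the only permitted public exception is an assumption and should follow your own security baseline.

```python
# Illustrative only: enable EMR block public access for one region, with a
# single permitted exception range (port 22) as an example baseline.
import boto3

def enable_emr_block_public_access(region="us-east-1"):
    emr = boto3.client("emr", region_name=region)
    emr.put_block_public_access_configuration(
        BlockPublicAccessConfiguration={
            "BlockPublicSecurityGroupRules": True,
            "PermittedPublicSecurityGroupRuleRanges": [
                {"MinRange": 22, "MaxRange": 22},
            ],
        }
    )
```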
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals ""RUNNABLE"" and deletionProtectionEnabled is false```","GCP SQL database instance deletion protection is disabled This policy identifies GCP SQL database instances that have deletion protection disabled. Enabling instance deletion protection on GCP SQL databases is crucial for preventing accidental data loss, especially in production environments where an unintended deletion could disrupt services and impact business continuity. Deletion protection adds an extra safeguard, requiring intentional action to disable the setting before deletion, helping teams avoid costly downtime and ensuring the availability of essential data. It is recommended to enable deletion protection on GCP SQL database instances to prevent accidental deletion. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'SQL' service\n3. Click on the name of the SQL instance on which alert is generated\n4. Click 'EDIT' at top\n5. Expand 'Data Protection'\n6. Check 'Enable deletion protection'\n7. Click 'Save' at bottom." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-appsync-graphql-api' AND json.rule = wafWebAclArn does not exist```,"AWS AppSync not configured with AWS Web Application Firewall v2 (AWS WAFv2) This policy identifies AWS AppSync which is not configured with AWS Web Application Firewall. As a best practice, enable the AWS WAF service on AppSync to protect against application layer attacks. To block malicious requests to your AppSync, define the block criteria in the WAF web access control list (web ACL). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure AppSync with AWS WAF, follow the below URL:\nhttps://docs.aws.amazon.com/appsync/latest/devguide/WAF-Integration.html." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration equals $.Y.name) and ($.Y.EncryptionConfiguration.EnableInTransitEncryption is false)' ; show X;```,"AWS EMR cluster is not enabled with data encryption in transit This policy identifies AWS EMR clusters which are not enabled with data encryption in transit. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and storage server. Enabling data encryption in-transit helps prevent unauthorized users from reading sensitive data between your EMR clusters and their associated storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1.Login to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. 
In 'Name' box, provide a name for the new EMR security configuration.\n7. Under 'Data in transit encryption', check the box 'Enable in-transit encryption'.\n8. From the dropdown of 'TLS certificate provider' select the appropriate certificate provider type and follow the below link to create them.\n Reference: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n9. Click on 'Create' button.\n10. On the left menu of EMR dashboard Click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n15. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted.\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and properties.httpsOnly equals false'```,"Azure App Service Web app doesn't redirect HTTP to HTTPS Azure Web Apps allows sites to run under both HTTP and HTTPS by default. Web apps can therefore be accessed by anyone using non-secure HTTP links. Non-secure HTTP requests can be restricted and all HTTP requests redirected to the secure HTTPS port. It is recommended to enforce HTTPS-only traffic. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under the Settings section, Click on 'Configuration'\n5. In 'General Settings', under 'Platform settings' Set 'HTTPS Only' to 'On'." "```config from cloud.resource where api.name= 'aws-cloudtrail-describe-trails' AND json.rule = 'isMultiRegionTrail is true and includeGlobalServiceEvents is true' as X; config from cloud.resource where api.name= 'aws-cloudtrail-get-trail-status' AND json.rule = 'status.isLogging equals true' as Y; config from cloud.resource where api.name= 'aws-cloudtrail-get-event-selectors' AND json.rule = '(eventSelectors[*].readWriteType contains All and eventSelectors[*].includeManagementEvents equal ignore case true) or (advancedEventSelectors[*].fieldSelectors[*].equals contains ""Management"" and advancedEventSelectors[*].fieldSelectors[*].field does not contain ""readOnly"" and advancedEventSelectors[*].fieldSelectors[*].field does not contain ""eventSource"")' as Z; filter '($.X.trailARN equals $.Z.trailARN) and ($.X.name equals $.Y.trail)'; show X; count(X) less than 1```","AWS CloudTrail is not enabled with multi trail and not capturing all management events This policy identifies the AWS accounts which do not have a CloudTrail with multi trail enabled and capturing all management events. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. 
It is a compliance and security best practice to turn on CloudTrail across different regions to get a complete audit trail of activities across various services. NOTE: If you have Organization Trail enabled in your account, this policy can be disabled, or alerts generated for this policy on such an account can be ignored; as Organization Trail by default enables trail log for all accounts under that organization. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following link to create/update the trail:\nhttps://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html\n\nRefer to the following link for more info on logging management events:\nLogging management events - AWS CloudTrail." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.diskEncryptionMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with ""ASC Default""))'```","Azure Microsoft Defender for Cloud disk encryption monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have disk encryption monitoring set to disabled. Enabling disk encryption for virtual machines will secure the data by encrypting it. It is recommended to set disk encryption monitoring in Microsoft Defender for Cloud security policy. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = ""configurations.value[?(@.name=='log_retention_days')].properties.value less than 4""```","Azure PostgreSQL database server log retention days is less than or equals to 3 days This policy identifies PostgreSQL database servers which have log retention days less than or equals to 3 days. Enabling log_retention_days helps PostgreSQL database server to Sets number of days a log file is retained which in turn generates query and error logs. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. 
For 'log_retention_days', enter value in range 4-7 (inclusive) and click on 'Save' button.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","Patch 21.11.1 - RLP-83104 - Copy of Critical of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to Get/Read/List bucket operations on your S3 bucket if they can guess the namespace. S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." 
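As a complement to the console ACL and bucket policy edits described for the publicly readable S3 bucket finding above, the bucket-level public access block can be set via the S3 API. The sketch below is a hedged alternative control, not the policy's prescribed fix; verify first that the bucket is not intentionally public (the policy already excludes buckets with a website configuration).

```python
# Illustrative only: enable all four bucket-level public access block settings,
# which covers the ignorePublicAcls / restrictPublicBuckets conditions the
# policy evaluates.
import boto3

def block_bucket_public_access(bucket_name):
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```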
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'nodeConfig.imageType does not exist or nodeConfig.imageType does not start with COS'```,"GCP Kubernetes Engine Clusters not using Container-Optimized OS for Node image This policy identifies Kubernetes Engine Clusters which do not have a container-optimized operating system for node image. Container-Optimized OS is an operating system image for your Compute Engine VMs that is optimized for running Docker containers. By using Container-Optimized OS for node image, you can bring up your Docker containers on Google Cloud Platform quickly, efficiently, and securely. The Container-Optimized OS node image is based on a recent version of the Linux kernel and is optimized to enhance node security. It is also regularly updated with features, security fixes, and patches. The Container-Optimized OS image provides better support, security, and stability than other images. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Under Node Pools, For Node image click on 'Change'\n6. Choose 'Container-Optimized OS (cos)' \n7. Click on Change." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = releaseChannel.channel does not exist```,"GCP Kubernetes Engine cluster not using Release Channel for version management This policy identifies GCP Kubernetes Engine clusters that are not using Release Channel for version management. Subscribing to a specific release channel reduces version management complexity. The Regular release channel upgrades every few weeks and is for production users who need features not yet offered in the Stable channel. These versions have passed internal validation, but don't have enough historical data to guarantee their stability. Known issues generally have known workarounds. The Stable release channel upgrades every few months and is for production users who need stability above all else, and for whom frequent upgrades are too risky. These versions have passed internal validation and have been shown to be stable and reliable in production, based on the observed performance of those clusters. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to gcloud console\n2. Navigate to service 'Kubernetes Engine'\n3. From the list of available clusters, select the reported cluster\n4. Go to the 'Release channel' configuration\n5. To edit, Click on the 'UPGRADE AVAILABLE' or 'Edit release channel'(Whichever available)\n6. In the 'Edit version' pop-up, select the required release channel(Regular Channel/ Stable Channel/ Rapid Channel) from the 'Release channel' dropdown\n7. Click on 'SAVE CHANGES' or 'CHANGE'.\n\nKnow more on Release Channels here: https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels." 
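Both GKE findings above (non-COS node images and missing release channel) can be audited together with the Kubernetes Engine API. The sketch below assumes the google-cloud-container client library; the project ID and the exact field names on the Cluster resource are assumptions that may need adjusting to your client version.

```python
# Illustrative only: flag GKE clusters with no release channel and node pools
# whose image type is not Container-Optimized OS (COS).
from google.cloud import container_v1

def audit_gke(project_id):
    client = container_v1.ClusterManagerClient()
    response = client.list_clusters(request={"parent": f"projects/{project_id}/locations/-"})
    findings = []
    for cluster in response.clusters:
        if not cluster.release_channel.channel:
            findings.append((cluster.name, "no release channel configured"))
        for pool in cluster.node_pools:
            image = (pool.config.image_type or "").upper()
            if not image.startswith("COS"):
                findings.append((cluster.name, f"node pool '{pool.name}' uses image type '{image or 'unset'}'"))
    return findings
```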
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-sql-instances-list' and json.rule = 'settings.userLabels[*] does not exist'```,"GCP SQL Instances without any Label information This policy identifies the SQL DB instance which does not have any Labels. Labels can be used for easy identification and searches. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console.\n2. On left Navigation, Click on SQL\n3. Select the reported SQL instance.\n4. Click on EDIT, Add labels with the appropriate Key:Value information.\n5. Click Save." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = 'networkRuleSet.bypass does not contain AzureServices'```,"Azure Storage Account 'Trusted Microsoft Services' access not enabled This policy identifies Storage Accounts which have 'Trusted Microsoft Services' access not enabled. Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. If the Allow trusted Microsoft services exception is enabled, the following services: Azure Backup, Azure Site Recovery, Azure DevTest Labs, Azure Event Grid, Azure Event Hubs, Azure Networking, Azure Monitor and Azure SQL Data Warehouse (when registered in the subscription), are granted access to the storage account. It is recommended to enable Trusted Microsoft Services on storage account instead of leveraging network rules. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to Storage Accounts dashboard\n3. Select the storage account you need to modify\n4. Under 'Security + networking' section, Click on 'Networking'\n5. Under 'Firewalls and virtual networks' tab, Ensure that 'Enabled from selected virtual networks and IP addresses' is selected.\n6. Under 'Exceptions', Make sure that 'Allow Azure services on the trusted services list to access this storage account' is checked.\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-oss-bucket-info' AND json.rule = 'serverSideEncryptionConfiguration.applyServerSideEncryptionByDefault.ssealgorithm equals None'```,"Alibaba Cloud OSS bucket server-side encryption is disabled This policy identifies Object Storage Service (OSS) buckets which have server-side encryption disabled. As a best practice enable the server-side encryption to improve data security without making changes to your business or applications. OSS encrypts user data when writing the data into the hard disks deployed in the data center and automatically decrypts the data when it is accessed. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Object Storage Service\n3. In the left-side navigation pane, click on the reported bucket\n4. 
In the 'Basic Settings' tab, In the 'Server-side Encryption' Section, Click on 'Configure'\n5. For 'Bucket Encryption' field, Set either 'KMS' or 'AES256' encryption instead of 'None'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-file-storage-file-system' AND json.rule = kmsKeyId is empty```,"OCI File Storage File Systems are not encrypted with a Customer Managed Key (CMK) This policy identifies the OCI File Storage File Systems that are not encrypted with a Customer Managed Key (CMK). It is recommended that File Storage File Systems be encrypted with a Customer Managed Key (CMK); using a CMK provides an additional level of security for your data by allowing you to manage the encryption key lifecycle for the File System. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Assign next to Encryption Key: Oracle managed key.\n5. Select a Vault from the appropriate compartment\n6. Select a Master Encryption Key\n7. Click Assign." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-object-storage-bucket' AND json.rule = (firewall does not exist or (firewall exists and _IPAddress.areAnyOutsideCIDRRange(firewall.allowed_ip,192.168.0.0/16,172.16.0.0/12,10.0.0.0/8) is true))```","IBM Cloud Object Storage bucket is not restricted to Private IP ranges This policy identifies IBM Cloud object storage buckets that are not restricted to private IP ranges or if the cloud object storage firewall is not configured. IBM Cloud Storage Firewall enables users to control access to their stored data by setting up firewall rules and restricting access to authorised IP addresses or ranges, thereby enhancing security and compliance with regulatory standards. Not restricting access via the IBM Cloud Storage Firewall to private IPs increases the risk of unauthorised data access, breaches, and potential compliance violations. It is recommended to add only private IPs to the list of authorised IPs / ranges in bucket firewall policies. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set a list of authorised IP addresses or remove the public IP from the IBM cloud object storage,\n\n1. Log in to the IBM Cloud Console\n2. Click on the menu icon and navigate to 'Resource list'. From the list of resources, select the object storage instance in which the reported bucket resides\n3. Select the bucket to which you want to limit access to authorised IP addresses\n4. Select the 'Firewall (legacy)' dropdown under the 'Permissions' tab\n5. Click on 'Edit' and Click on 'Add' and specify a list of IP addresses from the IBM cloud private IP range in CIDR notation, for example:\n192.168.0.0/16, fe80:021b::0/64. Addresses can follow either IPv4 or IPv6 standards\n6. Click 'Add', or click on the public IP address presented in the IP address tab, and then click 'Delete'\n7. Click 'Save All' to enforce the firewall." 
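The IBM Cloud Object Storage firewall check above hinges on whether any allowed IP entry falls outside the private IPv4 ranges. The logic can be reproduced with the Python standard library alone; the sample allowed_ip list below is an assumption used purely for illustration.

```python
# Illustrative only: report firewall allowed_ip entries that are not fully
# contained in the RFC 1918 private IPv4 ranges used by the policy.
import ipaddress

PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def outside_private_ranges(allowed_ips):
    offenders = []
    for entry in allowed_ips:
        net = ipaddress.ip_network(entry, strict=False)
        # Non-IPv4 entries and public IPv4 ranges both count as outside.
        if net.version != 4 or not any(net.subnet_of(r) for r in PRIVATE_RANGES):
            offenders.append(entry)
    return offenders

# Example: the public range and the IPv6 range are flagged, the private one is not.
print(outside_private_ranges(["192.168.1.0/24", "203.0.113.0/24", "fe80:021b::0/64"]))
```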
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway-waf-policy' AND json.rule = properties.policySettings.state equals Enabled and properties.managedRules.managedRuleSets is not empty and properties.managedRules.managedRuleSets[*].ruleGroupOverrides[*].rules[?any(ruleId equals 944240 and state equals Disabled)] exists and properties.applicationGateways[*] is not empty```,"Azure Application Gateway Web application firewall (WAF) policy rule for Remote Command Execution is disabled This policy identifies Azure Application Gateway Web application firewall (WAF) policies that have the Remote Command Execution rule disabled. It is recommended to define the criteria in the WAF policy with the rule ‘Remote Command Execution (944240)’ under managed rules to help in detecting and mitigating Log4j vulnerability. For details: https://www.microsoft.com/security/blog/2021/12/11/guidance-for-preventing-detecting-and-hunting-for-cve-2021-44228-log4j-2-exploitation/ This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Web Application Firewall policies (WAF)'\n3. Click on the reported Web Application Firewall policies (WAF) policy\n4. Click on the 'Managed rules' from the left panel\n5. Search for '944240' in Managed rule sets and Select rule\n6. Click on 'Enable' to enable the rule\n7. Click on 'Save' to save your changes." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-vpn-gateways-summary' AND json.rule = 'TotalVPNGateways greater than 3'```,"AWS regions nearing VPC Private Gateway limit This policy identifies if your account is near the private gateway limitation per VPC per Region. AWS provides a reasonable starting limitation for the maximum number of Virtual private gateways you can assign in each VPC. If you approach the limit in a particular VPC, this alert indicates that you have nearly exhausted your allocation. NOTE: As per http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html Virtual private gateway per region limit is 5. This policy will trigger an alert if the Virtual private gateway count per region has reached 80% (i.e. 4) of the allocated resource availability limit. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. Click on 'Virtual Private Gateways' (Left Panel)\n5. Choose the Virtual Private Gateway you want to delete, which is not used or required\n6. Click on 'Actions' dropdown\n7. Click on 'Delete Virtual Private Gateway'\nNOTE: If the Virtual Private Gateway is already in use, it cannot be deleted. Make sure the gateway is unassociated before deleting it.\n8. On 'Delete Virtual Private Gateway' popup dialog, Click on 'Yes, Delete'\nNOTE: If existing Virtual Private Gateways are properly associated and you have exhausted your VPC Virtual Private Gateway limit allocation, you can contact AWS for a service limit increase.." 
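The VPC Private Gateway limit policy above compares the per-region gateway count against 80% of the default limit of 5. A hedged boto3 sketch of the same arithmetic is shown below; the region name and the choice to ignore gateways in the 'deleted' state are assumptions.

```python
# Illustrative only: count virtual private gateways in a region and warn when
# the count reaches 80% of the default limit of 5 noted in the policy text.
import boto3

DEFAULT_LIMIT = 5

def vpn_gateway_headroom(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    gateways = ec2.describe_vpn_gateways()["VpnGateways"]
    in_use = [g for g in gateways if g.get("State") != "deleted"]
    if len(in_use) >= DEFAULT_LIMIT * 0.8:
        print(f"{region}: {len(in_use)} virtual private gateways, nearing the limit of {DEFAULT_LIMIT}")
    return len(in_use)
```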
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = 'lastRotatedDate exists and rotationEnabled is true and _DateTime.daysBetween($.lastRotatedDate,today()) > $.rotationRules.automaticallyAfterDays'```","AWS Secrets Manager secret configured with automatic rotation not rotated as scheduled This policy identifies the AWS Secrets Manager secret not rotated successfully based on the rotation schedule. Secrets Manager stores secrets centrally, encrypts them automatically, controls access, and rotates secrets safely. By rotating secrets, you replace long-term secrets with short-term ones, limiting the risk of unauthorized use. If secrets fail to rotate in Secrets Manager, long-term secrets remain in use, increasing the risk of unauthorized access and potential data breaches. It is recommended that proper configuration and monitoring of the rotation process be ensured to mitigate these risks. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: For help diagnosing and fixing common errors related to secrets rotation, refer to the URL:\n\nhttps://docs.aws.amazon.com/secretsmanager/latest/userguide/troubleshoot_rotation.html." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-acl' AND json.rule = rules[?any( source equals ""0.0.0.0/0"" and direction equals ""inbound"" and action equals ""allow"" and ( (protocol equals ""tcp"" and (( destination_port_max greater than 3389 and destination_port_min less than 3389 ) or ( destination_port_max equals 3389 and destination_port_min equals 3389 ))) or protocol equals ""all"" ))] exists```","IBM Cloud VPC ACL allow ingress rule from 0.0.0.0/0 to RDP port This policy identifies IBM Cloud VPC Access Control List which are having ingress rule that allows traffic from 0.0.0.0/0 to RDP port. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. It is recommended to review VPC ACL rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the VPC ACL reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Access Control Lists'\n3. Select the 'Access Control Lists' reported in the alert\n4. Under 'Inbound rules'\n5. Click on three dots on the right corner of a row containing rule that has a port range value of ALL or a port range that includes port 3389 and has a Source of 0.0.0.0/0\n6. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. 
Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(5500,5500)""```","Alibaba Cloud Security group allow internet traffic to VNC Listener port (5500) This policy identifies Security groups that allow inbound traffic on VNC Listener port (5500) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 5500, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and identity.type does not contain UserAssigned```,"Azure Machine Learning workspace not configured with user-assigned managed identity This policy identifies Azure Machine Learning workspaces that are not configured with a user-assigned managed identity. By default, Azure Machine Learning workspaces use system-assigned managed identities to access resources like Azure Container Registry, Key Vault, Storage, and Application Insights. However, user-assigned managed identities offer better control over the identity's lifecycle and consistent access management across multiple resources. Since system-assigned identities are tied to the workspace and deleted if the workspace is removed, using a user-assigned identity allows access management independently, enhancing security and compliance. As a security best practice, it is recommended to configure the Azure Machine Learning workspace with a user-assigned managed identity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Method 1: Updating an Existing Workspace\n1. Once an Azure Machine Learning workspace is created with a System-Managed Identity, you cannot change it to use only a User-Assigned Managed Identity. You can update the workspace to use both System-Managed and User-Assigned Managed Identities.\n2. 
For detailed instructions on how to configure this, visit the following URL: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-identity-based-service-authentication?view=azureml-api-2&tabs=cli#add-a-user-assigned-managed-identity-to-a-workspace-in-addition-to-a-system-assigned-identity\n\nor\n\nMethod 2: Deleting the Existing Workspace and Creating a New Workspace\n1. To use only a User-Assigned Managed Identity, delete the existing workspace. \n2. Create a new Azure Machine Learning workspace. During the setup, select 'User Assigned Identity' under the 'Identity' tab to ensure it exclusively uses a User-Assigned Managed Identity from the start.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-credential-report' AND json.rule = 'user equals """" and mfa_active is false and arn does not contain gov:'```","AWS MFA is not enabled on Root account This policy identifies the root account that does not have MFA enabled. Root accounts have privileged access to all AWS services. Without MFA, if the root credentials are compromised, unauthorized users will get full access to your account. NOTE: This policy does not apply to AWS GovCloud accounts, as you cannot enable an MFA device for the AWS GovCloud (US) account root user. For more details, refer to: https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-console.html This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: 1. Sign in to the 'AWS Console' using Root credentials.\n2. Navigate to the 'IAM' service.\n3. On the dashboard, click on 'Activate MFA on your root account', click on 'Manage MFA' and follow the steps to configure MFA for the root account.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(IPProtocol equals ""all"")] exists```","GCP Firewall with Inbound rule overly permissive to All Traffic This policy identifies GCP Firewall rules which allow inbound traffic on all protocols from the public internet. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the reported Firewall rule indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to VPC Network\n3. Go to the Firewall rules\n4. Click on the reported Firewall rule\n5. Click Edit\n6. Modify Source IP ranges to specific IPs and modify Protocols and ports to specific protocols and ports\n7. Click Save." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case ""Running"" AND kind contains ""functionapp"" AND kind does not contain ""workflowapp"" AND kind does not equal ""app"" AND properties.clientCertEnabled is false```","Azure Function App client certificate is disabled This policy identifies Azure Function Apps which are not set with a client certificate. Client certificates allow for the app to request a certificate for incoming requests. 
Only clients that have a valid certificate will be able to reach the app. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Incoming client certificates', Set 'Client certificate mode' to Require\n6. Click on 'Save'\n\nIf Function App Hosted in Linux using Consumption (Serverless) Plan follow below steps\nAzure CLI Command - \""az functionapp update --set clientCertEnabled=true --name MyFunctionApp --resource-group MyResourceGroup\""." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals StorageAccounts and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud is set to Off for Storage This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has defender setting for Storage is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Storage. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Storage' Select 'On' under Plan.\n8. Select 'Save'." "```config from cloud.resource where api.name = 'azure-vm-list' AND json.rule = ['Extensions'].['Microsoft.PowerShell.DSC'].['settings'].['properties'].['hostPoolName'] exists and powerState contains running as X; config from cloud.resource where api.name = 'azure-disk-list' AND json.rule = provisioningState equal ignore case Succeeded and (encryption.type does not contain ""EncryptionAtRestWithCustomerKey"" or encryption.diskEncryptionSetId does not exist) as Y; filter ' $.X.id equal ignore case $.Y.managedBy '; show Y;```","Azure Virtual Desktop disk encryption not configured with Customer Managed Key (CMK) This policy identifies Azure Virtual Desktop environments where disk encryption is not configured using a Customer Managed Key (CMK). Disk encryption is crucial for protecting data in Azure Virtual Desktop environments. By default, disks may be encrypted with Microsoft-managed keys, which might not meet specific security requirements. Using Customer Managed Keys (CMKs) offers better control over encryption, allowing organizations to manage key rotation, access, and revocation, thereby enhancing data security and compliance. As a best practice, it is recommended to configure disk encryption for Azure Virtual Desktop with a Customer Managed Key (CMK). This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Note: To enable disk encryption on any disks attached to a VM, you must first stop the VM.\n\n1. Log in to Azure Portal and search for 'Disks'.\n2. Select 'Disks'.\n3. Select the reported disk.\n4. Under 'Settings' select 'Encryption'.\n5. For 'Key management', select 'Customer-managed key' from the drop-down list.\n6. For the disk encryption set, select an existing one. If none are available, create a new disk encryption set.\n7. Click on 'Save'.." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-volume' AND json.rule = volume_attachments[*] size greater than 0 and volume_attachments[*].type equals data and encryption equal ignore case provider_managed```,"IBM Cloud data disk is not encrypted with customer managed key This policy identifies IBM Cloud data storage volumes attached to a virtual server instance which are not encrypted with customer managed keys. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A storage volume can be encrypted with customer managed keys only at the time of creation. Please\ncreate a snapshot following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nPlease create a storage volume from the above created snapshot with customer managed encryption:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Virtual server instance'\n3. From the list, click on the name of an instance. The instance must be in a Running state.\n4. On the Instance details page, scroll to the list of Storage volumes and click 'Attach'.\n A side panel opens for you to define the volume attachment.\n5. From the Attach data volume panel, expand the list of Block volumes and select 'Create a data volume'.\n6. Select 'Import from snapshot'. Expand the Snapshot list and select a snapshot.\n7. Optionally, increase the size of the volume within the specified range.\n8. Under the 'Encryption' section, select either 'Key protect' or 'Hyper Protect Crypto Services'.\n9. Under 'Encryption service instance' and 'Key name', select the instance and key to be used for encryption.\n10. Click Save. The side panel closes and messages indicate that the restored volume is being attached to the instance.\n\nPlease delete the reported data disk following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-managing-block-storage&interface=ui#delete." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' and json.rule = osType exists and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of (""EncryptionAtRestWithCustomerKey"",""EncryptionAtRestWithPlatformAndCustomerKeys"",""EncryptionAtRestWithPlatformKey"")```","Azure VM OS disk is not configured with any encryption This policy identifies VM OS disks that are not configured with any encryption. Azure offers Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK] by default for managed disks. 
It is recommended to enable default encryption or you may optionally choose to use a customer-managed key to protect from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'Disks'\n3. Select the reported OS disk you want to modify\n4. Select 'Encryption' under 'Settings'\n5. Select 'Encryption Type' according to your encryption requirement.\n6. Click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND cloud.account = 'jScheel AWS Account' AND api.name = 'aws-ec2-describe-instances' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = ipPermissions[?any( toPort equals 51820 and ipRanges[*] contains ""0/0"" )] exists as Y; config from cloud.resource where api.name = 'aws-ec2-describe-route-tables' AND json.rule = routes[?any( state equals active and gatewayId contains ""igw"" and destinationCidrBlock contains ""0/0"" )] exists as Z; filter ' $.X.securityGroups[*].groupId == $.Y.groupId and $.X.subnetId == $.Z.associations[*].subnetId'; show Z;```","jScheel Wireguard instance allows ANY toPort on 51820 Wireguard instance allows ANY toPort on 51820 This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-describe-mount-targets' AND json.rule = 'fileSystemDescription.encrypted is false'```,"AWS Elastic File System (EFS) with encryption for data at rest is disabled This policy identifies Elastic File Systems (EFSs) for which encryption for data at rest is disabled. It is highly recommended to implement at-rest encryption in order to prevent unauthorized users from reading sensitive data saved to EFSs. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS EFS Encryption of data at rest can only be enabled during file system creation. So to resolve this alert, create a new EFS with encryption enabled, then migrate all required file data from the reported EFS to this newly created EFS and delete reported EFS.\n\nTo create a new EFS with encryption enabled, perform the following:\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Click on the 'Create file system' button\n6. On the 'Create file system' pop-up window, \n7. Click on 'Customize' button to replicate the configurations of alerted file system as required\n8. Ensure 'Enable encryption of data at rest' is selected\n9. On the 'Review and create' step, Review all your setting and click on the 'Create' button\n\nTo delete reported EFS which does not has encryption, perform the following:\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Select the reported file system\n6. Click on 'Delete' button\n7. 
In the 'Delete file system' popup box, To confirm the deletion enter the file system's ID and Click on 'Confirm'." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy buecs This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createuser and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deleteuser and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateuser and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateusercapabilities and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateuserstate) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for user changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for IAM User changes. Monitoring and alerting on changes to IAM User will help in identifying changes to the security posture. It is recommended that a Event Rule and Notification be configured to catch changes made to Identity and Access Management (IAM) Users. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting: User – Create, User – Delete, User – Update, User Capabilities – Update, User State – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-registry' AND json.rule = skuName contains ""Classic""```","Azure Container Registry using the deprecated classic registry This policy identifies an Azure Container Registry (ACR) that is using the classic SKU. The initial release of the Azure Container Registry (ACR) service that was offered as a classic SKU is being deprecated and will be unavailable after April 2019. As a best practice, upgrade your existing classic registry to a managed registry. For more information, visit https://docs.microsoft.com/en-us/azure/container-registry/container-registry-upgrade This is applicable to azure cloud and is considered a low severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Select 'All services' > 'Container Registries'\n3. Select the container registry you need to modify.\n4. Select 'Upgrade to managed registry'.\n5. Select 'OK' to confirm the upgrade.." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or passwordReusePrevention equals null or passwordReusePrevention !isType Integer or passwordReusePrevention < 1'```,"AWS IAM password policy allows password reuse This policy identifies IAM policies which allow password reuse . AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to the 'IAM' service.\n2. Click on 'Account Settings', check 'Prevent password reuse'.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-acl' AND json.rule = rules[?any( source equals ""0.0.0.0/0"" and direction equals ""inbound"" and action equals ""allow"" and ( (protocol equals ""tcp"" and (( destination_port_max greater than 22 and destination_port_min less than 22 ) or ( destination_port_max equals 22 and destination_port_min equals 22 ))) or protocol equals ""all"" ))] exists```","IBM Cloud VPC ACL allow ingress rule from 0.0.0.0/0 to SSH port This policy identifies IBM Cloud VPC Access Control List which are having ingress rule that allows traffic from 0.0.0.0/0 to SSH port. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. It is recommended to review VPC ACL rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the VPC ACL reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Access Control Lists'\n3. Select the 'Access Control Lists' reported in the alert\n4. Under 'Inbound rules'\n5. Click on three dots on the right corner of a row containing rule that has a port range value of ALL or a port range that includes port 22 and has a Source of 0.0.0.0/0\n6. Click on 'Delete'." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-policy' AND json.rule = lifecycleState equals ACTIVE and (statements[*] contains ""to manage all-resources in tenancy"" or statements[*] contains ""to manage all-resources IN TENANCY"") and name does not contain ""Tenant Admin Policy""```","OCI IAM policy with full administrative privileges across the tenancy to non Administrator This policy identifies IAM policies with full administrative privileges across the tenancy to non Administrators. IAM policies are the means by which privileges are granted to users, groups, or services. 
It is recommended to practice the Principle of least privilege, which limits users' access rights to strictly required to do their jobs. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Identity -> Policies\n3. In the compartment dropdown, choose the root compartment. Open the reported policy to view the policy statement.\n4. Remove policy statement that allows any group other than Administrators or any service access to manage all resources in the tenancy.." ```config from cloud.resource where cloud.type='azure' and api.name= 'azure-container-registry' as X; config from cloud.resource where api.name = 'azure-resource-group' as Y; filter ' $.X.resourceGroupName equals $.Y.name and $.Y.isDedicatedContainerRegistryGroup is false' ; show X;```,"Azure Container Registry does not use a dedicated resource group Placing your Azure Container Registry (ACR) in a dedicated Azure resource group, allows you to minimize the risk of accidentally deleting the collection of images in the registry when you delete the container instance resource group. This policy identifies ACRs that reside in resource groups that contains non-ACR resources. For more information about ACR best practices, visit https://docs.microsoft.com/en-us/azure/container-registry/container-registry-best-practices This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remediate this alert, move all non-ACR resources to another resource group. To move resources to another resource group follow below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and config.httpLoggingEnabled exists and config.httpLoggingEnabled is false```,"Azure App service HTTP logging is disabled This policy identifies Azure App services that have HTTP logging disabled. By enabling HTTP logging for your app service, you can collect log information and use it to monitor and troubleshoot your app, as well as identify any potential security issues or threats. This can help to ensure that your app is running smoothly and is secure from potential attacks. As best practice, it is recommended to enable HTTP logging on your app service. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to App Services dashboard\n3. Click on the reported App service\n4. Under the 'Monitoring' menu, click on 'App Service logs'\n5. Under 'Web server logging', select Storage to store logs on blob storage, or File System to store logs on the App Service file system.\n6. In Retention Period (Days), set the number of days the logs should be retained.\n7. Click on 'Save'." 
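The portal steps above for enabling App Service HTTP logging can also be scripted. The snippet below is a minimal sketch, assuming the azure-identity and azure-mgmt-web packages and placeholder subscription, resource group, and app names; verify the exact web_apps configuration methods against your SDK version before relying on it.

```python
# Minimal sketch: enable HTTP (web server) logging on an App Service.
# <subscription-id>, <resource-group>, and <app-name> are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Read the current site configuration, turn on HTTP logging, and write it back.
site_config = client.web_apps.get_configuration("<resource-group>", "<app-name>")
site_config.http_logging_enabled = True
client.web_apps.update_configuration("<resource-group>", "<app-name>", site_config)
```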
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-defender-for-cloud-security-contact' AND json.rule = properties.alertNotifications.state does not equal ignore case ON and properties.alertNotifications.minimalSeverity equal ignore case High```,"Azure 'Notify about alerts with the following severity' is Set to 'High' This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = 'encrypted is false'```,"Alibaba Cloud disk encryption is disabled This policy identifies disks for which encryption is disabled. As a best practice enable disk encryption to improve data security without making changes to your business or applications. Snapshots created from encrypted disks and new disks created from these snapshots are automatically encrypted. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Alibaba Cloud disk can only be encrypted at the time of disk creation. So to resolve this alert, create a new disk with encryption and then migrate all required disk data from the reported disk to this newly created disk.\n\nTo create an Alibaba Cloud disk with encryption:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'\n4. Click on 'Create Disk'\n5. Check the 'Disk Encryption' box in the 'Disk' section\n6. Click on 'Preview Order' make sure parameters are chosen correctly\n7. Click on 'Create', After you create a disk, attach that disk to other resources per your requirements.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-virtual-machine-scale-set' AND json.rule = properties.virtualMachineProfile.diagnosticsProfile.bootDiagnostics.enabled is false```,"Azure Virtual Machine scale sets Boot Diagnostics Disabled This policy identifies Azure Virtual Machines scale sets which has Boot Diagnostics setting Disabled. Boot Diagnostics when enabled for virtual machine, captures Screenshot and Console Output during virtual machine startup. This would help in troubleshooting virtual machine when it enters a non-bootable state. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'All services' from the left pane\n3. Go to 'Compute' under 'Categories'\n4. Select 'Virtual Machine scale sets'\n5. Select the reported virtual machine scale sets\n6. Click on 'Boot Diagnostics' under 'Support + troubleshooting'\n7. Select 'On'\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = (sku.tier equals GeneralPurpose or sku.tier equals MemoryOptimized) and properties.userVisibleState equals Ready and properties.infrastructureEncryption equals Disabled```,"Azure PostgreSQL database server Infrastructure double encryption is disabled This policy identifies PostgreSQL database servers in which Infrastructure double encryption is disabled. 
Infrastructure double encryption adds a second layer of encryption using service-managed keys. It is recommended to enable infrastructure double encryption on PostgreSQL database servers so that encryption can be implemented at the layer closest to the storage device or network wires. For more details: https://docs.microsoft.com/en-us/azure/postgresql/concepts-infrastructure-double-encryption This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Configuring Infrastructure double encryption for Azure Database for PostgreSQL is only allowed during server create. Once the server is provisioned, you cannot change the storage encryption.\n\nTo create an Azure Database for PostgreSQL server with Infrastructure double encryption, follow below URL:\nhttps://docs.microsoft.com/en-us/azure/postgresql/howto-double-encryption\n\nNOTE: Using Infrastructure double encryption will have performance impact on the Azure Database for PostgreSQL server due to the additional encryption process.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = not (pricings[?any(properties.extensions[?any(name equal ignore case FileIntegrityMonitoring AND isEnabled is true)] exists AND properties.pricingTier equal ignore case Standard )] exists)```,"Azure Microsoft Defender for Cloud set to Off for File Integrity Monitoring This policy identifies Azure Microsoft Defender for Cloud where the File Integrity Monitoring is set to Off. File Integrity Monitoring tracks critical system files in Windows and Linux for unauthorized changes, helping to identify potential attacks. Disabling File Integrity Monitoring leaves your system vulnerable to unnoticed alterations, increasing the risk of data breaches or system failures. Enabling FIM enhances security by alerting you to suspicious changes, allowing for proactive threat detection and prevention of unauthorized modifications to system files. As a security best practice, it is recommended to enable File Integrity Monitoring in Azure Microsoft Defender for Cloud. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Microsoft Defender for Cloud'\n3. Under 'Management', select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click on 'Settings & monitoring' at the top\n7. In the table, find 'File Integrity Monitoring' and select 'On' under Plan\n8. Click 'Continue' in the top left\n9. Click 'Save'." 
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains AuthorizeSecurityGroupIngress and $.X.filterPattern contains AuthorizeSecurityGroupEgress and $.X.filterPattern contains RevokeSecurityGroupIngress and $.X.filterPattern contains RevokeSecurityGroupEgress and $.X.filterPattern contains CreateSecurityGroup and $.X.filterPattern contains DeleteSecurityGroup) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for AWS Security group changes This policy identifies the AWS regions that do not have a log metric filter and alarm for security group changes. Security groups act as virtual firewalls that control inbound and outbound traffic to AWS resources. If changes to these groups go unmonitored, it could result in unauthorized access or expose sensitive data to the public internet. It is recommended to create a metric filter and alarm for security group changes to promptly detect and respond to any unauthorized modifications, thereby maintaining the integrity and security of your AWS environment. NOTE: This policy will trigger an alert if you have at least one Cloudtrail with the multi-trail enabled, Logs all management events in your account, and is not set with a specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. In the 'Define Pattern' page, add the 'Filter pattern' value as\n\n{ ($.eventName = AuthorizeSecurityGroupIngress) ||\n($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName =\nRevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) ||\n($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\n\nand Click on 'NEXT'.\n\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review the details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html." 
```config from cloud.resource where api.name = 'azure-frontdoor-waf-policy' AND json.rule = properties.policySettings.enabledState equals Enabled and properties.managedRules.managedRuleSets is not empty and properties.managedRules.managedRuleSets[*].ruleGroupOverrides[*].rules[?any(action equals Block and ruleId equals 944240 and enabledState equals Disabled)] exists as X; config from cloud.resource where api.name = 'azure-frontdoor' AND json.rule = properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink exists and properties.provisioningState equals Succeeded as Y; filter '$.Y.properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink.id contains $.X.name'; show X;```,"Azure Front Door Web application firewall (WAF) policy rule for Remote Command Execution is disabled This policy identifies Azure Front Door Web application firewall (WAF) policies that have the Remote Command Execution rule disabled. It is recommended to enable the rule 'Remote Command Execution (944240)' under managed rules in the WAF policy to help detect and mitigate the Log4j vulnerability. For details: https://www.microsoft.com/security/blog/2021/12/11/guidance-for-preventing-detecting-and-hunting-for-cve-2021-44228-log4j-2-exploitation/ This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Web Application Firewall policies (WAF)'\n3. Click on the reported Web Application Firewall policies (WAF) policy\n4. Click on 'Managed rules' from the left panel\n5. Search for the '944240' rule from the search bar and select the rule\n6. Click on 'Enable' to enable the rule." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.Owner))] exists```,"bobby Copy of AWS SNS topic with cross-account access This policy identifies AWS SNS topics that are configured with cross-account access. Allowing unknown cross-account access to your SNS topics can enable other accounts to gain control over your AWS SNS topics. To prevent unknown cross-account access, allow only trusted entities to access your Amazon SNS topics by implementing the appropriate SNS policies. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. In the Access Policy section, verify that all ARN values in 'Principal' elements are from trusted entities; if not, remove those ARNs from the entry.\n9. Click on 'Save changes'."
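Before editing the access policy in the console, it can help to list which principals a topic currently allows. The boto3 sketch below is a hedged example (the topic ARN is a placeholder); it only prints the Allow statements so untrusted account ARNs can be spotted, and the commented call shows where a cleaned-up policy would be written back.

```python
# Sketch: list the AWS principals granted Allow access in an SNS topic policy.
# <topic-arn> is a placeholder.
import json
import boto3

sns = boto3.client("sns")
attrs = sns.get_topic_attributes(TopicArn="<topic-arn>")
policy = json.loads(attrs["Attributes"]["Policy"])

for statement in policy.get("Statement", []):
    if statement.get("Effect") != "Allow":
        continue
    principal = statement.get("Principal", {})
    print(statement.get("Sid"), principal.get("AWS", principal))

# After removing untrusted ARNs from policy["Statement"], write it back:
# sns.set_topic_attributes(TopicArn="<topic-arn>", AttributeName="Policy",
#                          AttributeValue=json.dumps(policy))
```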
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equals Running and ((config.javaVersion exists and config.javaVersion does not equal 1.8 and config.javaVersion does not equal 11 and config.javaVersion does not equal 17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JAVA and (config.linuxFxVersion contains 8 or config.linuxFxVersion contains 11 or config.linuxFxVersion contains 17) and config.linuxFxVersion does not contain 8-jre8 and config.linuxFxVersion does not contain 11-java11 and config.linuxFxVersion does not contain 17-java17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JBOSSEAP and config.linuxFxVersion does not contain 7-java8 and config.linuxFxVersion does not contain 7-java11 and config.linuxFxVersion does not contain 7-java17) or (config.linuxFxVersion contains TOMCAT and config.linuxFxVersion does not end with 10.0-jre8 and config.linuxFxVersion does not end with 9.0-jre8 and config.linuxFxVersion does not end with 8.5-jre8 and config.linuxFxVersion does not end with 10.0-java11 and config.linuxFxVersion does not end with 9.0-java11 and config.linuxFxVersion does not end with 8.5-java11 and config.linuxFxVersion does not end with 10.0-java17 and config.linuxFxVersion does not end with 9.0-java17 and config.linuxFxVersion does not end with 8.5-java17))'```,"Azure App Service Web app doesn't use latest Java version This policy identifies Azure web apps that don't use the latest Java version. Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes if any. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure console\n2. Go to App Services\n3. Click on the reported App\n4. Under Settings section, Click on Configuration\n5. Select General settings\n6. In Stack settings section, ensure that Stack is set with the latest Java version.\n7. Click on Save." ```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-storage-account-blob-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.id contains $.Y.properties.storageAccountId)'; show X;```,"Azure Storage logging is not Enabled for Blob Service for Read Write and Delete requests This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = status contains In_use and enableAutomatedSnapshotPolicy is false```,"Alibaba Cloud disk automatic snapshot policy is disabled This policy identifies disks which have automatic snapshot policy disabled. As a best practice, enable automatic snapshot policy to prevent irreversible data loss from accidental or malicious operations. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To apply an automatic snapshot policy on the reported disk follow below URL:\nhttps://www.alibabacloud.com/help/doc-detail/25457.htm." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_multi_cloud_child_policies_ss_finding_1 Description-d6a7725e-0ded-439f-b5cb-740eaf1df571 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where resource.status = Active AND api.name = 'oci-compute-instance' AND json.rule = lifecycleState exists```,"Copy of OCI Hosts test - Ali This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and iamPolicy.bindings[?any(members[*] is member of (""allAuthenticatedUsers"",""allUsers""))] exists```","GCP Cloud Function is publicly accessible This policy identifies GCP Cloud Functions that are publicly accessible. Allowing 'allusers' / 'allAuthenticatedUsers' to cloud functions can lead to unauthorised invocations of the functions or unwanted access to sensitive information. It is recommended to follow least privileged access policy and grant access restrictively. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to GCP console\n2. Navigate to service 'Cloud Functions'\n3. Click on the function on which the alert is generated\n4. Go to tab 'PERMISSIONS'\n5. Review the roles to see if 'allusers'/'allAuthenticatedUsers' is present\n6. Click on the delete icon to revoke access from 'allusers'/'allAuthenticatedUsers'\n7. On Pop-up select the check box to confirm \n8. Click on 'REMOVE'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equal ignore case Arm and properties.pricingTier does not equal ignore case Standard)] exists```,"Azure Microsoft Defender for Cloud set to Off for Resource Manager This policy identifies Azure Microsoft Defender for Cloud which has defender setting for Resource Manager (ARM) set to Off. Enabling Azure Defender for ARM provides protection against issues like Suspicious resource management operations, Use of exploitation toolkits, Lateral movement from the Azure management layer to the Azure resources data plane. It is highly recommended to enable Azure Defender for ARM. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Expand 'Select Defender plan' \n7. 
Select 'On' status for 'Resource Manager' \n8. Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (destinationPortRange contains _Port.inRange(22,22) or destinationPortRanges[*] contains _Port.inRange(22,22) ))] exists```","Azure Network Security Group allows all traffic on SSH port 22 This policy identifies Network security groups (NSG) that allow all traffic on SSH port 22. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = vulnerabilityAssessments[*].properties.storageContainerPath exists and vulnerabilityAssessments[*].properties.recurringScans.emails[*] is empty```,"Azure SQL Server ADS Vulnerability Assessment 'Send scan reports to' is not configured This policy identifies Azure SQL Server which has ADS Vulnerability Assessment 'Send scan reports to' not configured. This setting enables ADS - VA scan reports being sent to email ids that are configured at 'Send scan reports to' field. It is recommended to update 'Send scan reports to' with email ids which would help in reducing time required for identifying risks and taking corrective measures. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n5. Specify one or more email ids to 'Send scan reports to' under 'VULNERABILITY ASSESSMENT SETTINGS'\n6. 'Save' your changes." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy ojnou This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
```config from cloud.resource where api.name = 'aws-ecs-service' AND json.rule = launchType equals EC2 as X; config from cloud.resource where api.name = 'aws-ecs-cluster' AND json.rule = status equals ACTIVE and registeredContainerInstancesCount equals 0 as Y; filter '$.X.clusterArn equals $.Y.clusterArn'; show Y;```,"AWS ECS cluster not configured with a registered instance This policy identifies ECS clusters that are not configured with a registered instance. ECS container instance is an Amazon EC2 instance that is running the Amazon ECS container agent and has been registered into an Amazon ECS cluster. It is recommended to remove Idle ECS clusters to reduce the container attack surface or register a new instance for the reported ECS cluster. For details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete the reported idle ECS Cluster follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/delete_cluster.html\n\nTo register a new instance for reported ECS Cluster follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy poumk This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-ec2-ebs-encryption' AND json.rule = ebsEncryptionByDefault is false as X; config from cloud.resource where api.name = 'aws-region' AND json.rule = optInStatus does not equal not-opted-in as Y; filter '$.X.region equals $.Y.regionName'; show X;```,"AWS EBS volume region with encryption is disabled This policy identifies AWS regions in which new EBS volumes are getting created without any encryption. Encrypting data at rest reduces unintentional exposure of data stored in EBS volumes. It is recommended to configure EBS volume at the regional level so that every new EBS volume created in that region will be enabled with encryption by using a provided encryption key. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption at region level by default, follow below URL:\n https://docs.aws.amazon.com/ebs/latest/userguide/work-with-ebs-encr.html#encryption-by-default\n\n Additional Information: \n\n To detect existing EBS volumes that are not encrypted ; refer Saved Search:\n AWS EBS volumes are not encrypted_RL\n\n To detect existing EBS volumes that are not encrypted with CMK, refer Saved Search:\n AWS EBS volume not encrypted using Customer Managed Key_RL." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-nic-list' AND json.rule = ['properties.virtualMachine'].id is not empty and ['properties.enableIPForwarding'] exists and ['properties.enableIPForwarding'] is true```,"Azure Virtual machine NIC has IP forwarding enabled This policy identifies Azure Virtual machine NIC which have IP forwarding enabled. 
IP forwarding on a virtual machine's NIC allows the machine to receive and forward traffic addressed to other destinations. As a best practice, before you enable IP forwarding in a Virtual Machine NIC, review the configuration with your network security team to ensure that it does not allow an attacker to exploit the set up to route packets through the host and compromise your network. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1.Login to Azure Portal\n2.Click on 'All services' on left Navigation\n3.Click on 'Network interfaces' under 'Networking'\n4.Click on reported resource\n5.Click on 'IP configurations' under Settings\n6.Select 'Disabled' for 'IP forwarding'\n7.Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-definition' AND json.rule = properties.type equals ""CustomRole"" and properties.assignableScopes[*] contains ""/"" and properties.permissions[*].actions[*] starts with ""*""```","Azure subscriptions with custom roles are overly permissive This policy identifies azure subscriptions with custom roles are overly permissive. Least privilege access rule should be followed and only necessary privileges should be assigned instead of allowing full administrative access. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Check the usage of the role identified. Verify impact caused by Updating/deleting the role. Then follow below URL for updating or deleting custom role:\nhttps://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles-portal#update-a-custom-role." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-action-trail' AND json.rule = 'status equals Disable and isLogging is false'```,"Alibaba Cloud ActionTrail logging is disabled This policy identifies ActionTrails which have logging disabled. As a best security practice, it is recommended to enable logging, as ActionTrail logs can be used in scenarios as security analysis, resource change tracking, and compliance audit. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to ActionTrail\n3. In the left navigation pane, click on 'Trail List'\n4. Click on reported trail\n5. In the upper right corner of the configuration page, move the slider to the right to start logging for the trail.\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = externalIdentifier contains null and (email does not exist or emailVerified is false)```,"OCI IAM local (non-federated) user account does not have a valid and current email address This policy identifies the OCI Iam local (non-federated) users that do not have valid and current email address configured. It is recommended that OCI Iam local (non-federated) users are configured with valid and current email address to tie the account to identity in your organization. It also allows that user to reset their password if it is forgotten or lost. 
This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login into OCI Console\n2. Select Identity from Services menu\n3. Select Users from Identity menu\n4. Click on the local (non-federated) user reported in the alert\n5. Click on Edit User\n6. Enter a valid and current email address in the EMAIL text box\n7. Click Save Changes." ```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-storage-buckets-list' AND json.rule = logging.logBucket equals $.name```,"GCP storage bucket is logging to itself This policy identifies GCP storage buckets that are sending logs to themselves. When storage buckets use the same bucket to send their access logs, a loop of logs will be created, which is not a security best practice. It is recommended to spin up new and different log buckets for storage bucket logging. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To resolve the alert, a new bucket should be created or an existing bucket other than the alerting bucket itself should be set for logging by following steps in the below-mentioned link.\n\nhttps://cloud.google.com/storage/docs/access-logs#delivery." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-nsg-list' AND json.rule = flowLogsSettings does not exist or flowLogsSettings.enabled is false```,"Azure Network Watcher Network Security Group (NSG) flow logs are disabled This policy identifies Azure Network Security Groups (NSG) for which flow logs are disabled. To perform this check, enable this action on the Azure Service Principal: 'Microsoft.Network/networkWatchers/queryFlowLogStatus/action'. NSG flow logs, a feature of the Network Watcher app, enable you to view information about ingress and egress IP traffic through an NSG. The flow logs include information such as: - Outbound and inbound flows on a per-rule basis. - Network interface to which the flow applies. - 5-tuple information about the flow (source/destination IP, source/destination port, protocol). - Whether the traffic was allowed or denied. As a best practice, enable NSG flow logs to improve network visibility. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Network Watcher Network Security Group (NSG) flow log, follow below URL:\nhttps://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-portal#enable-nsg-flow-log." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals ""ACTIVE"" AND shieldedInstanceConfig.enableIntegrityMonitoring is false```","GCP Vertex AI Workbench Instance has Integrity monitoring disabled This policy identifies GCP Vertex AI Workbench Instances that have Integrity monitoring disabled. Integrity Monitoring continuously monitors the boot integrity, kernel integrity, and persistent data integrity of the underlying VM of the shielded workbench instances. It detects unauthorized modifications or tampering, enhancing security by verifying the trusted state of VM components throughout their lifecycle. 
Integrity monitoring provides active alerts, enabling administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster. It is recommended to enable Integrity Monitoring for Workbench instances to detect and mitigate advanced threats, such as rootkits and bootkit malware. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Enable 'Turn on Integrity Monitoring'\n11. Click on 'Save'\n12. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.supportsHttpsTrafficOnly !exists```,"VenuTestPolicyRem This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-method' AND json.rule = requestValidatorId does not exist ```,"AWS API gateway request parameter is not validated This policy identifies the AWS API gateways for which the request parameters are not validated. When the validation fails, API Gateway fails the request, returns a 400 error response to the caller, and publishes the validation results in CloudWatch Logs. It is recommended to perform basic validation of an API request before proceeding with the integration request to block unvalidated calls to the backend. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS management console\n2. Navigate to 'API Gateway' service\n3. Select the region for which the API gateway is reported.\n4. Find the alerted API by the API gateway ID which is the first part of the reported resource and click on it\n5. Navigate to the reported method\n6. Click on the clickable link of 'Method Request'\n7. Under the 'Settings' section, click on the pencil symbol for the 'Request Validator' field\n8. From the dropdown, select the type of Request Validator as per the requirement\n9. Click on the tick symbol next to it to save the changes\n." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = 'autoProvisioningSettings[*].name equals default and (autoProvisioningSettings[*].properties.autoProvision equals Off or autoProvisioningSettings[*] does not exist)'```,"Azure Microsoft Defender for Cloud automatic provisioning of log Analytics agent for Azure VMs is set to Off This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) for which automatic provisioning of the Log Analytics agent for Azure VMs is set to Off.
Microsoft Defender for Cloud provisions the Microsoft Monitoring Agent on all existing supported Azure virtual machines and any new ones that are created. The Microsoft Monitoring Agent scans for various security-related configurations and events such as system updates, OS vulnerabilities, endpoint protection, and provides alerts. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud' dashboard\n3. Select 'Environment Settings'\n4. Click on the reported subscription name\n5. Select the 'Settings & monitoring'\n6. Set Status 'On' for 'Log Analytics agent/Azure Monitor agent' component\n7. Click on 'Continue'\n8. Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case ""Ready"" and (['sqlServer'].['properties.minimalTlsVersion'] equal ignore case ""None"" or ['sqlServer'].['properties.minimalTlsVersion'] equals ""1.0"" or ['sqlServer'].['properties.minimalTlsVersion'] equals ""1.1"")```","Azure SQL server using insecure TLS version This policy identifies Azure SQL servers which use insecure TLS version. Enforcing TLS connections between database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. As a security best practice, it is recommended to use the latest TLS version for Azure SQL server. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'SQL servers'\n3. Click on the reported SQL server instance you wanted to modify\n4. Navigate to Security -> Networking -> Connectivity\n5. Under 'Encryption in transit' section, Set 'Minimum TLS Version' to 'TLS 1.2' or higher.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.enabled is false```,"AWS KMS Customer Managed Key (CMK) is disabled This policy identifies the AWS KMS Customer Managed Key (CMK) that is disabled. Ensuring that your Amazon Key Management Service (AWS KMS) key is enabled is important because it determines whether the key can be used to perform cryptographic operations. If an AWS KMS Key is disabled, any operations dependent on that key, such as encryption or decryption of data, will fail. This can lead to application downtime, data access issues, and potential data loss if not addressed promptly. It is recommended to enable the AWS KMS Customer Managed Key (CMK) if it is used in the application, to restore cryptographic operations and ensure your applications and services can access encrypted data. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the AWS KMS customer managed keys.\n\n1. Sign in to the AWS Management Console and open the AWS Key Management Service (AWS KMS) console at https://console.aws.amazon.com/kms.\n2. 
To change the AWS Region that the reported resource is presented in, use the Region selector in the upper-right corner of the page.\n3. In the navigation pane, choose 'Customer-managed keys'.\n4. Select the reported CMK and click on the dropdown 'Key Actions'.\n5. Choose the 'Enable' option.." ```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```,"Edited_pwdzvysgyp_ui_auto_policies_tests_name kjbqahijfa_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Y; filter '($.Y.s3BucketName==$.X.bucketName) and ($.X.versioningConfiguration.mfaDeleteEnabled does not exist)'; show X;```,"AWS CloudTrail S3 buckets have not enabled MFA Delete This policy identifies the S3 buckets which do not have Multi-Factor Authentication enabled for CloudTrails. For encryption of log files, CloudTrail defaults to use of S3 server-side encryption (SSE). We recommend adding an additional layer of security by adding MFA Delete to your S3 bucket. This will help to prevent deletion of CloudTrail logs without your explicit authorization. We also encourage you to use a bucket policy that places restrictions on which of your identity access management (IAM) users are allowed to delete S3 objects. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: Enable MFA Delete on the bucket(s) you have configured to receive CloudTrail log files.\nNote: We recommend that you do not configure CloudTrail to write into an S3 bucket that resides in a different AWS account.\nAdditional information on how to do this can be found here:\n http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete." ```config from cloud.resource where api.name = 'azure-synapse-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-synapse-workspace-managed-sql-server-vulnerability-assessments' AND json.rule = properties.recurringScans.isEnabled is false as Y; filter '$.X.name equals $.Y.workspaceName'; show X;```,"Azure Synapse Workspace vulnerability assessment is disabled This policy identifies Azure Synpase workspace which has Vulnerability Assessment setting disabled. Vulnerability Assessment service scans Synapse workspaces for known security vulnerabilities and highlight deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data. It is recommended to enable Vulnerability assessment on Synapse workspaces. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure vulnerability assessment for your existing Azure Synapse workspace, follow below steps:\n\n1. Log in to Azure Portal and Navigate to Azure Synpase Analytics dashboard\n2. Select the reported Synapse Workspace\n3. Under Security, select Microsoft Defender for Cloud\n4. 
Enable Defender for Cloud to configure vulnerability assessment for the selected Azure Synapse Workspace.\n5. To configure vulnerability assessments to automatically run periodic scans, set Periodic recurring scans to On.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kusto-clusters' AND json.rule = properties.state equal ignore case Running and properties.enableDoubleEncryption is false```,"Azure Data Explorer cluster double encryption is disabled This policy identifies Azure Data Explorer clusters in which double encryption is disabled. Double encryption adds a second layer of encryption using service-managed keys. It is recommended to enable infrastructure double encryption on Data Explorer clusters so that encryption can be implemented at the layer closest to the storage device or network wires. For more details: https://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-double This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Enabling double encryption is only possible during cluster creation. Once infrastructure encryption is enabled on your cluster, you can't disable it.\n\nTo create Azure Data Explorer cluster with double encryption, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-double\n\nNOTE: Using infrastructure double encryption will have a performance impact on the Azure Data Explorer cluster due to the additional encryption process.." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles[*] contains ""roles/viewer"" or roles[*] contains ""roles/editor"" or roles[*] contains ""roles/owner"" as X; config from cloud.resource where api.name = 'gcloud-cloud-function-v2' as Y; filter '$.Y.serviceConfig.serviceAccountEmail equals $.X.user'; show Y;```","GCP Cloud Function is granted a basic role This policy identifies GCP Cloud Functions that are granted a basic role. This includes both Cloud Functions v1 and Cloud Functions v2. Basic roles are highly permissive roles that existed before the introduction of IAM and grant wide access over the project to the grantee. The use of basic roles for granting permissions increases the blast radius and could help to escalate privilege further in case the Cloud Function is compromised. Following the principle of least privilege, it is recommended to avoid the use of basic roles. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: It is recommended to follow the principle of least privilege for granting access.\n\nTo update privileges granted to a service account, please refer to the steps below: \n1. Log in to the GCP console\n2. Navigate to the Cloud Functions\n3. Click on the cloud function for which alert is generated\n4. Go to 'DETAILS' tab\n5. Note the service account attached to the cloud function\n6. Navigate to the IAM & ADMIN\n7. Go to IAM tab\n8. Go to 'VIEW BY PRINCIPALS' tab\n9. Find the previously noted service account and click on 'Edit principal' button (pencil icon)\n10. Remove any binding to any basic role (roles/viewer or roles/editor or roles/owner)\n11. Click 'SAVE'.."
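If the console steps above for removing basic roles from the Cloud Function's service account are scripted instead, a minimal gcloud sketch might look like the following; the project ID and service account email are hypothetical placeholders taken from the alerted function's details, and the command should be repeated for each basic role actually bound.

```bash
# Hypothetical values; substitute the project and runtime service account from the alert details
PROJECT_ID="my-project"
SA_EMAIL="function-runtime@my-project.iam.gserviceaccount.com"

# Remove one basic-role binding; repeat for roles/viewer, roles/editor, roles/owner as applicable
gcloud projects remove-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/editor"
```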
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule= 'publicContainersList[*] contains insights-operational-logs and (totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist))' as X; config from cloud.resource where api.name = 'azure-monitor-log-profiles-list' as Y; filter '$.X.id contains $.Y.properties.storageAccountId'; show X;```,"Azure Storage account container storing activity logs is publicly accessible This policy identifies the Storage account containers containing the activity log export is publicly accessible. Allowing public access to activity log content may aid an adversary in identifying weaknesses in the affected account's use or configuration. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Storage accounts'\n3. Select the reported storage account\n4. Under 'Data storage' section, Select 'Containers'\n5. Select the container named as 'insight-operational-logs'\n6. Click on 'Change access level'\n7. Set 'Public access level' to 'Private (no anonymous access)'\n8. Click on 'OK'." "```config from cloud.resource where api.name = 'aws-rds-describe-db-instances' and json.rule = storageEncrypted is true as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equals CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals ""null"") as Y; filter '($.X.kmsKeyId equals $.Y.key.keyArn)'; show X;```","AWS RDS database instance encrypted with Customer Managed Key (CMK) is not enabled for regular rotation This policy identifies Amazon RDS instances that use Customer Managed Keys (CMKs) for encryption but are not enabled with key rotation. Amazon RDS instance encryption key rotation failure can result in prolonged exposure of sensitive data and potential compliance violations. As a security best practice, it is important to periodically rotate these keys. This ensures that if the keys are compromised, the data in the underlying service remains secure with the new keys. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to enable the automatic rotation of the KMS key used by the RDS instance\n\n1. Log in to the AWS console.\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n4. Navigate to the 'RDS' service.\n5. Select the RDS instance reported in the alert, and click on the 'Configuration' tab.\n6. Under the 'Storage' section, click on the KMS key link in 'AWS KMS key'.\n7. Under the 'Key rotation' tab on the navigated KMS key window, enable the 'Automatically rotate this CMK every year' check box.\n8. Click on Save.." 
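As an alternative to the console steps above for the RDS instance's encryption key, automatic rotation can also be switched on from the AWS CLI; this is a minimal sketch and the key ARN is a hypothetical placeholder.

```bash
# Hypothetical key ARN copied from the RDS instance's 'AWS KMS key' configuration
KMS_KEY_ID="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

# Enable automatic rotation for the customer managed key
aws kms enable-key-rotation --key-id "$KMS_KEY_ID"

# Verify that rotation is now enabled
aws kms get-key-rotation-status --key-id "$KMS_KEY_ID"
```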
"```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(3306,3306)""```","Alibaba Cloud Security group allow internet traffic to MySQL port (3306) This policy identifies Security groups that allow inbound traffic on MySQL port (3306) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 3306, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND cloud.account = 'jScheel AWS Account' AND api.name = 'aws-route53-domain' AND json.rule = dnssecKeys[*] is empty```,"jScheel AWS Route53 domain configured without DNSSEC List of AWS Route53 domains configured without DNSSEC. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: https://aws.amazon.com/blogs/networking-and-content-delivery/configuring-dnssec-signing-and-validation-with-amazon-route-53/." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instance-template' AND json.rule = properties.canIpForward is true and (name does not start with ""gke-"" or (name starts with ""gke-"" and properties.disks[*].initializeParams.labels does not exist) )```","GCP VM instance template with IP forwarding enabled This policy identifies VM instance templates that have IP forwarding enabled. IP Forwarding could open unintended and undesirable communication paths and allows VM instances to send and receive packets with the non-matching destination or source IPs. To enable source and destination IP match check, disable the IP Forwarding. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP VM instance templates are used to create VM instances based on a preexisting configuration. GCP VM instance templates IP forwarding feature cannot be updated. After an instance template is created, the IP forwarding field becomes read-only. So to fix this alert, Create a new VM instance template with IP forwarding disabled, migrate all required data from the reported template to the newly created one, and delete the reported VM instance template.\n\nTo create a new VM Instance template with IP forwarding disabled:\n1. Login to GCP Portal\n2. Go to 'Computer Engine' (Left Panel)\n3. Go to 'Instance templates'\n4. Click on 'CREATE INSTANCE TEMPLATE'\n5. Specify the mandatory parameters as required\n6. Click 'Management, security, disk, networking, sole tenancy'\n7. 
Click 'Networking'\n8. Click on the specific Network interfaces\n9. Set 'IP forwarding' to 'Off'\n10. Click on 'Create' button\n\nTo Delete VM instance template which has IP forwarding enabled:\n1. Login to GCP Portal\n2. Go to Computer Engine (Left Panel)\n3. Go to 'Instance templates'\n4. From the list, choose the reported templates\n5. Click on the 'Delete' button." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case ""Running"" AND kind contains ""functionapp"" AND kind does not contain ""workflowapp"" AND kind does not equal ""app"" AND config.http20Enabled is false```","Azure Function App doesn't use HTTP 2.0 This policy identifies Azure Function App which doesn't use HTTP 2.0. HTTP 2.0 has additional performance improvements on the head-of-line blocking problem of old HTTP version, header compression, and prioritisation of requests. HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Platform settings', Set 'HTTP version' to '2.0'\n6. Click on 'Save'\n\nIf Function App Hosted in Linux using Consumption (Serverless) Plan follow below steps\nAzure CLI Command - \""az functionapp config set --http20-enable true --name MyFunctionApp --resource-group MyResourceGroup\""." ```config from cloud.resource where api.name = 'gcloud-compute-backend-bucket' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' not (Y.name intersects X.bucketName) '; show X;```,"bobby gcp policy This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.policies.exportPolicy.status contains enabled or properties.publicNetworkAccess contains enabled)```,"Azure Container Registry with exports enabled This policy identifies Azure Container Registries with exports enabled. Azure Container Registries with exports enabled allows data in the registry to be moved out using commands like acr import or acr transfer. Export functionality can expose registry data, increasing the risk of unauthorized data movement. Disabling exports ensures that data in a registry is accessed only via the dataplane (e.g., docker pull) and cannot be moved out using other methods. As a security best practice, it is recommended to disable export configuration for Azure Container Registries. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Note: To remediate the alert, ensure the registry is on the Premium service tier, disable public network access to turn off exports (supported only for managed registries in Premium SKU), and use the provided az command as this setting cannot be changed through the UI.\n\nCLI command: az acr update --name ${registryName} --allow-exports false --public-network-enabled false." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="") and ($.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="") and $.X.filter contains ""gce_route"" and ($.X.filter contains ""jsonPayload.event_subtype="" or $.X.filter contains ""jsonPayload.event_subtype ="") and ($.X.filter does not contain ""jsonPayload.event_subtype!="" and $.X.filter does not contain ""jsonPayload.event_subtype !="") and $.X.filter contains ""compute.routes.delete"" and $.X.filter contains ""compute.routes.insert""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for VPC network route changes This policy identifies the GCP account which does not have a log metric filter and alert for VPC network route changes. Monitoring network routes deletion and insertion activities will help in identifying VPC traffic flows through an expected path. It is recommended to create a metric filter and alarm to detect activities related to the deletion and insertion of VPC network routes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""gce_route"" AND jsonPayload.event_subtype=""compute.routes.delete"" OR jsonPayload.event_subtype=""compute.routes.insert""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." 
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ""acl.grants[?(@.grantee.typeIdentifier=='id')].grantee.identifier size > 0 and acl.grants[?(@.grantee.typeIdentifier=='id')].grantee.identifier does not contain c4c1ede66af53448b93c283ce9448c4ba468c9432aa01d700d3878632f77d2d0 and _AWSCloudAccount.isRedLockMonitored(acl.grants[?(@.grantee.typeIdentifier=='id')].grantee.identifier) is false""```","AWS S3 bucket accessible to unmonitored cloud accounts This policy identifies those S3 buckets which have either the read/write permission opened up for Cloud Accounts which are NOT part of Cloud Accounts monitored by Prisma Cloud. These accounts with read/write privileges should be reviewed and confirmed that these are valid accounts of your organization (or authorised by your organization) and are not active under Prisma Cloud monitoring. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the reported S3 bucket\n4. Click on the 'Permissions' tab\n5. Navigate to the 'Access control list (ACL)' section and Click on the 'Edit'\n6. Under 'Access for other AWS accounts', Add the Cloud Accounts that are monitored by Prisma Cloud\n7. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and sku.tier does not equal ignore case Basic and properties.publicNetworkAccess equal ignore case Enabled```,"Azure PostgreSQL database server deny public network access setting is not set This policy identifies Azure PostgreSQL database servers that have Deny public network access setting is not set. When 'Deny public network access' is set to yes, only private endpoint connections will be allowed to access this resource. It is highly recommended to set Deny public network access setting to Yes, which would allow PostgreSQL database server to be accessed only through private endpoints. Note: This feature is available in all Azure regions where Azure Database for PostgreSQL - Single server supports General Purpose and Memory Optimized pricing tiers. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for PostgreSQL servers'\n3. Click on the reported PostgreSQL server instance you want to modify \n4. Select 'Connection security' under 'Settings' from left panel \n5. For 'Deny public network access' ensure 'Deny public network access' is set to 'Yes'\n6. Click on 'Save'\n\nNote: When 'Deny public network access' is set to yes, only private endpoint connections will be allowed to access this resource.." 
```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = scheme equals internet-facing and type equals application as X; config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = (webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.resources.applicationLoadBalancer[*] contains $.X.loadBalancerArn'; show X;```,"AWS ALB attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS Application Load Balancer (ALB) attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, Application Load Balancer (ALB) attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer below URL, https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to EC2 Dashboard, and select 'Load Balancers'\n3. Make sure your reported Application Load Balancer requires WAF based on your requirement and Note down the load balancer name\n4. Navigate to WAF & Shield Service\n5. Go to the Web ACL associated to the reported Application Load Balancer\n6. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n7. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n8. Click on 'Add rules'." ```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.allowCrossTenantReplication exists and properties.allowCrossTenantReplication is true```,"Azure 'Cross Tenant Replication' is enabled This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = settings[?any( name equals WDATP and properties.enabled is false )] exists```,"Azure Microsoft Defender for Cloud WDATP integration Disabled This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has Microsoft Defender for Endpoint (WDATP) integration disabled. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for WDATP. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. 
Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Integrations'\n6. Check/Enable option 'Allow Microsoft Defender for Endpoint to access my data'\n7. Select 'Save'." "```config from cloud.resource where api.name = 'aws-sqs-get-queue-attributes' AND json.rule = attributes.SqsManagedSseEnabled equals ""false"" and attributes.KmsMasterKeyId does not exist```","RomanTest - Ensure SQS service is encrypted at-rest This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultCacheBehavior.viewerProtocolPolicy contains ""allow-all"" or cacheBehaviors.items[?any( viewerProtocolPolicy contains ""allow-all"" )] exists```","AWS CloudFront viewer protocol policy is not configured with HTTPS For web distributions, you can configure CloudFront to require that viewers use HTTPS to request your objects, so connections are encrypted when CloudFront communicates with viewers. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Configure CloudFront to require HTTPS between viewers and CloudFront.\n\n1. Go to the AWS console CloudFront dashboard.\n2. Select your distribution Id.\n3. Select the 'Behaviors' tab.\n4. Check the behavior you want to modify then select Edit.\n5. Choose 'HTTPS Only' or 'Redirect HTTP to HTTPS' for Viewer Protocol Policy.\n6. Select 'Yes, Edit.'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-glue-connection' AND json.rule = ((connectionType equals KAFKA and connectionProperties.KAFKA_SSL_ENABLED is false) or (connectionType does not equal KAFKA and connectionProperties.JDBC_ENFORCE_SSL is false)) and connectionType does not equal ""NETWORK""```","AWS Glue connection do not have SSL configured This policy identifies the Glue connections that are not configured with SSL to encrypt connections. It is recommended to use an SSL connection with hostname matching is enforced for the DB connection on the client; enforcing SSL connections help protect against 'man in the middle' attacks by encrypting the data stream between connections. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to AWS Glue service\n4. Click on 'Connections', Click on the reported Connection\n5. Click on 'Edit'\n6. On the 'Edit connection' page, Select 'Require SSL connection'\n7. Click on 'Next'\n8. Click on 'Finish'." 
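For the Glue connection above, the same change can be sketched with the AWS CLI; note that update-connection replaces the whole connection definition, so the existing properties returned by get-connection need to be carried over, and every value below is a hypothetical placeholder.

```bash
# Hypothetical connection name from the alert
CONN_NAME="my-jdbc-connection"

# Inspect the current definition so the update below can reuse its properties
aws glue get-connection --name "$CONN_NAME"

# Re-submit the connection with SSL enforcement turned on
# (the URL and other properties are placeholders; copy the real ones from the get-connection output)
aws glue update-connection --name "$CONN_NAME" --connection-input '{
  "Name": "my-jdbc-connection",
  "ConnectionType": "JDBC",
  "ConnectionProperties": {
    "JDBC_CONNECTION_URL": "jdbc:postgresql://example-host:5432/mydb",
    "JDBC_ENFORCE_SSL": "true"
  }
}'
```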
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-network-acls' AND json.rule = ""entries[?any(egress equals false and ((protocol equals 6 and ((portRange.to equals 22 or portRange.to equals 3389 or portRange.from equals 22 or portRange.from equals 3389) or (portRange.to > 22 and portRange.from < 22) or (portRange.to > 3389 and portRange.from < 3389))) or protocol equals -1) and (cidrBlock equals 0.0.0.0/0 or ipv6CidrBlock equals ::/0) and ruleAction equals allow)] exists""```","AWS Network ACLs allow ingress traffic on Admin ports 22/3389 This policy identifies the AWS Network Access Control List (NACL) which has a rule to allow ingress traffic to server administration ports. AWS NACL provides filtering of ingress and egress network traffic to AWS resources. Allowing ingress traffic on admin ports 22 (SSH) and 3389 (RDP) via AWS Network ACLs increases the vulnerability of EC2 instances and other network resources to unauthorized access and cyberattacks. It is recommended that no NACL allows unrestricted ingress access to server administration ports, such as SSH port 22 and RDP port 3389. NOTE: This policy may report NACLs, which include the deny policy in the rule set. Make sure while remediating the rule set does not consist of the Allow and Deny rule set together; which leads to overlap on each ruleset. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the AWS Network Access Control List perform the following actions:\n1. Sign into the AWS console and navigate to the Amazon VPC console. \n2. In the navigation pane, choose 'Network ACLs' under the 'Security' section.\n3. Select the reported Network ACL\n4. Click on 'Actions' and select 'Edit inbound rules'\n5. Click on Delete towards the right of rule which has source '0.0.0.0/0' or '::/0' and shows 'ALLOW and 'Port Range' which includes port 22 or 3389 or 'Port Range' shows 'ALL'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and properties.state equal ignore case running and properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled and config.ipSecurityRestrictions[?any(action equals Allow and ipAddress equals Any)] exists'```,"Azure App Service web apps with public network access This policy identifies Azure App Service web apps that are configured with public network access. Publicly accessible web apps could allow malicious actors to remotely exploit if any vulnerabilities and could. It is recommended to configure the App Service web apps with private endpoints so that the web apps hosted are accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict App Service network access, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions." 
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = properties.logs[?any((enabled is true and category equals Administrative))] exists and properties.logs[?any((enabled is true and category equals Alert))] exists and properties.logs[?any((enabled is true and category equals Policy))] exists and properties.logs[?any((enabled is true and category equals Security))] exists as X; count(X) less than 1```,"Azure Monitor Diagnostic Setting does not captures appropriate categories This policy identifies Azure Monitor Diagnostic Setting which does not captures appropriate categories. Capturing appropriate diagnostic setting categories allows proper alerting. It is recommended to select Administrative, Alert, Policy, and Security diagnostic setting categories. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to 'Monitor' and select 'Activity log'\n3. Click on 'Diagnostic settings' in top pane\n4. Select 'Add diagnostic setting' if no 'Diagnostic settings' present\nOR\nClick on 'Edit setting' for the existing 'Diagnostic settings'\n5. Under 'Category details', select 'Administrative', 'Alert', 'Policy', and 'Security' for 'log'\n6. Add 'Destination details' and other required fields\n7. Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.scopes[*] does not contain resourceGroups and properties.enabled equals true and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Authorization/policyAssignments/delete"" as X; count(X) less than 1```","Azure Activity log alert for delete policy assignment does not exist This policy identifies the Azure accounts in which activity log alert for Delete policy assignment does not exist. Creating an activity log alert for Delete policy assignment gives insight into changes done in azure policy - assignments and may reduce the time it takes to detect unsolicited changes. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete policy assignment (Microsoft.Authorization/policyAssignments)' and Other fields you can set based on your custom settings.\n6. Click on Create." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```,"Azure MySQL Database Server using insecure TLS version This policy identifies Azure MySQL Database Servers which are using insecure TLS version. As a security best practice, use the newer TLS version as the minimum TLS version for Azure MySQL Database Server. Currently, Azure MySQL Database Server supports TLS 1.2 version which resolves the security gap from its preceding versions. 
https://docs.microsoft.com/en-gb/azure/mysql/howto-tls-configurations This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure TLS settings on reported Azure MySQL Database Server, follow the below-mentioned URL:\nhttps://docs.microsoft.com/en-gb/azure/mysql/howto-tls-configurations." ```config from cloud.resource where api.name = 'aws-describe-mount-targets' AND json.rule = fileSystemDescription.encrypted is true as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '$.X.fileSystemDescription.kmsKeyId equals $.Y.key.keyArn'; show X;```,"AWS Elastic File System (EFS) not encrypted using Customer Managed Key This policy identifies Elastic File Systems (EFSs) which are encrypted with default KMS keys and not with Keys managed by Customer. It is a best practice to use customer managed KMS Keys to encrypt your EFS data. It gives you full control over the encrypted data. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS EFS Encryption of data at rest can only be enabled during file system creation. So to resolve this alert, create a new EFS with encryption enabled with the customer-managed key, then migrate all required data from the reported EFS to this newly created EFS and delete reported EFS.\n\nTo create new EFS with encryption enabled, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Click on 'Create file system' button\n6. On the 'Configure file system access' step, specify EFS details as per your requirements and Click on 'Next Step'\n7. On the 'Configure optional settings' step, Under 'Enable encryption' Choose 'Enable encryption of data at rest' and Select customer managed key [i.e. Other than (default)aws/elasticfilesystem] from 'Select KMS master key' dropdown list along with other parameters and Click on 'Next Step'\n8. On the 'Review and create' step, Review all your setting and Click on 'Create File System' button\n\nTo delete reported EFS which does not has encryption, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Select the reported file system\n6. Click on 'Actions' drop-down\n7. Click on 'Delete file system'\n8. In the 'Permanently delete file system' popup box, To confirm the deletion enter the file system's ID and Click on 'Delete File System'." 
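The first part of the EFS remediation above, creating a replacement file system encrypted with a customer managed key, can be sketched with the AWS CLI as follows; the key ARN and tag value are hypothetical placeholders, and mount targets plus data migration still need to be handled afterwards.

```bash
# Hypothetical customer managed key ARN; creates the new encrypted file system to migrate data into
aws efs create-file-system \
  --encrypted \
  --kms-key-id "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID" \
  --tags Key=Name,Value=my-encrypted-efs
```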
"```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-object-storage-bucket' AND json.rule = metrics_monitoring does not exist or metrics_monitoring.request_metrics_enabled does not equal ignore case ""true"" or metrics_monitoring.usage_metrics_enabled does not equal ignore case ""true""```","IBM Cloud Object Storage bucket is not enabled with IBM Cloud Monitoring This policy identifies IBM Cloud Object Storage buckets which have Monitoring disabled or not enabled properly. Use IBM Cloud Monitoring to gain operational visibility into the performance and health of your applications, services, and platforms. So, it is recommended to enable Monitoring to monitor all usage/request metrics of a bucket. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list'. From the list of resources, select the object storage instance in which the reported bucket resides.\n3. Select the bucket and click on 'Configuration' tab.\n4. Navigate to 'Monitoring', click on 'Create' button if it is not enabled already.\n5. If already enabled, click on three dots and click 'Edit'.\n6. Select 'Usage Metrics' and 'Request Metrics' checkboxes to get all metrics.\n7. Click on 'Save'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-db-list' AND json.rule = 'securityAlertPolicy does not exist or securityAlertPolicy[*] is empty or (securityAlertPolicy.properties.state equals Enabled and securityAlertPolicy.properties.emailAccountAdmins equals Disabled)'```,"Azure SQL Databases with disabled Email service and co-administrators for Threat Detection This policy identifies Azure SQL Databases which have ADS Vulnerability Assessment 'Also send email notifications to admins and subscription owners' not configured. This setting enables ADS - VA scan reports being sent to admins and subscription owners. It is recommended to enable 'Also send email notifications to admins and subscription owners' setting, which would help in reducing time required for identifying risks and taking corrective measures. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to SQL databases (Left Panel)\n3. Choose the reported each DB instance\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n6. In 'VULNERABILITY ASSESSMENT SETTINGS' section, Ensure 'Also send email notifications to admins and subscription owners' is checked\n7. 'Save' your changes." "```config from cloud.resource where api.name = 'aws-docdb-db-cluster-parameter-group' AND json.rule = parameters.tls.ParameterValue equals ""disabled"" as X; config from cloud.resource where api.name = 'aws-docdb-db-cluster' AND json.rule = Status equals available as Y; filter '$.X.DBClusterParameterGroupName equals $.Y.DBClusterParameterGroup'; show Y;```","AWS DocumentDB Cluster is not enabled with data encryption in transit This policy identifies Amazon DocumentDB Clusters for which data encryption in transit is disabled. Each DocumentDB Cluster is associated with a Cluster Parameter Group. 
It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and the cluster. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To modify the Parameter group\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Amazon DocumentDB Dashboard\n4. Click on the 'Parameter groups' (Left panel)\n5. Select the db cluster parameter group which is associated with the DocumentDB cluster on which the alert is generated\n6. Select the \""tls\"" parameter\n7. Click on \""Edit\"" button\n8. Set value to \""enabled\""\n9. Click on \""Modify cluster parameter\"" button\n\nTo restart the Document DB cluster\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Amazon DocumentDB Dashboard\n4. Click on the 'Clusters' (Left panel)\n5. Select the db cluster parameter group which is associated with the DocumentDB cluster on which the alert is generated, and choose the button to the left of its name.\n6. Choose \""Actions\"", and then \""Reboot\"".\n7. Click on \""Reboot\"" button.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-waf-v2-rule-group' AND json.rule = VisibilityConfig.CloudWatchMetricsEnabled is false or Rules[?any( VisibilityConfig.CloudWatchMetricsEnabled is false)] exists```,"AWS WAF Rule Group CloudWatch metrics disabled This policy identifies the AWS WAF Rule Group having CloudWatch metrics disabled. AWS WAF rule groups have CloudWatch metrics that provide information about the number of allowed and blocked web requests, counted requests, and requests that pass through without matching any rule in the rule group. These metrics can be used to monitor and analyse the performance of the web access control list (web ACL) and its associated rules. It is recommended to enable CloudWatch metrics for a WAF rule group to help in monitoring and analysis of web requests. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable RuleGroup with CloudWatch metrics please follow below steps:\n\n1. Run the below command to get the ruleGroup details to be used for update\n aws wafv2 list-rule-groups --scope {scopeOfRuleGroup}\n2. Get the ruleGroup 'Id' and 'LockToken' values for the ruleGroup to be updated from the output.\n3. Run the below command with name and 'Id' obtained from above output\n aws wafv2 get-rule-group --name {ruleGroupName} --scope {scopeOfRuleGroup} --id {IdFromAboveOutput}\n4. Get the 'Rules' block output and save it in a file for further reference from above command output\n5. Please update 'CloudWatchMetricsEnabled' field to true for every rule in the file saved from above along with providing a metric name at the 'MetricName' field\n6. Run the below command to enable CloudWatch metrics on the ruleGroup.\n aws wafv2 update-rule-group \n --name {ruleGroupName} \n --scope {scopeOfRuleGroup} \n --id {ruleGroupId} \n --lock-token {tokenFromAboveOutput} \n --rules file://{fileFromAboveOutput}\n --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName= \n {metricNameForRuleGroup)." 
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-vpn-connections-summary' AND json.rule = 'vpnConnectionsSummary[*].vpnConnectionsCount greater than 7'```,"AWS regions nearing VPC Private Gateway IPSec Limit This policy identifies if your account is near the private gateway IPSec limitation per VPC per Region. AWS provides a reasonable starting limitation for the maximum number of VPC Private Gateway IPSec connections you can assign in each VPC. If you approach the limit in a particular VPC, this alert indicates that you have nearly exhausted your allocation. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to VPC Dashboard\n4. Click on 'Site-to-Site VPN Connections' (Left Panel)\n5. Choose the VPN connection you want to delete, which is no longer used or required\n6. Click on 'Actions' dropdown\n7. Click on 'Delete'\n8. On 'Delete' popup dialog, Click on 'Delete'\nNOTE: If the existing VPN Connections are properly associated and you have exhausted your VPC Site-to-Site VPN Connections limit allocation, you can contact AWS for a service limit increase.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-iam-identity-account-setting' AND json.rule = restrict_create_service_id does not equal ""RESTRICTED"" ```","IBM Cloud Service ID creation is not restricted in account settings This policy identifies IBM cloud accounts where Service ID creation is not restricted in account settings. By default, all members of an account can create service IDs. Enabling the Service ID creation setting will restrict users from creating service IDs unless the correct access is granted explicitly. It is recommended to enable the Service ID creation setting and grant access only on a need basis. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable the restriction to create service IDs:\n\nhttps://cloud.ibm.com/docs/account?topic=account-restrict-service-id-create&interface=ui#enable-restrict-create-serviceid-ui." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.addonProfiles.omsagent.config does not exist or properties.addonProfiles.omsagent.enabled is false```,"Azure AKS cluster monitoring not enabled Azure Monitor for containers is a feature designed to monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications. This policy checks your AKS cluster monitoring add-on setting and alerts if no configuration is found, or monitoring is disabled. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable monitoring for your AKS cluster, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/monitor-aks#configure-monitoring."
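For the AKS monitoring finding above, the add-on can also be enabled from the Azure CLI; this is a sketch with hypothetical cluster, resource group, and Log Analytics workspace identifiers, and omitting --workspace-resource-id lets Azure create a default workspace instead.

```bash
# Hypothetical cluster, resource group, and Log Analytics workspace for the reported AKS cluster
az aks enable-addons \
  --addons monitoring \
  --name my-aks-cluster \
  --resource-group my-resource-group \
  --workspace-resource-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
```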
```config from cloud.resource where cloud.type = 'AWS' and api.name = 'aws-ec2-describe-subnets' AND json.rule = 'mapPublicIpOnLaunch is true'```,"AWS VPC subnets should not allow automatic public IP assignment This policy identifies VPC subnets which allow automatic public IP assignment. VPC subnet is a part of the VPC having its own rules for traffic. Assigning the Public IP to the subnet automatically (on launch) can accidentally expose the instances within this subnet to the internet and should be edited to 'No' post creation of the Subnet. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to the 'VPC' service.\n4. In the navigation pane, click on 'Subnets'.\n5. Select the identified Subnet and choose the option 'Modify auto-assign IP settings' under the Subnet Actions.\n6. Disable the 'Auto-Assign IP' option and save it.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'logging.enabled is false and logging.bucket is empty'```,"AWS CloudFront distribution with access logging disabled This policy identifies CloudFront distributions which have access logging disabled. Enabling access logging on distributions creates log files that contain detailed information about every user request that CloudFront receives. Access logs are available for web distributions. If you enable logging, you can also specify the Amazon S3 bucket that you want CloudFront to save files in. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to CloudFront Distributions Dashboard\n4. Click on the reported distribution\n5. On 'General' tab, Click on 'Edit' button\n6. On 'Edit Distribution' page, Set 'Logging' to 'On', choose a 'Bucket for Logs' and 'Log Prefix' as desired\n7. Click on 'Yes, Edit'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = ""configurations.value[?(@.name=='log_disconnections')].properties.value equals OFF or configurations.value[?(@.name=='log_disconnections')].properties.value equals off""```","Azure PostgreSQL database server with log disconnections parameter disabled This policy identifies PostgreSQL database servers for which the log_disconnections server parameter is not enabled. Enabling log_disconnections makes the PostgreSQL database log the end of each session, including its duration, which in turn generates query and error logs. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. 
From the list of parameters find 'log_disconnections' and set it to on\n6. Click on 'Save' button from top menu to save the change.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'passwordReusePrevention !isType Integer or passwordReusePrevention == 0'```,"Alibaba Cloud RAM password history check policy is disabled This policy identifies Alibaba Cloud accounts for which password history check policy is disabled. As a best practice, enable RAM password history check policy to prevent RAM users from reusing a specified number of previous passwords. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password History Check Policy' field, enter the value between 1 to 24 instead of 0 based on your requirement.\n6. Click on 'OK'\n7. Click on 'Close'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = config.remoteDebuggingEnabled is true```,"mosh-stam2 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: mosh_ recommendation." ```config from cloud.resource where api.name = 'aws-secretsmanager-describe-secret' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '($.X.kmsKeyId does not exist ) or ($.X.kmsKeyId exists and $.X.kmsKeyId equals $.Y.keyMetadata.arn)'; show X;```,"AWS Secrets Manager secret not encrypted by Customer Managed Key (CMK) This policy identifies AWS Secrets Manager secrets that are encrypted using the default KMS key instead of CMK (Customer Managed Key) or using a CMK that is disabled. AWS Secrets Manager secrets are a secure storage solution for sensitive information like passwords, API keys, and tokens in the AWS cloud. Secrets Manager secrets are encrypted by default by AWS managed key but users can specify CMK to get enhanced security, control over the encryption key, and also comply with any regulatory requirements. As a security best practice, using CMK to encrypt your Secrets Manager secrets is advisable as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the encryption key for a Secrets Manager secret:\n1. Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.\n2. From the list of secrets, choose the reported secret.\n3. On the secret details page, in the Secrets details section, choose Actions, and then choose 'Edit encryption key'.\n4. in the 'Encryption key' section choose the Customer Managed Key created and managed by you in AWS Key Management Service (KMS) based on your business requirement.\n5. 
Click 'Save' button to save the changes.\nNote: When using customer-managed CMKs to encrypt a Secrets Manager secret, ensure authorized entities have access to the key and its associated operations.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dynamodb-describe-table' AND json.rule = tableStatus equal ignore case ACTIVE AND continuousBackupsDescription.pointInTimeRecoveryDescription.pointInTimeRecoveryStatus does not equal ENABLED```,"AWS DynamoDB table point-in-time recovery (PITR) disabled This policy identifies AWS DynamoDB tables that do not have point-in-time recovery (backup) enabled. AWS DynamoDB enables you to back up your table data continuously by using point-in-time recovery (PITR) with per-second granularity. This helps in protecting your data against accidental write or delete operations. It is recommended to enable point-in-time recovery functionality on the DynamoDB table to protect data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Point-in-Time Recovery (PITR) for a DynamoDB table, you can follow these steps:\n\n1. Sign in to the AWS Management Console.\n2. Navigate to the DynamoDB service.\n3. Click on the 'Tables' in the left navigation pane.\n4. Select the table you want to enable Point-in-Time Recovery (PITR) for.\n5. Switch to the 'Backups' tab and click on 'Edit' next to Point-in-time recovery.\n6. Click on the 'Turn on point-in-time recovery' check box and Click on 'Save changes'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(445,445) or destinationPortRanges[*] contains _Port.inRange(445,445) ))] exists```","Azure Network Security Group allows all traffic on CIFS (UDP Port 445) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Windows SMB UDP port 445. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict CIFS (SMB) access solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.."
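The portal steps in the CIFS (UDP 445) entry above can also be expressed with the Azure CLI. This is a sketch under assumed resource names; choose the variant (restrict the source or deny the rule) that matches your requirement.

```
# Sketch: tighten an NSG rule that currently allows UDP 445 from any source (placeholder names).
# Option A: limit the rule to a known source range.
az network nsg rule update \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name allow-cifs-445 \
  --source-address-prefixes 203.0.113.0/24

# Option B: deny the rule outright.
az network nsg rule update \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name allow-cifs-445 \
  --access Deny
```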
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.encryption.status does not exist or properties.encryption.status equal ignore case disabled)```,"Azure Machine Learning workspace not encrypted with Customer Managed Key (CMK) This policy identifies Azure Machine Learning workspaces that are not encrypted with a Customer Managed Key (CMK). Azure handles encryption using platform-managed keys by default, but customer-managed keys (CMKs) provide greater control and help meet specific security and compliance requirements. Without CMKs, organizations may not have full control over key management and rotation, increasing the risk of compliance issues and unauthorized data access. Configuring the workspace to use CMKs enhances security by allowing organizations to manage key access and rotation, ensuring stronger protection and compliance for sensitive data. As a security best practice, it is recommended to configure the workspace to use Customer Managed Keys (CMKs). This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Once a Azure Machine Learning workspace is deployed, you can't switch from Microsoft-managed keys to customer-managed keys. You'll need to delete and recreate the workspace with customer-managed keys enabled.\n\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the reported Azure Machine Learning workspace\n4. Delete the workspace and then recreate it, ensuring you enable 'Encrypt data using a customer-managed key' under the 'Encryption' tab." ```config from cloud.resource where api.name = 'aws-ec2-describe-flow-logs' as X; config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' AND json.rule = shared is false as Y; filter 'not($.X.resourceId equals $.Y.vpcId)' ; show Y;```,"AWS VPC Flow Logs not enabled This policy identifies VPCs which have flow logs disabled. VPC Flow logs capture information about IP traffic going to and from network interfaces in your VPC. Flow logs are used as a security tool to monitor the traffic that is reaching your instances. Without the flow logs turned on, it is not possible to get any visibility into network traffic. This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. Click on 'Your VPCs' and Choose the reported VPC\n5. Click on the 'Flow logs' tab and follow the instructions as in link below to enable Flow Logs for the VPC:\nhttps://aws.amazon.com/blogs/aws/vpc-flow-logs-log-and-view-network-traffic-flows/." 
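For the VPC Flow Logs entry above, the same result can be achieved from the AWS CLI. A minimal sketch, assuming a pre-existing CloudWatch Logs group and an IAM role that permits log delivery; all identifiers below are placeholders.

```
# Sketch: enable flow logs on a VPC, delivering to CloudWatch Logs (placeholder IDs/ARNs).
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name my-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/my-flow-logs-role
```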
"```config from cloud.resource where api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = primary.state equals ""ENABLED"" and (rotationPeriod does not exist or rotationPeriod greater than 7776000) as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' $.X.name equals $.Y.encryption.defaultKmsKeyName'; show Y;```","GCP Storage bucket CMEK not rotated every 90 days This policy identifies GCP Storage bucket with CMEK that are not rotated every 90 days A CMEK (Customer-Managed Encryption Key), which is configured for a GCP bucket becomes vulnerable over time due to prolonged use. Without regular rotation, the key is at greater risk of being compromised, which could lead to unauthorized access to the encrypted data in the bucket. This can undermine the security of your data and increase the chances of a breach if the key is exposed or exploited. It is recommended to configure rotation less than 90 days for CMEKs used for GCP buckets. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate Cloud Storage Buckets page\n3. Click on the reported bucket\n4. Go to 'Configuration' tab\n5. Under 'Default encryption key', click on the key name\n6. Click on 'EDIT ROTATION PERIOD'\n7. Select 90 days or less for 'Rotation period' dropdown\n8. Click 'SAVE'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals AppServices and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud is set to Off for App Service This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has defender setting for App Service is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for App Service. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'App Service' Select 'On' under Plan.\n8. Select 'Save'." "```config from cloud.resource where api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals ""ACTIVE"" and serviceAccount contains ""compute@developer.gserviceaccount.com"" as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains ""compute@developer.gserviceaccount.com"" and roles[*] contains ""roles/editor"" as Y; filter ' $.X.serviceAccount equals $.Y.user'; show X;```","GCP Vertex AI Workbench user-managed notebook is using default service account with the editor role This policy identifies GCP Vertex AI Workbench user-managed notebooks that are using the default service account with the editor role. 
When you create a new Vertex AI Workbench user-managed notebook, the compute engine default service account is associated with the notebook by default if any other service account is not configured. The compute engine default service account is automatically created when the Compute Engine API is enabled and is granted the IAM basic Editor role if you have not disabled this behavior explicitly. These permissions can be exploited to get admin access to the GCP project. To be compliant with the principle of least privileges and prevent potential privilege escalation, it is recommended that Vertex AI Workbench user-managed notebooks are not assigned the 'Compute Engine default service account' especially when the editor role is granted to the service account. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service (Left Panel)\n3. Under 'Notebooks', go to 'Workbench'\n4. Open the 'USER-MANAGED NOTEBOOKS' tab\n5. Click on the alerting notebook\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the the VM has been stopped, click on the 'EDIT' button\n9. Under 'Identity and API access', use the dropdown to select a non-default service account as per needs\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'azure' AND cloud.service = 'Azure Network Watcher' AND api.name = 'azure-network-watcher-list' AND json.rule = ' provisioningState !exists or provisioningState != Succeeded'```,"Azure Network Watcher is not enabled This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND cloud.account = 'Azure_Redlock_QA_BVT_25FE' AND api.name = 'azure-disk-list' AND json.rule = id exists ```,"dnd-azure-disk-flip-flop-policy This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'oci-networking-loadbalancer' and json.rule = lifecycleState equal ignore case ""ACTIVE"" as X; config from cloud.resource where api.name = 'oci-networking-subnet' and json.rule = lifecycleState equal ignore case ""AVAILABLE"" as Y; config from cloud.resource where api.name = 'oci-networking-security-list' AND json.rule = lifecycleState equal ignore case AVAILABLE as Z; filter 'not ($.X.listeners does not equal ""{}"" and ($.X.subnetIds contains $.Y.id and $.Y.securityListIds contains $.Z.id and $.Z.ingressSecurityRules is not empty))'; show X;```","OCI Load Balancer not configured with inbound rules or listeners This policy identifies Load Balancers that are not configured with inbound rules or listeners. A Load Balancer's subnet security lists should include ingress rules, and the Load Balancer should have at least one listener to handle incoming traffic. Without these configurations, the load balancer cannot receive and route incoming traffic, rendering it ineffective. 
As a best practice, it is recommended to configure Load Balancers with proper inbound rules and listeners. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Load Balancers with inbound rules and listeners, refer to the following documentation:\nhttps://docs.cloud.oracle.com/iaas/Content/Security/Reference/configuration_tasks.htm#lb-enable-traffic." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-mysql-deployment-info' AND json.rule = allowedListIPAddresses[*] size equals 0 or allowedListIPAddresses[?any( address equals 0.0.0.0/0 )] exists```,"IBM Cloud MySQL Database network access is not restricted to a specific IP range This policy identifies IBM Cloud MySQL Databases with no specified IP range for network access. To restrict access to your databases, you can allowlist specific IP addresses or ranges of IP addresses on your deployment. When no IP addresses are in the allowlist, the allowlist is disabled and the deployment accepts connections from any IP address. It is recommended to create an allowlist; only IP addresses that match the allowlist or are in the range of IP addresses in the allowlist can connect to your deployment. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list', from the list of resources select MySQL database reported in the alert.\n3. Refer below URL for setting allowlist IP addresses : https://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-allowlisting&interface=ui#set-allowlist-ui\n4. Please remove the IP address '0.0.0.0' if it has already been added to the allowlist, and make sure to add IP addresses other than '0.0.0.0'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = blockDeviceMappings[*].ebs exists AND blockDeviceMappings[?any(ebs.encrypted is false)] exists```,"AWS EC2 Auto Scaling Launch Configuration is not using encrypted EBS volumes This policy identifies AWS EC2 Auto Scaling Launch Configurations that are not using encrypted EBS volumes. A launch configuration defines an instance configuration template that an Auto Scaling group uses to launch EC2 instances. Amazon Elastic Block Store (EBS) volumes allow you to create encrypted launch configurations when creating EC2 instances and auto scaling groups. When the entire EBS volume is encrypted, data stored at rest, in-transit, and snapshots are encrypted. This protects the data from unauthorized access. As a security best practice for data protection, enable encryption for all EBS volumes at auto scaling launch configuration. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Once an Auto Scaling Launch Configuration is created, you cannot modify the encryption for the EBS volumes. To resolve this alert, copy the reported launch configuration, create a new launch template using the copied launch configuration data, and select the encryption option for the EBS volumes. Later delete the reported launch configuration.\n\nTo create a new launch template,\n1. 
Log in to AWS console\n2. Navigate to the Amazon EC2 dashboard\n3. Under 'Auto Scaling' section, select the 'Auto Scaling groups'\n4. Click on 'Launch Templates'\n5. On 'Launch Templates' page, click on 'Create launch template'\n6. Create the new launch template using the same settings as the reported launch configuration.\n7. Under 'Storage (volumes)', make sure 'Encrypted' is set for all EBS volumes you added.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = rotationEnabled is false and owningService is not member of (appflow, databrew, datasync, directconnect, events, opsworks-cm, rds, sqlworkbench)```","AWS Secret Manager Automatic Key Rotation is not enabled This policy identifies AWS Secrets Manager secrets that are not enabled with key rotation. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys. NOTE: This policy does not include secrets which are managed by some of the AWS services that store AWS Secrets Manager secrets on your behalf. Refer: https://docs.aws.amazon.com/secretsmanager/latest/userguide/service-linked-secrets.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Automatic Key Rotation for AWS Secrets Manager, follow the steps mentioned in below URL:\n\nhttps://aws.amazon.com/blogs/security/how-to-configure-rotation-windows-for-secrets-stored-in-aws-secrets-manager/#:~:text=Use%20Case%203." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = not (pricings[?any(properties.extensions[?any(name equal ignore case ContainerRegistriesVulnerabilityAssessments AND isEnabled is true)] exists AND properties.pricingTier equal ignore case Standard )] exists)```,"Azure Microsoft Defender for Cloud set to Off for Agentless container vulnerability assessment This policy identifies Azure Microsoft Defender for Cloud where the Agentless container vulnerability assessment is set to Off. Agentless container vulnerability assessment enables automatic scanning for vulnerabilities in container images stored in Azure Container Registry or running in Azure Kubernetes Service without additional agents. Disabling it exposes container images to unpatched security issues and misconfigurations, risking exploitation and data breaches. Enabling agentless container vulnerability assessment ensures continuous scanning for known vulnerabilities, enhancing security by proactively identifying risks and providing remediation suggestions to maintain compliance with industry standards. As a security best practice, it is recommended to enable Agentless container vulnerability assessment in Azure Microsoft Defender for Cloud. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Microsoft Defender for Cloud'\n3. Under 'Management', select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click on 'Settings & monitoring' at the top\n7. In the table, find 'Agentless container vulnerability assessment' and select 'On' under Plan\n8. 
Click 'Continue' in the top left\n9. Click 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-admin-consent-request-policy' AND json.rule = ['@odata.context'] exists```,"pcsup-26179-policy This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'aws-msk-cluster' AND json.rule = state equal ignore case active and enhancedMonitoring is member of (DEFAULT, PER_BROKER)```","AWS MSK clusters not configured with enhanced monitoring This policy identifies MSK clusters that are not configured with enhanced monitoring. Amazon MSK is a fully managed Apache Kafka service on AWS that handles the provisioning, setup, and maintenance of Kafka clusters. Amazon MSK's PER_TOPIC_PER_BROKER monitoring level provides granular insights into the audit, performance and resource utilization of individual topics and brokers, enabling you to identify and optimize bottlenecks in your Kafka cluster. It is recommended to enable at least PER_TOPIC_PER_BROKER monitoring on the MSK cluster to get enhanced monitoring capabilities. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure MSK clusters with enhanced monitoring:\n\n1. Sign in to the AWS console. Navigate to the Amazon MSK console.\n2. In the navigation pane, choose 'Clusters'. Then, select the reported cluster.\n3. For 'Action', select 'Edit monitoring'.\n4. Select either 'Enhanced partition-level monitoring' or 'Enhanced topic-level monitoring' option.\n5. Choose 'Save changes'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-definition' AND json.rule = properties.permissions[*].actions any start with ""*"" and properties.permissions[*].actions any end with ""*"" and properties.type equal ignore case ""CustomRole"" and properties.assignableScopes starts with ""/subscriptions"" and properties.assignableScopes does not contain ""resourceGroups""```","Azure Custom subscription administrator roles found This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = isNodeVersionSupported exists AND isNodeVersionSupported does not equal ""true""```","GCP GKE unsupported node version This policy identifies the GKE node version and generates an alert if the version running is unsupported. Using an unsupported version of Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) can lead to several potential issues and risks, such as security vulnerabilities, compatibility issues, performance and stability problems, and compliance concerns. To mitigate these risks, it's crucial to regularly update the GKE clusters to supported versions recommended by Google Cloud. As a security best practice, it is always recommended to use the latest version of GKE. Note: The Policy updates will be made as per the release schedule https://cloud.google.com/kubernetes-engine/docs/release-schedule#schedule-for-release-channels This is applicable to gcp cloud and is considered a medium severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Manually upgrading your nodes:\n\n1. Visit the Google Kubernetes Engine Clusters menu in GCP Console.\n2. Next to the cluster you want to edit, Click the Edit button which looks like a pencil under Actions.\n3. On the Cluster details page, click the Nodes tab.\n4. In the Node Pools section, click the name of the node pool that you want to upgrade.\n5. Click the Edit button which looks like a pencil.\n6. Click ""Change"" under Node version.\n7. Select the desired version from the Node version drop-down list, then click ""Upgrade"".." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-waf-classic-web-acl-resource' AND json.rule = '(resources.applicationLoadBalancer[*] exists or resources.apiGateway[*] exists or resources.other[*] exists) and loggingConfiguration.resourceArn does not exist'```,"AWS Web Application Firewall (AWS WAF) Classic logging is disabled This policy identifies Classic Web Application Firewalls (AWS WAFs) for which logging is disabled. Enabling WAF logging, logs all web requests inspected by the service which can be used for debugging and additional forensics. The logs will help to understand why certain rules are triggered and why certain web requests are blocked. You can also integrate the logs with any SIEM and log analysis tools for further analysis. It is recommended to enable logging on your Classic Web Application Firewalls (WAFs). For details: https://docs.aws.amazon.com/waf/latest/developerguide/classic-logging.html NOTE: Global (CloudFront) WAF resources are out of scope for this policy. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging on your reported WAFs, follow below mentioned URL:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/classic-logging.html#logging-management\n\nNOTE: No additional cost to enable logging on AWS WAF (minus Kinesis Firehose and any storage cost).\nFor Kinesis Firehose or any storage additional charges refer https://aws.amazon.com/cloudwatch/pricing/." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = networkConfig.datapathProvider does not equal ADVANCED_DATAPATH and (addonsConfig.networkPolicyConfig.disabled is true or networkPolicy.enabled does not exist or networkPolicy.enabled is false )```,"GCP Kubernetes Engine Clusters have Network policy disabled This policy identifies Kubernetes Engine Clusters which have disabled Network policy. A network policy defines how groups of pods are allowed to communicate with each other and other network endpoints. By enabling network policy in a namespace for a pod, it will reject any connections that are not allowed by the network policy. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Under 'Networking', Click on EDIT button for 'Calico Kubernetes Network policy'\n6. Select 'Enable Calico Kubernetes network policy for control plane'\n7. Click on Save\n8. 
Repeat Step 5 and Select 'Enable Calico Kubernetes network policy for nodes'\n9. Click on Save." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = (properties.supportsHttpsTrafficOnly does not exist or properties.supportsHttpsTrafficOnly is false) as X; config from cloud.resource where api.name = 'azure-storage-file-shares' as Y; filter '($.X.kind does not equal ignore case ""FileStorage"") or ($.X.kind equal ignore case ""FileStorage"" and $.Y.id contains $.X.name and $.Y.properties.enabledProtocols does not contain NFS)'; show X;```","Azure Storage Account without Secure transfer enabled This policy identifies Storage accounts which have Secure transfer feature disabled. The secure transfer option enhances the security of your storage account by only allowing requests to the storage account by a secure connection. When ""secure transfer required"" is disabled,REST APIs to access your storage accounts may connect over insecure HTTP which is not advised. Hence, it is highly recommended to enable secure transfer feature on your storage account. NOTE: Azure storage doesn't support HTTPs for custom domain names, this option is not applied when using a custom domain name. Additionally, this property is not applicable for NFS Azure file shares to work. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable secure transfer feature on your storage account, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/storage/common/storage-require-secure-transfer#require-secure-transfer-for-an-existing-storage-account." "```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND resource.status = Active AND json.rule = tags[*].key none equal ""application"" AND tags[*].key none equal ""Application""```","pcsup-gcp-policy This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' AND json.rule = _AWSCloudAccount.orgHierarchyNames() intersects (""all-accounts"") as X; config from cloud.resource where api.name = 'aws-ec2-describe-subnets' AND json.rule = _AWSCloudAccount.orgHierarchyNames() intersects (""all-accounts"") as Y; filter '$.X.vpcId equals $.Y.vpcId'; show Y;```","jashah_ms_join_pol This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'maxLoginAttemps !isType Integer or maxLoginAttemps == 0'```,"Alibaba Cloud RAM password retry constraint policy is disabled This policy identifies Alibaba Cloud accounts for which password retry constraint policy is disabled. As a best practice, enable RAM password retry constraint policy to prevent multiple login attempts with an incorrect password within an hour. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. 
In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password Retry Constraint Policy' field, enter the value between 1 to 32 instead of 0 based on your requirement.\n6. Click on 'OK'\n7. Click on 'Close'." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = profile.family equal ignore case ""application"" and operating_status equal ignore case ""online"" and is_public is true```","IBM Cloud Application Load Balancer for VPC has public access enabled This policy identifies IBM Cloud Application Load Balancer for VPC which has public access enabled. Creating a load balancer with public access will lead to unexpected malicious requests getting sent to the public DNS address assigned. A private load balancer is only accessible from within a specific virtual private cloud (VPC). It is highly recommended to use load balancers of type private to protect from unauthorized access. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A Load balancer can be made private only at the time of creation. To create a private application\nload balancer, follow below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-load-balancer&interface=ui\nMake sure to select 'Private' for load balancer 'Type' under 'details' section.\n\nNote: Please make sure to create new load balancer in accordance with alerted resource.\nAlso update load balancer reference at all the clients/places of usage with newly created\nload balancer.." ```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-app-service-web-apps-configurations' as Y; config from cloud.resource where api.name = 'azure-app-service' AND json.rule = 'kind contains functionapp and kind does not contain workflowapp and kind does not equal app and properties.state equal ignore case running and ((properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled) or (properties.publicNetworkAccess does not exist)) and config.ipSecurityRestrictions[?any((action equals Allow and ipAddress equals Any) or (action equals Allow and ipAddress equals 0.0.0.0/0))] exists' as Z; filter ' $.Y.properties.azureStorageAccounts contains $.X.name and $.Z.name equal ignore case $.Y.name' ; show Z;```,"Azure Function App with public access linked to Blob Storage This policy identifies Azure Function Apps configured with public access and linked to Azure Blob Storage. Azure Function Apps often access Blob Storage to retrieve or store data. When public access is enabled for the Function App, it exposes the application and, potentially, the associated Blob Storage to unauthorized access, leading to potential security risks. As a security best practice, it is recommended to evaluate public access for Azure Function Apps and secure Azure Blob Storage. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To restrict access to App Service and secure Azure Blob Storage, refer to the following links for security recommendations:\n\nhttps://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions\nhttps://learn.microsoft.com/en-us/azure/storage/blobs/security-recommendations." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and properties.httpsOnly is false```,"Azure Logic app does not redirect HTTP requests to HTTPS This policy identifies Azure Logic apps that fail to redirect HTTP traffic to HTTPS. By default, Azure Logic app data is accessible through unsecured HTTP traffic. HTTP does not include any encryption and data sent over HTTP is susceptible to interception and eavesdropping. To secure web traffic, use HTTPS which incorporates encryption through SSL/TLS protocols, providing a secure channel over which data can be transmitted safely. As a security best practice, it is recommended to configure HTTP to HTTPS redirection to prevent unauthorized parties from being able to read or modify the data in transit. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under 'Setting' section, click on 'Configuration'\n5. Under 'General settings' tab, Select 'On' radio button for 'HTTPS Only' option.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-get-audit-config' AND json.rule = 'auditConfigs[*].service does not contain allServices or (auditConfigs[*].auditLogConfigs[*].exemptedMembers exists and auditConfigs[*].auditLogConfigs[*].exemptedMembers is not empty)'```,"GCP Project audit logging is not configured properly across all services and all users in a project This policy identifies the GCP projects in which cloud audit logging is not configured properly across all services and all users. It is recommended that cloud audit logging is configured to track all Admin activities and read, write access to user data. Logs should be captured for all users and there should be no exempted users in any of the audit config section. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. To read the project's IAM policy and store it in a file run a command:\ngcloud projects get-iam-policy [PROJECT_ID] > /tmp/policy.yaml\n2. Edit policy in /tmp/policy.yaml, adding or changing only the audit logs configuration to:\nauditConfigs:\n- auditLogConfigs:\n - logType: DATA_WRITE\n - logType: DATA_READ\nservice: allServices\nNote: Make sure 'exemptedMembers:' is not set, as audit logging should be enabled for all the users.\n3. To set audit config run:\ngcloud projects set-iam-policy [PROJECT_ID] /tmp/policy.yaml." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-key-protect-key' AND json.rule = dualAuthDelete does not exist or dualAuthDelete.enabled is false```,"IBM Cloud Key Protect Key dual authorization for deletion is not enabled This policy identifies IBM Cloud Key Protect Key that has dual authorization for deletion is disabled. 
Dual authorization for Key Protect service instances is an extra policy that helps to prevent accidental or malicious deletion of keys in your Key Protect instance. It is recommended that dual authorization for deletion of all keys in a Key Protect instance is enabled. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to IBM Cloud CLI\n2. For setting up the IBM cloud CLI for Key Protect, please refer to the below URL: \nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-set-up-cli#install-cli\n3. To change the region where the reported Key Protect instance is located, run the following IBM cloud CLI command:\nibmcloud target -r \n4. To enable dual authorization policy for your Key Protect instance key, run the following IBM cloud CLI command:\nibmcloud kp key policy-update dual-auth-delete --enable --instance-id \nReference: https://cloud.ibm.com/docs/key-protect?topic=key-protect-key-protect-cli-reference#kp-key-policy-update-dual\n 5. To enable dual authorization settings at the instance level, Please refer to the below URL.\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-manage-dual-auth." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-group' as X; config from cloud.resource where api.name = 'oci-iam-user' as Y; filter '($.X.name equals Administrators) and ($.X.groupMembers[*].userId contains $.Y.id) and ($.Y.apiKeys[*] size greater than 0)';show Y;```,"OCI tenancy administrator users are associated with API keys This policy identifies OCI users who are the members of Administrators group, has API keys associated. It is recommended not to allow OCI users with API keys to have direct tenancy access, to preserve privileged security principle. As a best practice, dissociate the API keys for the OCI Users of Administrators group and use Service-level administrative users with API keys instead. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from Services menu.\n3. Select Users from Identity menu.\n4. For each tenancy administrator user who has an API key, select API Keys from the menu in the lower left hand corner.\n5. Delete any associated keys from the API Keys table.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"DemoAggPolicy - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. 
Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and transitEncryptionMode does not equal ignore case SERVER_AUTHENTICATION```,"GCP Memorystore for Redis instance does not use in transit encryption This policy identifies GCP Memorystore for Redis instances with no in transit encryption. GCP Memorystore for Redis is a fully managed in-memory data store that simplifies Redis deployment and scaling while ensuring high availability and low-latency access. When in-transit encryption is disabled, all data transmitted between your clients and Redis flows as plaintext over the network, making it vulnerable to man-in-the-middle attacks and packet sniffing, potentially exposing sensitive information like session tokens, personal data, or business secrets. It is recommended to enable In transit encryption for GCP Memorystore for Redis to prevent malicious actors from intercepting sensitive data. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: In-transit encryption cannot be changed for existing Memorystore for Redis instances. A new Memorystore for Redis instance instance should be created.\n\nTo create a new Memorystore for Redis instance with In-transit encryption , please refer to the steps below:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the 'Memorystore for Redis' page\n2. Click on the 'CREATE INSTANCE'\n3. Provide all the other details as per the requirements\n4. Under 'Security', select the 'Enable in-transit encryption' checkbox\n5. Click on the 'CREATE INSTANCE'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.encryption.requireInfrastructureEncryption does not exist or properties.encryption.requireInfrastructureEncryption is false)```,"Azure storage account infrastructure encryption is disabled The policy identifies Azure storage accounts for which infrastructure encryption is disabled. Infrastructure double encryption adds a second layer of encryption using service-managed keys. When infrastructure encryption is enabled for a storage account or an encryption scope, data is encrypted twice. Once at the service level and once at the infrastructure level - with two different encryption algorithms and two different keys. Infrastructure encryption is recommended for scenarios where double encrypted data is necessary for compliance requirements. It is recommended to enable infrastructure encryption on Azure storage accounts so that encryption can be implemented at the layer closest to the storage device or network wires. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Configuring Infrastructure double encryption for Azure Storage accounts is only allowed during storage account creation. 
Once the storage account is provisioned, you cannot change the storage encryption.\n\nTo create an Azure Storage account with Infrastructure double encryption, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/infrastructure-encryption-enable\n\nNOTE: Using Infrastructure double encryption will have performance impact on the read and write speeds of Azure storage accounts due to the additional encryption process.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = ingressSecurityRules[?any( isStateless is false )] exists```,"OCI VCN Security list has stateful security rules This policy identifies the OCI Virtual Cloud Networks (VCN) security lists that have stateful ingress rules configured in their security lists. It is recommended that Virtual Cloud Networks (VCN) security lists are configured with stateless ingress rules to slow the impact of a denial-of-service (DoS) attack. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Ingress rule where Stateless column is set to No\n5. Click on Edit\n6. Select the checkbox STATELESS\n7. Click on Save Changes." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(139,139) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on NetBIOS-SSN port (139) This policy identifies GCP Firewall rules which allow all inbound traffic on NetBIOS-SSN port (139). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the NetBIOS-SSN port (139) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-event-subscriptions' AND json.rule = 'sourceType equals db-security-group and ((status does not equal active or enabled is false) or (status equals active and enabled is true and (sourceIdsList is not empty or eventCategoriesList is not empty)))'```,"AWS RDS event subscription disabled for DB security groups This policy identifies RDS event subscriptions for which DB security groups event subscription is disabled. You can create an Amazon RDS event notification subscription so that you can be notified when an event occurs for given DB security groups. 
This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS Dashboard\n4. Click on 'Event subscriptions' (Left Panel)\n5. Choose the reported Event subscription\n6. Click on 'Edit'\n7. On 'Edit event subscription' page, Under 'Details' section; Select 'Yes' for 'Enabled' and Make sure you have subscribed your DB to 'All instances' and 'All event categories'\n8. Click on 'Edit'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-regional-forwarding-rule' AND json.rule = target contains ""/targetHttpProxies/"" and loadBalancingScheme contains ""EXTERNAL""```","GCP public-facing (external) regional load balancer using HTTP protocol This policy identifies GCP public-facing (external) regional load balancers that are using HTTP protocol. Using the HTTP protocol with a GCP external load balancer transmits data in plaintext, making it vulnerable to eavesdropping, interception, and modification by malicious actors. This lack of encryption exposes sensitive information, increases the risk of man-in-the-middle attacks, and compromises the overall security and privacy of the data exchanged between clients and servers. It is recommended to use HTTPS protocol with external-facing load balancers. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to 'Network Service' and then 'Load Balancing'\n3. Click on the 'FRONTENDS' tab\n4. Identify the frontend that is using the reported forwarding rule.\n5. Click on the load balancer name associated with the frontend identified above\n6. Click 'Edit'\n7. Go to 'Frontend configuration'\n8. Delete the frontend rule that allows HTTP protocol.\n9. Add new frontend rule(s) as required. Make sure to use HTTPS protocol instead of HTTP for new rules.\n10. Click 'Update'\n11. Click 'UPDATE LOAD BALANCER' in the pop-up.." ```config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = resources.applicationLoadBalancer[*] exists as X; config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = scheme equals internet-facing and type equals application as Y; filter 'not($.X.resources.applicationLoadBalancer[*] contains $.Y.loadBalancerArn)'; show Y;```,"AWS Application Load Balancer (ALB) not configured with AWS Web Application Firewall v2 (AWS WAFv2) This policy identifies AWS Application Load Balancers (ALBs) that are not configured with AWS Web Application Firewall v2 (AWS WAFv2). As a best practice, configure the AWS WAFv2 service on the application load balancers to protect against application-layer attacks. To block malicious requests to your application load balancers, define the block criteria in the WAFv2 web access control list (web ACL). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. 
Make sure the reported Application Load Balancer requires WAF based on your requirement and note down the load balancer name.\n3. Navigate to WAF & Shield dashboard\n4. Click on Web ACLs, under AWS WAF section from left panel\n5. If Web ACL is not created; create a new Web ACL and add reported Application Load Balancer to Associated AWS resources.\n6. If you have Web ACL already created; Click on Web ACL and add your reported Application Load Balancer to Associated AWS resources.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(5432,5432) or destinationPortRanges[*] contains _Port.inRange(5432,5432) ))] exists```","Azure Network Security Group allows all traffic on PostgreSQL (TCP Port 5432) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on PostgreSQL (TCP Port 5432). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict PostgreSQL solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and canIpForward is true and name does not start with ""gke-""```","GCP VM instances have IP Forwarding enabled This policy identifies VM instances that have IP Forwarding enabled. IP Forwarding could open unintended and undesirable communication paths and allows VM instances to send and receive packets with the non-matching destination or source IPs. To enable the source and destination IP match check, disable IP Forwarding. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP VM instances IP forwarding feature cannot be updated. After an instance is created, the IP forwarding field becomes read-only. So to fix this alert, create a new VM instance with IP forwarding disabled, migrate all required data from the reported VM to the newly created one, and delete the reported VM instance.\n\nTo create a new VM Instance with IP forwarding disabled:\n1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. Click the CREATE INSTANCE button\n5.
Specify other instance parameters as you desire\n6. Click Management, disk, networking, SSH keys\n7. Click Networking\n8. Click on the specific Network interfaces\n9. Set IP forwarding to Off\n10. Click on Done\n11. Click on Create button\n\nTo delete the VM instance which has IP forwarding enabled:\n1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Delete button." "```config from cloud.resource where api.name = 'gcloud-logging-sinks-list' AND json.rule = ""filter exists"" as X; count(X) less than 1```","GCP Log Entries without sinks configured This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireNumbers does not exist or requireNumbers is false'```,"Alibaba Cloud RAM password policy does not have a number This policy identifies Alibaba Cloud accounts that do not have a number in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Numbers'\n6. Click on 'OK'\n7. Click on 'Close'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.privateEndpointConnections[*] does not exist or properties.privateEndpointConnections[*] is empty or (properties.privateEndpointConnections[*] exists and properties.privateEndpointConnections[*].properties.privateLinkServiceConnectionState.status does not equal ignore case Approved))```,"Azure Machine learning workspace is not configured with private endpoint This policy identifies Azure Machine learning workspaces that are not configured with a private endpoint. Private endpoints in workspace resources allow clients on a virtual network to securely access data over Azure Private Link. Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses, which includes IP addresses within Azure. It is recommended to create a private endpoint for secure communication for your Machine learning workspaces. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Azure Machine Learning' dashboard\n3. Click on the reported Azure Machine learning workspace\n4. Configure Private endpoint connections under 'Networking' from left panel.\n\nFor more information, refer to:\nhttps://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link."
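The Azure Machine Learning private endpoint rule above can also be reproduced outside the policy engine. The following is a minimal illustrative sketch in Python, not the vendor's implementation: it assumes the workspace resource JSON has already been fetched (for example via the Azure Resource Manager API), mirrors the fields referenced in the json.rule, and uses a hypothetical function name and sample document.

```python
# Minimal illustrative sketch (not the vendor's implementation): evaluate the
# Azure ML "not configured with private endpoint" rule against a workspace
# resource document. The dict layout mirrors the fields in the json.rule;
# the function name and sample data below are hypothetical.

def workspace_missing_private_endpoint(workspace: dict) -> bool:
    props = workspace.get("properties", {})
    if str(props.get("provisioningState", "")).lower() != "succeeded":
        return False  # the rule only evaluates successfully provisioned workspaces

    connections = props.get("privateEndpointConnections") or []
    if not connections:
        return True  # no private endpoint connections configured at all

    statuses = [
        str(
            conn.get("properties", {})
            .get("privateLinkServiceConnectionState", {})
            .get("status", "")
        ).lower()
        for conn in connections
    ]
    # Flag the workspace when no connection is in the 'Approved' state
    # (an approximation of the RQL's array "does not equal" condition).
    return "approved" not in statuses


if __name__ == "__main__":
    sample = {"properties": {"provisioningState": "Succeeded",
                             "privateEndpointConnections": []}}
    print(workspace_missing_private_endpoint(sample))  # prints True
```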
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-data-factory-v2' AND json.rule = properties.provisioningState equal ignore case Succeeded and identity does not exist or identity.type equal ignore case ""None""```","Azure Data Factory (V2) is not configured with managed identity This policy identifies Data Factories (V2) that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in a code increases the threat surface in case of exploitation and also managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to have the managed identity to your Data Factory. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Data factories'\n3. Click on the reported Data factory\n4. Select 'Managed identities' under 'Settings' from left panel \n5. Configure either 'System assigned' or 'User assigned' identity\nFor more on Data factories managed identities refer https://docs.microsoft.com/en-gb/azure/data-factory/data-factory-service-identity?tabs=data-factory\n6. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```,"gfssrguptn_ui_auto_policies_tests_name njfeujtwmv_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = (sseAlgorithm contains ""aws:kms"" or sseAlgorithm contains ""aws:kms:dsse"") and kmsMasterKeyID exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager equal ignore case CUSTOMER and keyMetadata.keyState contains PendingDeletion as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '$.X.kmsMasterKeyID contains $.Y.key.keyArn and $.Z.s3BucketName equals $.X.bucketName'; show X;```","AWS CloudTrail S3 bucket encrypted with Customer Managed Key (CMK) that is scheduled for deletion This policy identifies AWS CloudTrail S3 buckets encrypted with Customer Managed Key (CMK) that is scheduled for deletion. CloudTrail logs contain account activity related to actions across your AWS infrastructure. These log files stored in Amazon S3 are encrypted by AWS KMS keys. Deleting keys in AWS KMS that are used by CloudTrail is a common defense evasion technique and could be a potential ransomware attacker activity. After a key is deleted, you can no longer decrypt the data that was encrypted under that key, which helps the attacker to hide their malicious activities. It is recommended to regularly monitor the key used for encryption to prevent accidental deletion. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to cancel KMS CMKs which are scheduled for deletion used by the S3 bucket\n\n1. 
Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket reported in the alert.\n3. Click on the 'Properties' tab.\n4. Under the 'Default encryption' section, click on the KMS key link in 'Encryption key ARN'.\n5. Navigate to Key Management Service (KMS).\n6. Click on 'Key actions' dropdown.\n7. Click on 'Cancel key deletion'.\n8. Click on 'Enable'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = (origins.items[*] contains ""customOriginConfig"") and (origins.items[?(@.customOriginConfig.originSslProtocols.items)] contains ""SSLv3"")```","AWS CloudFront distribution is using insecure SSL protocols for HTTPS communication CloudFront, a content delivery network (CDN) offered by AWS, is not using a secure cipher for distribution. It is a best security practice to enforce the use of secure ciphers TLSv1.0, TLSv1.1, and/or TLSv1.2 in a CloudFront Distribution's certificate configuration. This policy scans for any deviations from this practice and returns the results. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Communication between CloudFront and your Custom Origin should enforce the use of secure ciphers. Modify the CloudFront Origin's Origin SSL Protocol to include TLSv1.0, TLSv1.1, and/or TLSv1.2.\n\n1. Go to the AWS console CloudFront dashboard.\n2. Select your distribution Id.\n3. Select the 'Origins' tab.\n4. Check the origin you want to modify then select Edit.\n5. Remove (uncheck) 'SSLv3' from Origin SSL Protocols.\n6. Select 'Yes, Edit.'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","Critical of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to Get/Read/List bucket operations on your S3 bucket if they can guess the namespace. S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. 
For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-batch-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.networkProfile.accountAccess.defaultAction equal ignore case Allow and properties.publicNetworkAccess equal ignore case Enabled```,"Azure Batch Account configured with overly permissive network access This policy identifies Batch Accounts configured with overly permissive network access. By default, Batch accounts are accessible from all the networks. With an Account access IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges. With Private access Virtual Networks, the network traffic path is secured on both ends. It is recommended to configure the Batch account with an IP firewall or by Virtual Network, so that the Batch account is accessible only to restricted entities. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure private access private endpoint, follow below URL:\nhttps://docs.microsoft.com/en-gb/azure/batch/private-connectivity#azure-portal\n\nTo disable public network, follow below URL:\nhttps://docs.microsoft.com/en-gb/azure/batch/public-network-access#disable-public-network-access\n\nIf the Batch account is intended to be accessed from a public network, restrict it to specific IP ranges. To allow public network access with specific network rules, follow below URL:\nhttps://docs.microsoft.com/en-gb/azure/batch/public-network-access#access-from-selected-public-networks." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = listeners[?any( protocol does not equal ignore case https AND https_redirect does not exist )] exists```,"IBM Cloud Application Load Balancer for VPC not configured with HTTPS Listeners This policy identifies IBM Cloud Application Load Balancers for VPC that have listeners using a protocol other than HTTPS. HTTPS listeners use TLS (SSL) to encrypt normal HTTP requests and responses. It is highly recommended to use application load balancers with HTTPS listeners for additional security.
This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console \n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Load balancers'\n3. Select the 'Load balancers' reported in the alert\n4. Under 'Front-end listeners' tab, click on three dots on the right corner of a row containing listener with protocol other than HTTPS. Then click on 'Edit'.\n5. If the protocol is 'TCP', please delete the listener by clicking on three dots on the right corner. Then click on 'Delete'.\n6. Click on 'Create listener'.\n7. In the 'Edit front-end listener' screen, select 'HTTPS' from the 'Protocol' dropdown.\n8. Under 'Secrets Manager' please select an instance and select an SSL 'Certificate'. Make sure that the load balancer is authorised to access the SSL certificate.\n9. Click on 'Save'." ```config from cloud.resource where api.name = 'alibaba-cloud-rds-instance' as X; config from cloud.resource where api.name = 'alibaba-cloud-vpc' as Y; filter '$.X.vpcId equals $.Y.vpcId and $.Y.isDefault is true'; show X;```,"Alibaba Cloud ApsaraDB RDS instance is using the default VPC This policy identifies ApsaraDB RDS instances which are configured with the default VPC. It is recommended to use a VPC configuration based on your security and networking requirements. You should create your own RDS instance VPC instead of using the default so that you can have full control over the RDS network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: NOTE: The VPC switching process will interrupt the availability of your instance for 30 seconds. Make sure that your application is configured with automatic reconnection policies.\n\n1. Log in to Alibaba Cloud Portal\n2. Go to ApsaraDB for RDS\n3. In the left navigation pane, click on 'Instances'\n4. Click on the reported instance\n5. In the left navigation pane, click on 'Database Connection'\n6. In the 'Database Connection' section, click on 'Switch VSwitch'\n7. On the 'Switch VSwitch' popup window, Choose custom VPC and Virtual Switch instead of default VPC from the 'Switch To' dropdown list.\n8. Click on OK\n9. Read the Notes properly and make sure all necessary actions are taken and then Click on 'Switch'." "```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource does not equal ignore case ""Microsoft.Keyvault"" as X; config from cloud.resource where api.name = 'azure-log-analytics-linked-storage-accounts' AND json.rule = properties.dataSourceType equal ignore case Query as Y; filter '$.X.id contains $.Y.properties.storageAccountIds'; show X;```","Azure Log analytics linked storage account is not configured with CMK encryption This policy identifies Azure Log analytics linked Storage accounts which are not encrypted with CMK. By default Azure Storage account is encrypted using Microsoft Managed Keys. It is recommended to use Customer Managed Keys to encrypt data in Azure Storage accounts linked Log analytics for better control on the data. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To configure encryption using CMK for existing Azure Log analytics linked storage account, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-monitor/logs/private-storage#customer-managed-key-data-encryption." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cdn-endpoint' AND json.rule = properties.customDomains[?any( properties.customHttpsProvisioningState equals Enabled and properties.customHttpsParameters.minimumTlsVersion equals TLS10 )] exists```,"Azure CDN Endpoint Custom domains using insecure TLS version This policy identifies Azure CDN Endpoint Custom domains which have an insecure TLS version. TLS 1.2 resolves the security gap from its preceding versions. As a best security practice, use TLS 1.2 as the minimum TLS version for Azure CDN Endpoint Custom domains. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to 'CDN profiles'\n3. Choose each reported 'CDN Endpoint' under each 'CDN profile'\n4. Under 'Settings' section, Click on 'Custom domains'\n5. Select the 'Custom domain' for which you need to set TLS version\n6. Under 'Configure' select 'TLS 1.2' for 'Minimum TLS version'\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = config.ftpsState equals AllAllowed```,"Azure App Services FTP deployment is All allowed This policy identifies Azure App Services which have the FTP deployment setting as All allowed. An attacker could listen to Wi-Fi traffic, capture FTP deployment login credentials that are sent in plain text, and gain full control of the code base of the app or service. It is highly recommended to use FTPS if FTP deployment is essential for your workflow; otherwise, disable FTP deployment for Azure App Services. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Following recommendation steps are for resources hosted in App Service, Premium and Windows Consumption plans,\n\n1. Log in to the Azure Portal\n2. Select 'App Services' from the left pane\n3. Select the reported App Services\n4. Go to 'Configurations' under 'Settings'\n5. Click on 'General settings'\n6. Select 'FTPS only' or 'Disabled' for 'FTP state' under 'Platform settings'\n7. Click on 'Save'\n\nIf Function App Hosted in Linux using Consumption (Serverless) Plan follow below steps\n\nAzure CLI Command \nFTP Disable - \""az functionapp config set --ftps-state Disabled --name MyFunctionApp --resource-group MyResourceGroup\""\n\nFTPS only - \""az functionapp config set --ftps-state FtpsOnly --name MyFunctionApp --resource-group MyResourceGroup\""." ```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-storage-buckets-list' AND json.rule = '($.logging does not exist or $.logging equals null) and ($.acl[*].email exists and $.acl[*].email contains logging)'```,"GCP Bucket containing Operations Suite Logs have bucket logging disabled This policy identifies the buckets containing Operations Suite Logs for which logging is disabled. Enabling bucket logging logs all the requests made on the bucket, which can be used for debugging and forensics.
It is recommended to enable logging on the buckets containing Operations Suite Logs. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable logging for a bucket:\n\nhttps://cloud.google.com/storage/docs/access-logs." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-replication-instance' AND json.rule = replicationInstanceStatus equals ""available"" and autoMinorVersionUpgrade is false```","AWS DMS replication instance automatic version upgrade disabled This policy identifies the AWS DMS (Database Migration Service) replication instances that do not have the auto minor version upgrade feature enabled. A replication instance in DMS is a compute resource used to replicate data between a source and target database during the migration or ongoing replication process. Failure to enable automatic minor upgrades can leave your database instances vulnerable to security risks stemming from outdated software. It is recommended to enable automatic minor version upgrades on DMS replication instances to receive timely patches and updates, reduce the risk of security vulnerabilities and improve overall performance and stability. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify an AWS DMS (Database Migration Service) Replication Instance's Automatic version upgrade using the AWS console, follow these steps:\n\n1. Sign in to the AWS Management Console.\n2. In the console, select the specific region from the region dropdown in the top right corner, for which the alert is generated.\n3. Go to the DMS console by either searching for 'DMS' in the AWS services search bar or navigating directly to the DMS service.\n4. From the navigation pane on the left, select 'Replication Instances' under the 'Migrate data' section.\n5. Select the replication instance that is reported and select 'Modify' from the 'Action' dropdown in the right corner.\n6. Under the 'Maintenance' section, choose the 'Yes' option for the 'Automatic version upgrade'.\n7. Under the 'When to apply the modifications' section, choose 'Apply immediately' or 'Apply during the next scheduled maintenance window' according to your business requirements.\n8. Click 'Save' to save the changes.." "```config from cloud.resource where api.name = 'aws-networkfirewall-firewall' AND json.rule = FirewallStatus.Status equals ""READY"" as X; config from cloud.resource where api.name = 'aws-network-firewall-logging-configuration' AND json.rule = LoggingConfiguration.LogDestinationConfigs[*].LogType does not exist as Y; filter '$.X.Firewall.FirewallArn equal ignore case $.Y.FirewallArn' ; show X;```","AWS Network Firewall is not configured with logging configuration This policy identifies an AWS Network Firewall where logging is not configured. AWS Network Firewall manages inbound and outbound traffic for the AWS resources within the AWS environment. Logging configuration for the network firewall involves enabling logging of network traffic, including allowed and denied requests, to provide visibility into network activity.
Failure to configure logging results in a lack of visibility into potential security threats, making it difficult to detect and respond to malicious activity effectively and hindering threat detection and compliance. It is recommended to enable logging to ensure comprehensive monitoring, threat detection, compliance adherence, and effective incident response. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update a firewall's logging configuration through the console, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to the VPC Dashboard\n4. In the navigation pane, Under 'Network Firewall', choose 'Firewalls'\n5. On the Firewalls page, select the reported firewall\n6. In the 'Firewall details' tab, under the 'Logging' section, click on 'Edit'\n7. Select the Log type as needed for your requirement. You can configure logging for alert and flow logs.\n\nAlert – Sends logs for traffic that matches any stateful rule whose action is set to Alert or Drop. For more information about stateful rules and rule groups, see Rule groups in AWS Network Firewall.\n\nFlow – Sends logs for all network traffic that the stateless engine forwards to the stateful rules engine.\n\n8. For each selected log type, choose the destination type, then provide the information for the logging destination that you prepared following the guidance in Firewall logging destinations.\n9. Choose 'Save' to save your changes and return to the firewall's detail page.." ```config from cloud.resource where api.name = 'aws-dynamodb-describe-table' AND json.rule = tableStatus equal ignore case ACTIVE and deletionProtectionEnabled is false```,"AWS DynamoDB table deletion protection is disabled This policy identifies AWS DynamoDB tables with deletion protection disabled. DynamoDB is a fully managed NoSQL database that provides a highly reliable, scalable, low-latency database solution for applications that require consistent, single-digit millisecond latency at any scale. Deletion protection feature allows authorised administrators to prevent accidental deletion of DynamoDB tables. Enabling deletion protection helps reduce the risk of data loss, maintain data integrity, ensure compliance, and protect DynamoDB tables across different environments. It is recommended to enable deletion protection on DynamoDB tables to prevent unintended data loss. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable deletion protection on a DynamoDB table, follow these steps:\n\n1. Sign into the AWS console and navigate to the DynamoDB console.\n2. In the navigation pane, under 'Tables', locate the table you want to enable deletion protection for and select it.\n3. In the table details page, under the 'Additional settings' tab, go to the 'Deletion protection' section and click on 'Turn on'.\n4. On the confirmation screen, click on 'Confirm'.." ```config from cloud.resource where api.name = 'aws-ec2-describe-network-interfaces' AND json.rule = association.allocationId exists```,"amtest-eni This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = 'networkRuleSet.defaultAction equals Allow'```,"Azure Storage Account default network access is set to 'Allow' This policy identifies Storage accounts which have default network access set to 'Allow'. Restricting default network access helps to provide a new layer of security, since storage accounts accept connections from clients on any network. To limit access to selected networks, the default action must be changed. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the default network access rule, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal#change-the-default-network-access-rule." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = imageTagMutability equal ignore case mutable```,"AWS ECR private repository tag mutable This policy identifies AWS ECR private repositories whose tag immutability is not configured. AWS Elastic Container Registry (ECR) tag immutability ensures that once an image is pushed to a repository with tag immutability enabled, the tag cannot be overwritten or updated. This feature is useful for ensuring the security, integrity, and reliability of container images in production environments. It prevents tags from being overwritten, which can help prevent unauthorised changes to images. It is recommended to enable tag immutability on ECR repositories to maintain the integrity and security of the images pushed. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable tag immutability for an ECR repository, follow the below steps:\n\n1. Log into the AWS console and navigate to the ECR dashboard.\n2. In the navigation pane, choose 'Repositories' under 'Private registry'.\n3. Select the repository you want to edit and choose 'Edit' from the 'Actions' dropdown.\n4. Set 'Tag immutability' to 'Enabled'.\n5. Choose 'Save'.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(management.autoRepair does not exist or management.autoRepair is false)] exists```,"GCP Kubernetes cluster node auto-repair configuration disabled This policy identifies GCP Kubernetes cluster nodes with auto-repair configuration disabled. GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. FMI: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Google cloud console\n2. Navigate to Google Kubernetes Engine, click on 'Clusters' to get the list\n3.
Click on the alerted cluster and go to section 'Node pools'\n4. Click on a node pool to ensure 'Auto repair' is enabled in the 'Management' section\n5. To modify, click on the 'Edit' button at the top\n6. To enable the configuration, click on the check box against 'Enable auto-repair'\n7. Click on 'Save'\n8. Repeat Steps 4-7 for each node pool associated with the reported cluster." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' as X; config from cloud.resource where api.name = 'azure-active-directory-user' as Y; filter '((_DateTime.ageInDays($.X.properties.updatedOn) < 80) and (($.X.properties.principalId contains $.Y.id)))'; show X; addcolumn properties.roleDefinition.properties.roleName```,"llatorre - RoleAssignment v1 This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Reach out to llatorre@paloaltonetworks.com." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-endpoint' AND json.rule = status equals active and (endpointType equals SOURCE and sslMode equals none and engineName is not member of (""s3"", ""azuredb"")) or (endpointType equals TARGET and sslMode equals none and engineName is not member of (""dynamodb"", ""kinesis"", ""neptune"", ""redshift"", ""s3"", ""elasticsearch"", ""kafka""))```","AWS Database Migration Service endpoint do not have SSL configured This policy identifies Database Migration Service (DMS) endpoints that are not configured with SSL to encrypt connections for source and target endpoints. It is recommended to use an SSL connection for source and target endpoints; enforcing SSL connections helps protect against 'man in the middle' attacks by encrypting the data stream between endpoint connections. NOTE: Not all databases use SSL in the same way. An Amazon Redshift endpoint already uses an SSL connection and does not require an SSL connection set up by AWS DMS. So there are some exclusions included in the policy RQL to report only those endpoints which can be configured using the DMS SSL feature. For more details: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the AWS DMS dashboard\n3. In the navigation pane, choose 'Endpoints'\n4. Select the reported DMS endpoint\n5. Under 'Actions', choose 'Modify'\n6. In the 'Endpoint configuration' section, from the 'Secure Socket Layer (SSL) mode' dropdown list, select a suitable SSL mode other than 'none' according to your requirement.\n7. Click on 'Save'\n\nNOTE: Before modifying the SSL setting, you should have the proper certificate you want to use for the SSL connection configured under the DMS 'Certificate' service.." "```config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = _AWSCloudAccount.orgHierarchyNames() intersects (""all-accounts"")```","jashah_ms_config This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A."
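For the AWS Database Migration Service SSL policy above, the same check can be run ad hoc with the AWS SDK. Below is a minimal sketch using boto3, assuming credentials and a default region are already configured; it is not the scanner behind the policy, and the function name is illustrative. It lists active endpoints whose SSL mode is 'none', applying the same engine exclusions as the RQL.

```python
# Minimal sketch (assumption: boto3 credentials/region already configured).
# Lists DMS endpoints that the SSL policy above would flag: active endpoints
# with SslMode 'none', excluding engines that do not use DMS-managed SSL.
import boto3

SOURCE_EXCLUSIONS = {"s3", "azuredb"}
TARGET_EXCLUSIONS = {"dynamodb", "kinesis", "neptune", "redshift",
                     "s3", "elasticsearch", "kafka"}

def unencrypted_dms_endpoints():
    dms = boto3.client("dms")
    flagged, marker = [], None
    while True:
        page = dms.describe_endpoints(**({"Marker": marker} if marker else {}))
        for ep in page.get("Endpoints", []):
            if ep.get("Status", "").lower() != "active":
                continue
            if ep.get("SslMode", "").lower() != "none":
                continue
            engine = ep.get("EngineName", "").lower()
            endpoint_type = ep.get("EndpointType", "").lower()  # 'source' or 'target'
            exclusions = SOURCE_EXCLUSIONS if endpoint_type == "source" else TARGET_EXCLUSIONS
            if engine not in exclusions:
                flagged.append(ep.get("EndpointIdentifier"))
        marker = page.get("Marker")
        if not marker:
            return flagged

if __name__ == "__main__":
    for identifier in unencrypted_dms_endpoints():
        print(identifier)
```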
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = status equal ignore case ""RUNNING"" and (machineType contains ""machineTypes/n2d-"" or machineType contains ""machineTypes/c2d-"" or machineType contains ""machineTypes/c3d-"" or machineType contains ""machineTypes/c3-standard-"")and (disks[*].guestOsFeatures[*].type contains ""SEV_CAPABLE"" or disks[*].guestOsFeatures[*].type contains ""SEV_LIVE_MIGRATABLE_V2"" or disks[*].guestOsFeatures[*].type contains ""SEV_SNP_CAPABLE"" or disks[*].guestOsFeatures[*].type contains ""TDX_CAPABLE"") and (confidentialInstanceConfig.enableConfidentialCompute does not exist or confidentialInstanceConfig.enableConfidentialCompute is false)```","GCP VM instance Confidential VM service disabled This policy identifies GCP VM instances that have Confidential VM service disabled. GCP VM encrypts data at rest and in transit, but the data must be decrypted before processing. Confidential VM service (Confidential Computing) allows GCP VM to keep in-memory data secure by utilizing hardware-based memory encryption. This protects any sensitive data leakage in case the VM is compromised. It is recommended to enable Confidential VM service on GCP VMs to enhance the confidentiality and integrity of in-memory data on the VMs. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Confidential VM services cannot be enabled for existing VM instances. A new VM should be created to enable confidential VM services on the instance.\n\nTo create a new VM instance with confidential VM services enabled, please refer to the steps below:\n1. Login to the GCP console\n2. Under 'Compute Engine', navigate to the 'VM instances' (Left Panel)\n3. Click on 'Create instance'\n4. Navigate to 'Security' section, Click Enable under 'Confidential VM service'.\n5. In the Enable Confidential Computing dialog, review the list of settings updated when you enable the service, and then click 'Enable'.\n6. Review other settings for the VM instance.\n7. Click 'Create'.\n\nNote: For the list of supported VM configurations for confidential VM services, please refer to the URL given below: https://cloud.google.com/confidential-computing/confidential-vm/docs/supported-configurations." "```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = state.code contains active and listeners[?any( protocol is member of (HTTP,TCP,UDP,TCP_UDP) and defaultActions[?any( redirectConfig.protocol contains HTTPS)] does not exist )] exists as X; config from cloud.resource where api.name = 'aws-elbv2-target-group' AND json.rule = targetType does not equal alb and protocol exists and protocol is not member of ('TLS', 'HTTPS') as Y; filter '$.X.listeners[?any( protocol equals HTTP or protocol equals UDP or protocol equals TCP_UDP )] exists or ( $.X.listeners[*].protocol equals TCP and $.X.listeners[*].defaultActions[*].targetGroupArn contains $.Y.targetGroupArn)'; show X;```","AWS Elastic Load Balancer v2 (ELBv2) with listener TLS/SSL is not configured This policy identifies AWS Elastic Load Balancers v2 (ELBv2) which have non-secure listeners. As Load Balancers will be handling all incoming requests and routing the traffic accordingly. The listeners on the load balancers should always receive traffic over secure channel with a valid SSL certificate configured. 
This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. On the Listeners tab, Click the 'Edit' button under the available listeners\n7. If the load balancer type is application, select the listener protocol as 'HTTPS (Secure HTTP)'; if the load balancer type is network, select the listener protocol as TLS\n8. Select appropriate 'Security policy' \n9. In the SSL Certificate column, click 'Change'\n10. On 'Select Certificate' popup dialog, Choose a certificate from ACM or IAM or upload a new certificate based on requirement and Click on 'Save'\n11. Back to the 'Edit listeners' dialog box, review the secure listeners configuration, then click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-containers-artifacts-kubernetes-cluster' AND json.rule = lifecycleState equal ignore case ACTIVE and options.admissionControllerOptions.isPodSecurityPolicyEnabled is false```,"OCI Kubernetes Engine Cluster pod security policy not enforced This policy identifies Kubernetes Engine Clusters that do not enforce a pod security policy. The Pod Security Policy defines a set of conditions that pods must meet to be accepted by the cluster; when a request to create or update a pod does not meet the conditions in the pod security policy, that request is rejected and an error is returned. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Pod Security Policies for Container Engine for Kubernetes, refer below URL:\nhttps://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengusingpspswithoke.htm\n\nNOTE: You must define pod security policies for the pod security policy admission controller to enforce when accepting pods into the cluster. If you do not define pod security policies, the pod security policy admission controller will prevent any pods being created in the cluster.." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action contains iam:CreatePolicyVersion or Action contains iam:SetDefaultPolicyVersion or Action contains iam:PassRole or Action contains iam:CreateAccessKey or Action contains iam:CreateLoginProfile or Action contains iam:UpdateLoginProfile or Action contains iam:AttachUserPolicy or Action contains iam:AttachGroupPolicy or Action contains iam:AttachRolePolicy or Action contains iam:PutUserPolicy or Action contains iam:PutGroupPolicy or Action contains iam:PutRolePolicy or Action contains iam:AddUserToGroup or Action contains iam:UpdateAssumeRolePolicy or Action contains iam:*))] exists```,"AWS IAM Policy permission may cause privilege escalation This policy identifies AWS IAM Policies which have permissions that may cause privilege escalation. An AWS IAM policy having weak permissions could be exploited by an attacker to elevate privileges. It is recommended to follow the principle of least privilege, ensuring that AWS IAM policy does not have these sensitive permissions.
This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Refer to the following URL to remove below listed weak permissions from reported AWS IAM Policies,\nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#remove-policies-console\n\nBelow are the permission which can lead to privilege escalation,\niam:CreatePolicyVersion\niam:SetDefaultPolicyVersion\niam:PassRole\niam:CreateAccessKey\niam:CreateLoginProfile\niam:UpdateLoginProfile\niam:AttachUserPolicy\niam:AttachGroupPolicy\niam:AttachRolePolicy\niam:PutUserPolicy\niam:PutGroupPolicy\niam:PutRolePolicy\niam:AddUserToGroup\niam:UpdateAssumeRolePolicy\niam:*." "```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(80,80) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on HTTP port (80) This policy identifies GCP Firewall rules which allow all inbound traffic on HTTP port (80). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the HTTP port (80) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-sql-instances-list' and json.rule = ""(settings.ipConfiguration.sslMode equal ignore case TRUSTED_CLIENT_CERTIFICATE_REQUIRED and _DateTime.ageInDays(serverCaCert.expirationTime) > -1) or settings.ipConfiguration.sslMode equal ignore case ALLOW_UNENCRYPTED_AND_ENCRYPTED""```","GCP SQL Instances do not have valid SSL configuration This policy identifies GCP SQL instances that either lack SSL configuration or have SSL certificates that have expired. If an SQL instance is not configured to use SSL, it may accept unencrypted and insecure connections, leading to potential risks such as data interception and authentication vulnerabilities. It is a best practice to enable SSL configuration to ensure data security and integrity when communicating with a GCP SQL instance. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure SQL instance with SSL configuration,follow the steps mentioned below:\n\n1. Log in to google cloud console\n2. Navigate to 'Cloud SQL Instances'\n3. Click on the alerted instance and navigate to 'Security' under 'Connections' tab\n4. Select one of the following under 'Manage SSL mode':\n i. Allow only SSL connections\n ii. 
Require trusted client certificates\n\nTo verify the validity of the current certificate, follow the steps mentioned below:\n\n1. Log in to google cloud console\n2. Navigate to 'Cloud SQL Instances'\n3. Click on the alerted instance and navigate to 'Security' under 'Connections' tab\n4. Verify the expiration date of your server certificate under 'Manage server CA certificates' table\n\nTo create a new client certificate, follow the URL mentioned: https://cloud.google.com/sql/docs/mysql/configure-ssl-instance#client-certs." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.creategroup and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deletegroup and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updategroup) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for IAM group changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for IAM group changes. Monitoring and alerting on changes to IAM group will help in identifying changes to satisfy the least privilege principle. It is recommended that an Event Rule and Notification be configured to catch changes made to IAM group. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Group – Create, Group – Delete and Group – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-postgresql-deployment-info' AND json.rule = deployment.enable_public_endpoints is true```,"IBM Cloud Database PostgreSQL is exposed to public The policy identifies IBM Cloud Database PostgreSQL instances exposed to the public via public endpoints. When provisioning an IBM Cloud database service, it is generally not recommended to use public endpoints because it can pose a security risk. Public endpoints can make your database accessible to anyone with internet access, potentially leaving your data vulnerable to unauthorized access or malicious attacks. Instead, it is recommended to use private endpoints when provisioning a database service in IBM Cloud. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: Refer to the IBM documentation to change the service endpoints from public to private\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-service-endpoints." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-services-list' AND json.rule = services[?any( name ends with ""/cloudasset.googleapis.com"" and state equals ""ENABLED"" )] does not exist```","GCP Cloud Asset Inventory is disabled This policy identifies GCP accounts where GCP Cloud Asset Inventory is disabled. GCP Cloud Asset Inventory is a metadata inventory service that allows you to view, monitor, and analyze Google Cloud and Anthos assets across projects and services. This data can prove to be crucial in security analysis, resource change tracking, and compliance auditing. It is recommended to enable GCP Cloud Asset Inventory for centralized visibility and control over your cloud assets. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Under 'APIs and Services', navigate to the 'API Library' (Left Panel)\n3. Search and select 'Cloud Asset API'\n4. Click 'ENABLE'.." ```config from cloud.resource where api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.policies.azureADAuthenticationAsArmPolicy.status contains enabled```,"Azure Container Registry with ARM audience token authentication enabled This policy identifies Azure Container Registries that permit ARM audience tokens for authentication. When ARM audience tokens are enabled, they allow authentication intended for broader Azure services, which could introduce potential security risks. Disabling ARM audience tokens ensures that only ACR-specific tokens are valid, enhancing security by limiting authentication exclusively to Azure Container Registry audience tokens. As a security best practice, it is recommended to disable ARM audience tokens for Azure Container Registries. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable ARM audience tokens for Azure Container Registries, refer to the following link:\nhttps://learn.microsoft.com/en-us/azure/container-registry/container-registry-disable-authentication-as-arm#assign-a-built-in-policy-definition-to-disable-arm-audience-token-authentication---azure-portal." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress')].sourceCidrIp contains 0.0.0.0/0""```","Alibaba Cloud Security group is overly permissive to all traffic This policy identifies Security groups that are overly permissive to all traffic. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. 
Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-cloudtrail-describe-trails' as X; count(X) less than 1 ```,"test_aggr_pk This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-flexible-server' AND json.rule = properties.state equal ignore case Ready and require_secure_transport.value does not equal ignore case on```,"Azure PostgreSQL flexible server secure transport parameter is disabled This policy identifies PostgreSQL flexible servers for which secure transport (SSL connectivity) parameter is disabled. Secure transport (SSL connectivity) helps to provide a new layer of security, by connecting server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. As a security best practice, it is recommended to enable secure transport parameter for Azure PostgreSQL flexible server. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'Azure Database for PostgreSQL flexible server'\n3. Click on the reported PostgreSQL flexible server\n4. Navigate to Settings -> Server parameters\n5. Search for parameter 'require_secure_transport' and set VALUE to 'ON' and You can also set min TLS version by setting 'ssl_min_protocol_version' server parameter as per your business requirement.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = 'properties.powerState.code equal ignore case Running and properties.agentPoolProfiles[?any(type equal ignore case AvailabilitySet and count less than 3)] exists'```,"Azure AKS cluster pool profile count contains less than 3 nodes This policy identifies AKS clusters that are configured with node pool profile less than 3 nodes. It is recommended to have at least 3 or more than 3 nodes in a node pool for a more resilient cluster. (Clusters smaller than 3 may experience downtime during upgrades.) This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To scale AKS cluster node pool nodes count, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/aks/scale-cluster?tabs=azure-cli." "```config from cloud.resource where cloud.service = 'AWS Auto Scaling' AND api.name = 'aws-describe-auto-scaling-groups' AND json.rule = createdTime does not contain ""foo""```","Automation Audit Log Cron BUVZK Policy Automation Audit Log Policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and listeners.* is not empty and listeners.*.sslConfiguration.certificateName is empty and listeners.*.protocol does not equal ignore case HTTP```,"OCI Load balancer listener is not configured with SSL certificate This policy identifies Load balancers for which the listener is not configured with an SSL certificate. Enforcing an SSL connection helps prevent unauthorized users from reading sensitive data that is intercepted as it travels through the network, between clients/applications and backend servers. It is recommended to implement SSL between the load balancer and your client, so that the load balancer can accept encrypted traffic from a client. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure SSL for your Load balancer listener, refer to the URLs below:\nFor adding certificate - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/create_certificate.htm\n\nFor editing listener - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managinglisteners_topic-Editing_Listeners.htm." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and (identity.type does not exist or identity.type equal ignore case None)```,"Azure Cognitive Services account is not configured with managed identity This policy identifies Azure Cognitive Services accounts that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to assign a managed identity to your Cognitive Services account. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Azure AI services'\n3. Click on the reported Azure AI service\n4. Select 'Identity' under 'Resource Management' from left panel\n5. Configure either System assigned or User assigned identity\n6. Click on Save." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = launchOptions.isPvEncryptionInTransitEnabled is false```,"OCI Compute Instance boot volume in-transit data encryption is disabled This policy identifies OCI Compute Instances whose boot or block volumes have in-transit data encryption disabled. It is recommended that Compute Instance boot or block volumes be configured with in-transit data encryption to minimize the risk of sensitive data being leaked. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. 
Click Edit\n5. Click on Show Advanced Options\n6. Select USE IN-TRANSIT ENCRYPTION\n7. Click Save Changes\n\nNote : To update the instance properties, the instance must be rebooted.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = '((_DateTime.ageInDays($.properties.updatedOn) < 60) and (properties.principalType contains User) and (properties.scope starts with""/subscriptions""))' addcolumn properties.roleDefinition.properties.roleName properties.roleDefinition.properties.type properties.principalId properties.updatedBy```","llatorre - RoleAssigment v4 This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Go to investigate and identify the user that was assigned this role:\nconfig from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = id contains \n\nGo to investigate and identify who assigned this role:\nconfig from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = id contains ." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or minimumPasswordLength < 16 or minimumPasswordLength does not exist'```,"AWS IAM password policy does not have a minimum of 16 characters This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(config.shieldedInstanceConfig.enableSecureBoot does not exist or config.shieldedInstanceConfig.enableSecureBoot is false)] exists```,"GCP Kubernetes cluster shielded GKE node with Secure Boot disabled This policy identifies GCP Kubernetes cluster shielded GKE nodes with Secure Boot disabled. An attacker may seek to alter boot components to persist malware or rootkits during system initialization. It is recommended to enable Secure Boot for Shielded GKE Nodes to verify the digital signature of node boot components. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Once a Node pool is provisioned, it cannot be updated to enable Secure Boot. You must create new Node pools within the cluster with Secure Boot enabled. You will also need to migrate workloads from existing non-conforming Node pools to the newly created Node pool, then delete the non-conforming pools.\n\nTo create a nodepool with Secure Boot enabled follow the below steps,\n\n1. Log in to gcloud console\n2. Navigate to service 'Kubernetes Engine'\n3. Select the alerted cluster and click 'ADD NODE POOL'\n4. Ensure that the 'Enable secure boot' checkbox is checked under the ‘Shielded options' in section 'Security'\n5. Click on 'CREATE'.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = iamConfiguration.publicAccessPrevention does not equal ignore case ""enforced"" and iam.bindings[*] size greater than 0 and iam.bindings[*].members[*] any equal allUsers```","GCP Storage buckets are publicly accessible to all users This policy identifies the buckets which are publicly accessible to all users. 
Enabling public access to Storage buckets enables anybody with a web association to access sensitive information that is critical to business. Access over a whole bucket is controlled by IAM. Access to individual objects within the bucket is controlled by its ACLs. This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: To remove public access from a bucket, either enable ""Public access prevention"" or edit/remove any permissions granted to 'allUsers' on a bucket.\n\nTo edit/remove permissions granted over the bucket, follow the instructions below:\n1. Login to GCP Portal\n2. Go to the Cloud Storage Buckets page.\n3. Go to Buckets\n4. Click on the Storage bucket for which alert has been generated\n5. Select the Permissions tab near the top of the page.\n6. Edit/remove any permissions granted to 'allUsers'\n \nTo prevent public access over the bucket, follow the instructions below:\n1. Login to GCP Portal\n2. Go to the Cloud Storage Buckets page.\n3. Go to Buckets\n4. Click on the Storage bucket for which alert has been generated\n5. Select the Permissions tab near the top of the page.\n6. In the Public access card, click ""Prevent public access"" to enforce public access prevention.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-iam-identity-account-setting' AND json.rule = restrict_create_platform_apikey does not equal ""RESTRICTED""```","IBM Cloud API key creation is not restricted in account settings This policy identifies IBM cloud accounts where API key creation is not restricted in account settings. By default, all members of an account can create API keys. Enabling API key creation will restrict the users from creating API keys unless correct access is granted explicitly. It is recommended to enable API key creation setting and grant access only on a need basis. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable the API key creation setting:\n\nhttps://cloud.ibm.com/docs/account?topic=account-allow-api-create&interface=ui#allow-all-api-create." ```config from cloud.resource where api.name = 'gcloud-iam-service-accounts-keys-list' as X; config from cloud.resource where api.name = 'gcloud-iam-service-accounts-list' as Y; filter '($.X.name does not contain prisma-cloud and $.X.name contains iam.gserviceaccount.com and $.X.name contains $.Y.email and $.X.keyType contains USER_MANAGED)' ; show X;```,"GCP User managed service accounts have user managed service account keys This policy identifies user managed service accounts that use user managed service account keys instead of Google-managed. For user-managed keys, the User has to take ownership of key management activities. Even after owner precaution, keys can be easily leaked by common development malpractices like checking keys into the source code or leaving them in downloads directory or accidentally leaving them on support blogs/channels. So It is recommended to limit the use of User-managed service account keys and instead use Google-managed keys which cannot be downloaded. Note: This policy might alert the service accounts which are not created using Terraform for cloud account onboarding. These alerts are valid as no user-managed service account should be used for cloud account onboarding. 
This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: Follow the below mentioned URL to delete user-managed service account keys:\n\nhttps://cloud.google.com/iam/docs/creating-managing-service-account-keys#deleting." ```config from cloud.resource where api.name = 'aws-elasticache-cache-clusters' as X; config from cloud.resource where api.name = 'aws-elasticache-describe-replication-groups' as Y; filter '$.Y.memberClusters contains $.X.cacheClusterId and $.X.cacheClusterStatus equals available and ($.X.cacheSubnetGroupName is empty or $.X.cacheSubnetGroupName does not exist)'; show Y;```,"AWS ElastiCache cluster not associated with VPC This policy identifies ElastiCache clusters that are not associated with a VPC. It is highly recommended to associate ElastiCache with a VPC, as it provides a virtual network in your own logically isolated area and features such as selecting an IP address range, creating subnets, and configuring route tables, network gateways, and security settings. NOTE: If you created your AWS account before 2013-12-04, you might have support for the EC2-Classic platform in some regions. AWS has deprecated the use of Amazon EC2-Classic for launching ElastiCache clusters. All current generation nodes are launched in Amazon Virtual Private Cloud only. So this policy only applies to legacy ElastiCache clusters that were created using EC2-Classic. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: AWS ElastiCache cluster VPC association can be done only at the time of the creation of the cluster. So to fix this alert, create a new cluster in a VPC, then migrate all required ElastiCache cluster data from the reported ElastiCache cluster to this newly created cluster and delete the reported ElastiCache cluster.\n\nTo create a new ElastiCache cluster in a VPC, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis or Memcached based on your requirement\n5. Choose cluster parameters as per your requirement\n6. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\n7. Select desired VPC for 'Subnet group' along with other parameters\nNOTE: If you don't specify a subnet when you launch a cluster, the cluster launches into your default Amazon VPC.\n8. Click on 'Create' button to launch your new ElastiCache cluster\n\nTo delete the reported ElastiCache cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Select reported cluster\n5. Click on 'Delete' button\n6. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.." 
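For the 'AWS ElastiCache cluster not associated with VPC' policy above, a minimal boto3 sketch that approximates the check by flagging available clusters without a cache subnet group (the region is an example value):

```python
import boto3

# Flag available ElastiCache clusters that have no cache subnet group,
# i.e. legacy (EC2-Classic) clusters not associated with a VPC.
client = boto3.client("elasticache", region_name="us-east-1")  # example region

paginator = client.get_paginator("describe_cache_clusters")
for page in paginator.paginate():
    for cluster in page["CacheClusters"]:
        if cluster.get("CacheClusterStatus") != "available":
            continue
        if not cluster.get("CacheSubnetGroupName"):
            print(f"Cluster {cluster['CacheClusterId']} is not associated with a VPC")
```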
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = (properties.publicNetworkAccess does not equal ignore case disabled and properties.networkAcls does not exist) or (properties.publicNetworkAccess does not equal ignore case disabled and properties.networkAcls.defaultAction equal ignore case allow ) ```,"Azure Key Vault Firewall is not enabled This policy identifies Azure Key Vault which has Firewall disabled. Enabling Azure Key Vault Firewall feature prevents unauthorised traffic from reaching your key vault. It is recommend to enable Azure Key Vault Firewall which provides additional layer of protection for your secrets. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Key vaults', and select the reported key vault from the list\n3. Under 'Settings' select 'Networking'\n4. In order to ""Allow public access from specific virtual networks and IP addresses"", Click on 'Allow public access from specific virtual networks and IP addresses' Under 'Firewalls and virtual networks'. Add 'IPv4 address or CIDR'.\n5. In order to disable public access, Click on 'Disable public access'.\n6. Click on 'Save'.." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains ""compute@developer.gserviceaccount.com"" and roles[*] contains ""roles/editor"" as X; config from cloud.resource where api.name = 'gcloud-cloud-run-revisions-list' AND json.rule = spec.serviceAccountName contains ""compute@developer.gserviceaccount.com"" as Y; filter ' $.X.user equals $.Y.spec.serviceAccountName '; show Y;```","GCP Cloud Run service revision is using default service account with editor role This policy identifies GCP Cloud Run service revisions that are utilizing the default service account with the editor role. GCP Compute Engine Default service account is automatically created upon enabling the Compute Engine API. This service account is granted the IAM basic Editor role by default, unless explicitly disabled. Assigning default service account with the editor role to cloud run revisions could lead to privilege escalation. Granting minimal access rights helps in promoting a better security posture. Following the principle of least privileges, it is recommended to avoid assigning default service account with the editor role to cloud run revision. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Changing a service account of an existing cloud run service revision is impossible. The service revision can be deleted and a new revision with appropriate permissions can be deployed.\n\nTo delete a cloud run service revision that is serving traffic, please refer to the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Cloud Run' service\n3. Click on the cloud run service on whose revision, alert is generated\n4. Go to the 'REVISIONS' tab\n5. Click on 'MANAGE TRAFFIC'\n6. Click on the delete icon in front of the alerting revision. Adjust traffic distribution appropriately.\n7. Click on 'Save'\n8. Under the 'REVISIONS' tab, click the actions button (three dots) in front of the alerting revision.\n9. Click 'Delete'\n10. 
Click 'DELETE'\n\nTo delete a cloud run service revision that is not serving any traffic, please refer to the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Cloud Run' service\n3. Click on the cloud run service on whose revision, alert is generated\n4. Go to the 'REVISIONS' tab\n5. Under the 'REVISIONS' tab, click the actions button (three dots) in front of the alerting revision.\n6. Click 'Delete'\n7. Click 'DELETE'." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains ""protoPayload.serviceName="" or $.X.filter contains ""protoPayload.serviceName ="") and ($.X.filter does not contain ""protoPayload.serviceName !="" and $.X.filter does not contain ""protoPayload.serviceName!="") and $.X.filter contains ""cloudresourcemanager.googleapis.com"" and ($.X.filter contains ""ProjectOwnership OR projectOwnerInvitee"" or $.X.filter contains ""ProjectOwnership or projectOwnerInvitee"") and ($.X.filter contains ""protoPayload.serviceData.policyDelta.bindingDeltas.action="" or $.X.filter contains ""protoPayload.serviceData.policyDelta.bindingDeltas.action ="") and ($.X.filter does not contain ""protoPayload.serviceData.policyDelta.bindingDeltas.action!="" and $.X.filter does not contain ""protoPayload.serviceData.policyDelta.bindingDeltas.action !="") and ($.X.filter contains ""protoPayload.serviceData.policyDelta.bindingDeltas.role="" or $.X.filter contains ""protoPayload.serviceData.policyDelta.bindingDeltas.role ="") and ($.X.filter does not contain ""protoPayload.serviceData.policyDelta.bindingDeltas.role!="" and $.X.filter does not contain ""protoPayload.serviceData.policyDelta.bindingDeltas.role !="") and $.X.filter contains ""REMOVE"" and $.X.filter contains ""ADD"" and $.X.filter contains ""roles/owner""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for Project Ownership assignments/changes This policy identifies the GCP account which does not have a log metric filter and alert for Project Ownership assignments/changes. Project Ownership Having highest level of privileges on a project, to avoid misuse of project resources project ownership assignment/change actions mentioned should be monitored and alerted to concerned recipients. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \n(protoPayload.serviceName=""cloudresourcemanager.googleapis.com"") AND (ProjectOwnership OR projectOwnerInvitee) OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=""REMOVE"" AND protoPayload.serviceData.policyDelta.bindingDeltas.role=""roles/owner"") OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=""ADD"" AND protoPayload.serviceData.policyDelta.bindingDeltas.role=""roles/owner"")\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. 
Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ""(((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and publicAccessBlockConfiguration.ignorePublicAcls is false) or (policyStatus.isPublic is true and publicAccessBlockConfiguration.restrictPublicBuckets is false)) and websiteConfiguration does not exist) and ((policy.Statement[*].Condition.Bool.aws:SecureTransport does not exist) or ((policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action contains s3: or policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action[*] contains s3:) and (policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE)))""```","pkodoth - AWS S3 bucket not configured with secure data transport policy This policy identifies S3 buckets which are not configured with secure data transport policy. AWS S3 buckets should enforce encryption of data over the network using Secure Sockets Layer (SSL). It is recommended to add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:*) from anybody who browses (Principal: *) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. Navigate to Amazon S3 Dashboard\n3. Click on 'Buckets' (Left Panel)\n4. Choose the reported S3 bucket\n5. On 'Permissions' tab, Click on 'Bucket Policy'\n6. Add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:) from anybody who browses (Principal: ) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). Below is the sample policy:\n{\n ""Sid"": ""ForceSSLOnlyAccess"",\n ""Effect"": ""Deny"",\n ""Principal"": ""*"",\n ""Action"": ""s3:GetObject"",\n ""Resource"": ""arn:aws:s3:::bucket_name/*"",\n ""Condition"": {\n ""Bool"": {\n ""aws:SecureTransport"": ""false""\n }\n }\n}." 
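The sample SSL-only bucket policy shown above can also be applied programmatically. A minimal boto3 sketch, assuming a placeholder bucket name and that there is no existing bucket policy to merge with:

```python
import json
import boto3

bucket_name = "example-bucket"  # placeholder bucket name

# Deny any S3 access that does not use HTTPS (aws:SecureTransport = false).
ssl_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ForceSSLOnlyAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3 = boto3.client("s3")
# NOTE: put_bucket_policy replaces the existing bucket policy; in practice,
# fetch the current policy with get_bucket_policy and merge statements first.
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(ssl_only_policy))
```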
"```config from cloud.resource where api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals ""ACTIVE"" and gceSetup.serviceAccounts[*].email contains ""compute@developer.gserviceaccount.com"" as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains ""compute@developer.gserviceaccount.com"" and roles[*] contains ""roles/editor"" as Y; filter ' $.X.gceSetup.serviceAccounts[*].email equals $.Y.user'; show X;```","GCP Vertex AI Workbench Instance is using default service account with the editor role This policy identifies GCP Vertex AI Workbench Instances that are using the default service account with the Editor role. The Compute Engine default service account is automatically created with an autogenerated name and email address when you enable the Compute Engine API. By default, this service account is granted the IAM basic Editor role unless you explicitly disable this behavior. If this service account is assigned to a Vertex AI Workbench instance, it may lead to potential privilege escalation. In line with the principle of least privilege, it is recommended that Vertex AI Workbench Instances are not assigned the 'Compute Engine default service account', particularly when the Editor role is granted to the service account. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the the VM has been stopped, click on the 'EDIT' button\n9. Under 'Identity and API access', use the dropdown to select a non-default service account as per needs\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.networkProfile.networkPlugin does not contain azure```,"Azure AKS cluster Azure CNI networking not enabled Azure CNI provides the following features over kubenet networking: - Every pod in the cluster is assigned an IP address in the virtual network. The pods can directly communicate with other pods in the cluster, and other nodes in the virtual network. - Pods in a subnet that have service endpoints enabled can securely connect to Azure services, such as Azure Storage and SQL DB. - You can create user-defined routes (UDR) to route traffic from pods to a Network Virtual Appliance. - Support for Network Policies securing communication between pods. This policy checks your AKS cluster for the Azure CNI network plugin and generates an alert if not found. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a new AKS cluster with the Azure CNI network plugin enabled, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/configure-azure-cni." 
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-key-protect-key' AND json.rule = 'extractable is false and state equals 1 and ((lastRotateDate does not exist and _DateTime.ageInDays(creationDate) > 90 ) or _DateTime.ageInDays(lastRotateDate) > 90)'```,"IBM Cloud Key Protect root key has aged more than 90 days without being rotated This policy identifies IBM Cloud Key Protect root keys that have aged more than 90 days without being rotated. Rotating keys on a regular basis is a security best practice, so that if the keys are compromised, the data in the underlying service is still protected by the new keys. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. Click on Menu Icon and navigate to 'Resource list'. From the list of resources, select your provisioned instance of Key Protect in which the reported root key resides.\n3. Select the key and click on three dots on the right corner of the row to open the list of options for the key that you want to rotate.\n4. Click on 'Rotate'.\n5. In the 'Rotation' window, click on 'Rotate Key' \n6. In order to set the rotation policy, Under 'Manage rotation policy', enable 'Rotation policy' checkbox and select the day intervals for the key rotation as per the requirement.\n7. Click on 'Set policy' button to establish this policy.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = '(acl[*].email exists and acl[*].email contains logging) and (versioning.enabled is false or versioning does not exist)'```,"GCP Storage log buckets have object versioning disabled This policy identifies Storage log buckets which have object versioning disabled. Enabling object versioning on storage log buckets will protect your cloud storage data from being overwritten or accidentally deleted. It is recommended to enable the object versioning feature on all storage buckets where sinks are configured. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable object versioning on a bucket:\n\nhttps://cloud.google.com/storage/docs/using-object-versioning#set." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-cache-clusters' AND json.rule = engine equals redis and transitEncryptionEnabled is false and replicationGroupId does not exist```,"AWS ElastiCache Redis with in-transit encryption disabled (Non-replication group) This policy identifies individual ElastiCache Redis clusters (those not part of a replication group) that have in-transit encryption disabled. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and cache servers. Enabling data encryption in-transit helps prevent unauthorized users from reading sensitive data between your Redis clusters and their associated cache storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS ElastiCache Redis in-transit encryption can be set only at the time of creation. 
So to resolve this alert, create a new cluster with in-transit encryption enabled, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with In-transit encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption in-transit' checkbox to enable encryption along with other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-service-accounts-keys-list' AND json.rule = 'name contains iam.gserviceaccount.com and (_DateTime.ageInDays($.validAfterTime) > -1) and keyType equals USER_MANAGED'```,"bboiko test 02 - policy This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case ""Ready"" and require_secure_transport.value equal ignore case ""OFF""```","Azure MySQL database flexible server SSL enforcement is disabled This policy identifies Azure MySQL database flexible servers for which the SSL enforcement is disabled. SSL connectivity helps to provide a new layer of security, by connecting database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable MySQL database flexible server SSL connection, refer below URL:\nhttps://docs.microsoft.com/en-us/azure/mysql/flexible-server/how-to-connect-tls-ssl." 
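For the 'AWS ElastiCache Redis with in-transit encryption disabled' policy above, the replacement cluster described in the console steps can also be created with boto3. A minimal sketch using create_replication_group; the identifiers, region, engine version, and sizing are placeholders to adjust to your environment:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # example region

# Create a replacement Redis deployment with encryption enabled from the start;
# all values below are placeholders.
elasticache.create_replication_group(
    ReplicationGroupId="my-encrypted-redis",
    ReplicationGroupDescription="Redis with in-transit encryption enabled",
    Engine="redis",
    EngineVersion="7.0",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=1,
    CacheSubnetGroupName="my-cache-subnet-group",
    TransitEncryptionEnabled=True,   # the setting this policy checks for
    AtRestEncryptionEnabled=True,
)
```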
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = settings.ipConfiguration.authorizedNetworks[?any(value contains 0.0.0.0/0 or value contains ::/0)] exists```,"GCP SQL instance configured with overly permissive authorized networks This policy identifies GCP Cloud SQL instances that are configured with overly permissive authorized networks. You can connect to the SQL instance securely by using the Cloud SQL Proxy or adding your client's public address as an authorized network. If your client application is connecting directly to a Cloud SQL instance on its public IP address, you have to add your client's external address as an Authorized network for securing the connection. It is recommended to add specific IPs instead of public IPs as authorized networks as per the requirement. Reference: https://cloud.google.com/sql/docs/mysql/authorize-networks This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Navigate to the 'Instances' page on section 'SQL'(Left Panel)\n3. Click on the alerted instance name \n4. Select the 'Connections' tab on the left panel\n5. Inspect for the networks added as Authorized Networks\n6. If any public IP is set for 'Authorized networks', review and delete the network by clicking the delete icon on the network\n7. Click on 'DONE'.\n8. Click on 'SAVE'.." ```config from cloud.resource where api.name = 'aws-securityhub-hub' AND json.rule = SubscribedAt exists```,"test This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = versioning.enabled is false or versioning does not exist```,"GCP Storage bucket with object versioning disabled This policy identifies GCP Storage buckets that have object versioning disabled. Object versioning is a method of keeping multiple variants of an object in the same storage bucket. Enabling object versioning on storage log buckets will protect your cloud storage data from being overwritten or accidentally deleted. It is recommended to enable the object versioning feature on all storage buckets. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate to the Cloud Storage Buckets page. Select 'Buckets' from the left panel\n3. Click on the reported bucket\n4. Go to the 'Protection' tab\n5. Under the 'Object versioning' section, select 'OBJECT VERSIONING OFF'\n6. In the 'Turn on object versioning' dialog, select the 'Add recommended lifecycle rules to manage version costs' checkbox if required.\n7. Click on 'CONFIRM'.." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-cloudtrail-describe-trails' AND json.rule='logFileValidationEnabled is false'```,"AWS CloudTrail log validation is not enabled in all regions This policy identifies AWS CloudTrails in which log validation is not enabled in all regions. CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. 
These digest files can be used to determine whether a log file was modified after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Access the 'CloudTrail' service.\n4. For each trail reported, under Configuration > Storage Location, make sure 'Enable log file validation' is set to 'Yes'.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = '(status equals RUNNING and name does not start with ""gke-"") and shieldedInstanceConfig exists and (shieldedInstanceConfig.enableVtpm is false or shieldedInstanceConfig.enableIntegrityMonitoring is false)'```","GCP VM instance with Shielded VM features disabled This policy identifies VM instances which have Shielded VM features disabled. Shielded VMs are virtual machines (VMs) on Google Cloud Platform hardened by a set of security controls that help defend against rootkits and bootkits. Shielded VM's verifiable integrity is achieved through the use of Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring. Shielded VM instances run firmware which is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot. NOTE: You can only enable Shielded VM options on instances that have Shielded VM support. This policy reports VM instances that have Shielded VM support and are disabled with the Shielded VM features. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate VM instances page\n3. STOP the reported VM instance before editing the instance\nNOTE: Before stoping the instance, Check the VM instance operational requirement.\n4. After the instance stops, click 'EDIT'\n5. In the Shielded VM section, select 'Turn on vTPM' and 'Turn on Integrity Monitoring'.\nOptionally, if you do not use any custom or unsigned drivers on the instance, also select 'Turn on Secure Boot'.\n6. Click on 'Save' and then START the instance.." 
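For the 'AWS CloudTrail log validation is not enabled in all regions' policy above, a minimal boto3 sketch that enables log file validation on any trail where it is currently disabled (the region is an example value):

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # example region

# Enable log file validation on every trail that currently has it disabled.
for trail in cloudtrail.describe_trails()["trailList"]:
    if not trail.get("LogFileValidationEnabled", False):
        cloudtrail.update_trail(Name=trail["TrailARN"], EnableLogFileValidation=True)
        print(f"Enabled log file validation on {trail['Name']}")
```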
"```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""databases-for-mysql"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resourceGroupId"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Databases for MySQL service This policy identifies IBM Cloud users with administrator role permission for Databases for MySQL service. A user has full platform control as an administrator, including the ability to assign other users access policies and modify deployment passwords. If a user with administrator privilege becomes compromised, it may result in a compromised database. As a security best practice, it is advised to provide the least privilege access, such as allowing only the rights necessary to complete a task, instead of excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and check the 'Access policies' section> Click on three dots on the right corner of a row for the policy which is having Administrator permission on 'Databases for MySQL' service\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-functions-applications' AND json.rule = lifecycleState equal ignore case ACTIVE and (networkSecurityGroupIds does not exist or networkSecurityGroupIds[*] is empty)```,"OCI Function Application is not configured with Network Security Groups This policy identifies Function Applications that are not configured with Network Security Groups. OCI Function Applications allow you to execute code in response to events without provisioning or managing infrastructure. When these function applications are not configured with NSGs, they are more vulnerable to unauthorized access and potential security breaches. NSGs help isolate and protect your functions by ensuring that only trusted sources can communicate with them. As a best practice, it is recommended to restrict access to the application traffic by configuring network security groups. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To configure Network Security Group for your function application, refer below URL:\nhttps://docs.oracle.com/en-us/iaas/Content/Functions/Tasks/functionsusingnsgs.htm\nNOTE: Before you update Function Application with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirement.." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = (displayName contains ""Default Security List for"") and (ingressSecurityRules[?any((source equals 0.0.0.0/0) and (((*.destinationPortRange.min == 22 or *.destinationPortRange.max == 22) or (*.destinationPortRange.min < 22 and *.destinationPortRange.max > 22)) or (protocol equals ""all"") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))))] exists)```","OCI Default Security List of every VCN allows all traffic on SSH port (22) This policy identifies OCI Default Security lists associated with every VCN that allow unrestricted ingress access to port 22. It is recommended that no security group allows unrestricted ingress access to port 22. As a best practice, remove unfettered connectivity to remote console services, such as Secure Shell (SSH), to reduce server's exposure to risk. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Ingress Rules.\n5. If you want to add a rule, click Add Ingress Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and networkSecurityGroupIds[*] does not exist```,"OCI Load balancer not configured with Network Security Groups This policy identifies Load balancers that are not configured with Network Security Groups. Without Network Security Groups, load balancers may be exposed to unwanted traffic, increasing the risk of security breaches and unauthorized access. NSGs allow administrators to define security rules that specify the types of traffic allowed to flow in and out of the load balancer, enhancing overall network security. As a best practice, it is recommended to restrict access to the load balancer by configuring network security groups. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Networking -> Load Balancers\n3. Click on the reported load balancer\nNOTE: Before you update load balancer with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirements. \n4. On the 'Load Balancer Details' page, click on the 'Edit' button next to 'Network Security Groups' to make the changes.\n5. 
On the 'Edit Network Security Groups' dialog, select the restrictive Network Security Group and click on the 'Save Changes' button.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-file-storage-export' AND json.rule = exportOptions[?any( identitySquash equals ROOT and (anonymousGid does not equal 65534 or anonymousUid does not equal 65534))] exists```,"OCI File Storage File System access is not restricted to root users This policy identifies the OCI File Storage File Systems that allow unrestricted access to root users. It is recommended that File Storage File Systems should limit root users access by restricting the privileges, to increase the security of File Systems. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on the export path reported in the alert\n5. Click on Edit NFS Export Options\n6. Update the NFS Export Options where Squash is set Root and update Squash UID and Squash GID to 65534." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-key-protect-key' AND json.rule = extractable is false and state equals 1 and ((policy[*].rotation exists and policy[*].rotation.enabled is false ) or policy[*].rotation does not exist)```,"IBM Cloud Key Protect root key automatic key rotation is not enabled This policy identifies IBM Cloud Key Protect root keys that are not enabled with automatic key rotation. As a security best practice, it is important to rotate the keys periodically. So that if the keys are compromised, the data in the underlying service is still secure with the new keys. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. Click on Menu Icon and navigate to 'Resource list', From the list of resources, under security section, select your provisioned instance of Key Protect, in which the reported root key resides.\n3. Select the key and click on the three dots on the right corner of the row to open the list of options for the key for which you want to set the rotation policy.\n4. Click on 'Rotate'\n5. In order to set the rotation policy, Under 'Manage rotation policy' section, enable the 'Rotation policy' checkbox and select the day intervals for the key rotation as per the requirement.\n6. Click on the 'Set policy' button to establish this policy.." 
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and ($.X.filterPattern contains ""eventSource="" or $.X.filterPattern contains ""eventSource ="") and ($.X.filterPattern does not contain ""eventSource!="" and $.X.filterPattern does not contain ""eventSource !="") and $.X.filterPattern contains organizations.amazonaws.com and $.X.filterPattern contains AcceptHandshake and $.X.filterPattern contains AttachPolicy and $.X.filterPattern contains CreateAccount and $.X.filterPattern contains CreateOrganizationalUnit and $.X.filterPattern contains CreatePolicy and $.X.filterPattern contains DeclineHandshake and $.X.filterPattern contains DeleteOrganization and $.X.filterPattern contains DeleteOrganizationalUnit and $.X.filterPattern contains DeletePolicy and $.X.filterPattern contains DetachPolicy and $.X.filterPattern contains DisablePolicyType and $.X.filterPattern contains EnablePolicyType and $.X.filterPattern contains InviteAccountToOrganization and $.X.filterPattern contains LeaveOrganization and $.X.filterPattern contains MoveAccount and $.X.filterPattern contains RemoveAccountFromOrganization and $.X.filterPattern contains UpdatePolicy and $.X.filterPattern contains UpdateOrganizationalUnit) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for AWS Organization changes This policy identifies the AWS regions that do not have a log metric filter and alarm for AWS Organizations changes. Monitoring changes to AWS Organizations will help to ensure any unwanted, accidental, or intentional modifications that may lead to unauthorized access or other security breaches within the AWS account. It is recommended that a metric filter and alarm be established for detecting changes to AWS Organization's configurations. NOTE: This policy will trigger an alert if you have at least one Cloudtrail with the multi trial enabled, Logs all management events in your account, and is not set with a specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. 
In the 'Define Pattern' page, add the 'Filter pattern' value as\n{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = ""AcceptHandshake"") || ($.eventName = ""AttachPolicy"") || ($.eventName = ""CreateAccount"") || ($.eventName = ""CreateOrganizationalUnit"") || ($.eventName = ""CreatePolicy"") || ($.eventName = ""DeclineHandshake"") || ($.eventName = ""DeleteOrganization"") || ($.eventName = ""DeleteOrganizationalUnit"") || ($.eventName = ""DeletePolicy"") || ($.eventName = ""DetachPolicy"") || ($.eventName = ""DisablePolicyType"") || ($.eventName = ""EnablePolicyType"") || ($.eventName = ""InviteAccountToOrganization"") || ($.eventName = ""LeaveOrganization"") || ($.eventName = ""MoveAccount"") || ($.eventName = ""RemoveAccountFromOrganization"") || ($.eventName = ""UpdatePolicy"") || ($.eventName = ""UpdateOrganizationalUnit"")) }\nand Click on 'NEXT'.\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-list' AND json.rule = 'autoCreateSubnetworks does not exist'```,"GCP project is configured with legacy network This policy identifies the projects which have configured with legacy networks. Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. Subnetworks cannot be created in a legacy network. Legacy networks can have an impact on high network traffic projects and subject to the single point of failure. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: For each Google Cloud Platform project,\nFollow the documentation and delete the reported network which is in the legacy mode:\nhttps://cloud.google.com/vpc/docs/using-legacy." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' AND json.rule = (restrictions.browserKeyRestrictions does not exist and restrictions.serverKeyRestrictions does not exist and restrictions.androidKeyRestrictions does not exist and restrictions.iosKeyRestrictions does not exist) or (restrictions.browserKeyRestrictions exists and (restrictions.browserKeyRestrictions[?any(allowedReferrers[*] equals ""*"")] exists or restrictions.browserKeyRestrictions[?any(allowedReferrers[*] equals ""*.[TLD]"")] exists or restrictions.browserKeyRestrictions[?any(allowedReferrers[*] equals ""*.[TLD]/*"")] exists)) or (restrictions.serverKeyRestrictions exists and (restrictions.serverKeyRestrictions[?any(allowedIps[*] equals 0.0.0.0)] exists or restrictions.serverKeyRestrictions[?any(allowedIps[*] equals 0.0.0.0/0)] exists or restrictions.serverKeyRestrictions[?any(allowedIps[*] equals ::/0)] exists or restrictions.serverKeyRestrictions[?any(allowedIps[*] equals ::0)] exists))```","GCP API key not restricted to use by specified Hosts and Apps This policy identifies GCP API key not restricted to use by specified Hosts and Apps. Unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. 
It is recommended to restrict API key usage to trusted hosts, HTTP referrers and apps. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Google Cloud console\n2. Navigate to 'Credentials', Under service 'APIs & Services' (Left Panel)\n3. In the section 'API Keys', Click on the reported 'API Key Name'\n4. In the 'Key restrictions' section, set the application restrictions to any of HTTP referrers, IP Addresses, Android Apps, iOS Apps.\n5. Click 'SAVE'.\nNote: Do not set 'HTTP referrers' to wild-cards (* or *.[TLD] or *.[TLD]/*). \nDo not set 'IP addresses' restriction to any overly permissive IP (0.0.0.0 or 0.0.0.0/0 or ::0 or ::/0)." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-db-cluster' AND json.rule = 'storageEncrypted is false'```,"AWS RDS DB cluster encryption is disabled This policy identifies RDS DB clusters for which encryption is disabled. Amazon Aurora encrypted DB clusters provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon Aurora encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for data-at-rest encryption. NOTE: This policy is applicable only for Aurora DB clusters. https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS DB clusters can be encrypted only while creating the database cluster. You can't convert an unencrypted DB cluster to an encrypted one. However, you can restore an unencrypted Aurora DB cluster snapshot to an encrypted Aurora DB cluster. To do this, specify a KMS encryption key when you restore from the unencrypted DB cluster snapshot.\n\nFor AWS RDS,\n1. To create a 'Snapshot' of the unencrypted DB cluster, follow the instructions in the link below:\nRDS Link: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CreateSnapshotCluster.html\n\nNOTE: As you can't restore from a DB cluster snapshot to an existing DB cluster, a new DB cluster is created when you restore. Once the Snapshot status is 'Available', delete the unencrypted DB cluster before restoring from the DB cluster Snapshot by following the below steps for AWS RDS,\na. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/\nb. In the navigation pane, choose 'Databases'.\nc. In the list of DB instances, choose a writer instance for the DB cluster.\nd. Choose 'Actions', and then choose 'Delete'.\n\n2. To restore the Cluster from a DB Cluster Snapshot, follow the instructions in the link below:\nRDS Link: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_RestoreFromSnapshot.html\n\nFor AWS Document DB,\n1. To create a 'Snapshot' of the unencrypted DB cluster, follow the instructions in the link below:\nDocument DB Link: https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-create_manual_cluster_snapshot.html\n\nNOTE: As you can't restore from a DB cluster snapshot to an existing DB cluster, a new DB cluster is created when you restore. 
Once the Snapshot status is 'Available', delete the unencrypted DB cluster before restoring from the DB cluster Snapshot by following the below steps for AWS Document DB, \n a. Sign in to the AWS Management Console and open the Amazon DocumentDB console at https://console.aws.amazon.com/docdb/\n b. In the navigation pane, choose 'Clusters'.\n c. Select the cluster that needs to be deleted from the list\n d. Choose 'Actions', and then choose 'Delete'.\n\n2. To restore the Cluster from a DB Cluster Snapshot, follow the instructions in the link below:\nDocument DB Link: https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-restore_from_snapshot.html." "```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' and json.rule = groupName contains ""ahazra"" ```","Demo AWS Security Group overly permissive to all traffic This policy identifies Security groups that are overly permissive to all traffic. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict traffic solely from known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. If the reported Security Group does indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0.." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-sql-server-list' AND json.rule = sqlEncryptionProtectors[*].kind != azurekeyvault and sqlEncryptionProtectors[*].properties.serverKeyType != AzureKeyVault and sqlEncryptionProtectors[*].properties.uri !exists```,"Azure SQL server TDE protector is not encrypted with BYOK (Use your own key) This policy identifies Bring Your Own Key(BYOK) support for Transparent Data Encryption(TDE) in SQL server. The data encryption key(DEK) can be protected with an asymmetric key that is stored in the Key Vault, which allows user control of TDE encryption keys and restricts who can access them and when. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'SQL servers' dashboard, and select the SQL server instance you want to modify\n3. In the left navigation, select 'Transparent data encryption'\n4. Select Customer-managed key > Select a key > Change key\n- In Key vault, select an existing key vault or create new key vault\n- In Key, select an existing key or create a new key\n- In Version, select an existing version or create new version\nOR\nSelect Customer-managed key > Enter a key identifier\n- In Key identifier add key vault URI, if URI is already noted.\n5. Click on 'Save'."
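For the 'AWS RDS DB cluster encryption is disabled' remediation above, the snapshot-and-restore flow can also be scripted. A minimal boto3 sketch, assuming hypothetical identifiers (my-aurora-cluster, my-cluster-snapshot), a placeholder KMS key ARN, and the aurora-mysql engine; adjust these to match the reported cluster before running.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Snapshot the existing unencrypted Aurora cluster and wait for it to finish.
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="my-cluster-snapshot",  # hypothetical snapshot name
    DBClusterIdentifier="my-aurora-cluster",            # hypothetical source cluster
)
rds.get_waiter("db_cluster_snapshot_available").wait(
    DBClusterSnapshotIdentifier="my-cluster-snapshot"
)

# 2. Restore the snapshot into a new cluster, passing a KMS key so the new
#    cluster's storage is encrypted at rest.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-aurora-cluster-encrypted",
    SnapshotIdentifier="my-cluster-snapshot",
    Engine="aurora-mysql",  # must match the source cluster's engine
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder
)
```

DB instances still need to be added to the restored cluster, and the unencrypted cluster should only be deleted after the encrypted copy is verified.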
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals does not equal Microsoft.Network/publicIPAddresses/write and properties.condition.allOf[?(@.field=='category')].['equals'] contains Administrative"" as X; count(X) less than 1```","Azure Activity Log alert for Create or Update Public IP does not exist This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'azure-spring-cloud-service' AND json.rule = properties.powerState equals Running and sku.tier does not equal Basic as X; config from cloud.resource where api.name = 'azure-spring-cloud-app' AND json.rule = properties.provisioningState equals Succeeded and properties.enableEndToEndTLS is false as Y; filter '$.X.name equals $.Y.serviceName'; show Y;```,"Azure Spring Cloud app end-to-end TLS is disabled This policy identifies Azure Spring Cloud apps in which end-to-end TLS is disabled. Enabling end-to-end TLS/SSL will secure traffic from ingress controller to apps. After you enable end-to-end TLS and load a cert from the key vault, all communications within Azure Spring Cloud are secured with TLS. As a security best practice, it is recommended to have an end-to-end TLS to secure Spring Cloud apps traffic. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Azure Spring Cloud dashboard\n3. Choose Azure Spring Cloud service for which Azure Spring Cloud app is reported\n4. Under the 'Settings', click on 'Apps'\n5. Click on reported Azure Spring Cloud app\n6. Under the 'Settings', click on 'Ingress-to-app TLS'\n7. Set 'Yes' to 'Ingress-to-app TLS'." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(25,25)""```","Alibaba Cloud Security group allow internet traffic to SMTP port (25) This policy identifies Security groups that allow inbound traffic on SMTP port (25) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 25, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'." 
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = 'scheduling.preemptible equals true and (status equals RUNNING and name does not start with ""gke-"")'```","GCP VM Instances enabled with Pre-Emptible termination Checks to verify if any VM instance is initiated with the flag 'Pre-Emptible termination' set to True. Setting this instance to True implies that this VM instance will shut down within 24 hours or can also be terminated by a Service Engine when high demand is encountered. While this might save costs, it can also lead to unexpected loss of service when the VM instance is terminated. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Once a VM instance is started with Pre-Emptible set to Yes, it cannot be changed. If this instance with Pre-Emptible set is a critical resource, then spin up a new VM instance with necessary services, processes, and updates so that there will be no interruption of services.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/securityRules/delete"" as X; count(X) less than 1```","Azure Activity log alert for Delete network security group rule does not exist This policy identifies the Azure accounts in which activity log alert for Delete network security group rule does not exist. Creating an activity log alert for Delete network security group rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Security Rule (Microsoft.Network/networkSecurityGroups/securityRules)' and Other fields you can set based on your custom settings.\n6. Click on Create." ```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and ((access_key_1_last_used_date != N/A and _DateTime.ageInDays(access_key_1_last_used_date) > 45) or (access_key_1_last_used_date == N/A and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 45))) or (access_key_2_active is true and ((access_key_2_last_used_date != N/A and _DateTime.ageInDays(access_key_2_last_used_date) > 45) or (access_key_2_last_used_date == N/A and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 45)))'```,"AWS access keys not used for more than 45 days This policy identifies IAM users for which access keys are not used for more than 45 days. Access keys allow users programmatic access to resources. 
However, if any access key has not been used in the past 45 days, then that access key needs to be deleted (even though the access key is inactive) This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: To delete the reported AWS User access key follow below mentioned URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/delete-access-key/." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-database-maria-db-server' AND json.rule = properties.userVisibleState equals Ready and properties.sslEnforcement equals Disabled```,"Azure MariaDB database server with SSL connection disabled This policy identifies MariaDB database servers for which SSL enforce status is disabled. Azure Database for MariaDB supports connecting your Azure Database for MariaDB server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. It is recommended to enforce SSL for accessing your database server. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure SSL connection on an existing Azure Database for MariaDB, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/mariadb/howto-configure-ssl." "```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(80,80) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","IR-test-GCP Firewall rule allows all traffic on HTTP port (80) Test GCP policy to check cli remediation / can be deleted. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-cache-clusters' AND json.rule = engine equals memcached and transitEncryptionEnabled is false```,"AWS ElastiCache Memcached cluster with in-transit encryption disabled This policy identifies AWS ElastiCache Memcached clusters that have in-transit encryption disabled. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and cache servers. Enabling data encryption in-transit helps to prevent unauthorized users from reading sensitive data between your Memcached and their associated cache storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS ElastiCache Memcached in-transit encryption can be set, only at the time of creation. 
So to resolve this alert, create a new cluster with in-transit encryption enabled, then migrate all required ElastiCache Memcached cluster data from the reported ElastiCache Memcached cluster to this newly created cluster and delete reported ElastiCache Memcached cluster.\n\nTo create new ElastiCache Memcached cluster with In-transit encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache\n4. Click on 'Memcached clusters' under 'Resources'\n5. Click on 'Create Memcached clusters' button\n6. On the 'Cluster settings' page,\na. Enter a name for the new cache cluster\nb. Select Memcached engine version from 'Engine version' dropdown list\nNote: As of September 2022,In-transit encryption can be enabled only for AWS ElastiCache clusters with Memcached engine version 1.6.12 or later\nc. Enter the 'Subnet group settings' and click on 'Next'\nd. Under 'Security', Select 'Enable' checkbox under 'Encryption in transit'\ne. Fill in other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Memcached cluster\n\nTo delete reported ElastiCache Memcached cluster follow below given URL:\nhttps://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/GettingStarted.DeleteCacheCluster.html." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.powerState.code equal ignore case Running and properties.addonProfiles.azureKeyvaultSecretsProvider.enabled is false```,"Azure AKS cluster is not configured with disk encryption set This policy identifies AKS clusters that are not configured with disk encryption set. Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of an Azure key vault as a secrets store with an Azure Kubernetes Service (AKS) cluster via a CSI volume. It is recommended to enable secret store CSI driver for your Kubernetes clusters. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Kubernetes services dashboard\n3. Click on the reported Kubernetes cluster\n4. Under Setting section, Click on 'Cluster configuration'\n5. Select 'Enable secret store CSI driver'\nNOTE: Once the CSI driver is enabled, Azure will deploy additional pods onto the cluster. You'll still need to configure Azure Key Vault, define secrets to securely fetch, and redeploy the application to use these secrets.\nFor more details: https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/demos/standard-walkthrough/\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isLowercaseCharactersRequired isFalse'```,"OCI IAM password policy for local (non-federated) users does not have a lowercase character This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a lowercase character in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. 
Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 LOWERCASE CHARACTER.\nNote: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-block-storage-volume' AND json.rule = volumeBackupPolicyAssignment[*] size equals 0 and volumeGroupId equal ignore case ""null""```","OCI Block Storage Block Volume does not have backup enabled This policy identifies the OCI Block Storage Volumes that do not have backup enabled. It is recommended to have block volume backup policies on each block volume so that the block volume can be restored during data loss events. Note: This Policy is not applicable for block volumes that are added to volume groups. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Edit button\n5. Select the Backup Policy from the Backup Policies section as appropriate\n6. Click Save Changes." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles[*] contains ""roles/editor"" or roles[*] contains ""roles/owner"" as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and name does not start with ""gke-"" as Y; filter '$.Y.serviceAccounts[*].email contains $.X.user'; show Y;```","GCP VM instance has risky basic role assigned This policy identifies GCP VM instances configured with the risky basic role. Basic roles are highly permissive roles that existed prior to the introduction of IAM and grant wide access over the project to the grantee. To reduce the blast radius and defend against privilege escalations if the VM is compromised, it is recommended to follow the principle of least privilege and avoid use of basic roles. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: It is recommended to follow the principle of least privilege when granting access.\n\nTo create a new instance with the desired service account, please refer to the URL given below:\nhttps://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#using\n\nTo update the service account assigned to the VM, please refer to the URL given below:\nhttps://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes\n\nTo update privileges granted to a service account, please refer to the URL given below:\nhttps://cloud.google.com/iam/docs/granting-changing-revoking-access."
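For the 'GCP VM instance has risky basic role assigned' entry above, swapping the VM's service account for a least-privilege one can be done through the Compute Engine API. A rough sketch using the google-api-python-client library with Application Default Credentials; the project, zone, instance, and service account names are hypothetical.

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses Application Default Credentials

project, zone, instance = "my-project", "us-central1-a", "my-vm"  # hypothetical

# The instance must be stopped before its service account can be changed;
# wait for the stop operation to finish before issuing the next call.
compute.instances().stop(project=project, zone=zone, instance=instance).execute()

# Attach a dedicated service account that carries granular IAM roles instead of
# one bound to basic roles such as roles/editor or roles/owner.
compute.instances().setServiceAccount(
    project=project,
    zone=zone,
    instance=instance,
    body={
        "email": "limited-sa@my-project.iam.gserviceaccount.com",  # hypothetical SA
        "scopes": ["https://www.googleapis.com/auth/cloud-platform"],
    },
).execute()
```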
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(1434,1434) or destinationPortRanges[*] contains _Port.inRange(1434,1434) ))] exists```","Azure Network Security Group allows all traffic on SQL Server (UDP Port 1434) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on SQL Server (UDP Port 1434). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SQL Server solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-run-services-list' AND json.rule = ""status.conditions[?any(type equals Ready and status equals True)] exists and status.conditions[?any(type equals RoutesReady and status equals True)] exists and ['metadata'].['annotations'].['run.googleapis.com/ingress'] equals all""```","GCP Cloud Run service with overly permissive ingress rule This policy identifies GCP Cloud Run services configured with overly permissive ingress rules. It is recommended to restrict the traffic from the internet and other resources by allowing traffic to enter through load balancers or internal traffic for better network-based access control. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to service 'Cloud Run'\n3. Click on the alerted service, go to tab 'TRIGGERS'\n4. Under section 'Ingress', select a ingress type other than 'Allow all traffic'\n5. Click on 'SAVE'." 
"```config from cloud.resource where api.name = 'aws-connect-instance' AND json.rule = InstanceStatus equals ""ACTIVE"" and storageConfig[?any( resourceType is member of ('CHAT_TRANSCRIPTS','CALL_RECORDINGS','SCREEN_RECORDINGS') and storageConfigs[*] exists )] exists as X; config from cloud.resource where api.name='aws-s3api-get-bucket-acl' AND json.rule = ""((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))))"" as Y; filter ' $.X.storageConfig[*].storageConfigs[*].S3Config.BucketName intersects $.Y.bucketName' ; show Y;```","AWS Connect instance using publicly accessible S3 bucket This policy identifies the S3 bucket used by AWS Connect instances for storing CHAT_TRANSCRIPTS, CALL_RECORDINGS, and SCREEN_RECORDINGS, which are publicly accessible. The S3 bucket containing CHAT_TRANSCRIPTS, CALL_RECORDINGS, or SCREEN_RECORDINGS being publicly accessible is significant, as it exposes sensitive customer data and internal data to the public. It is recommended to secure the identified S3 buckets by enforcing stricter access controls and eliminating public read permissions for the reported S3 bucket used for AWS Connect instances. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the publicly accessible setting of a bucket, perform the following actions:\n1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\n a. Under 'Access Control List', Click on 'Everyone' and uncheck all items\n b. Under 'Access Control List', Click on 'Authenticated users group' and uncheck all items\n c. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\n a. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\n b. If 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\n c. Click on Save changes\nNote: Ensure updating the 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." 
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 443 or fromPort == 443) or (toPort > 443 and fromPort < 443)))] exists)```,"Allowing all to HTTPS This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = resourceRecordSet[?any( resourceRecords[*].value contains s3-website or aliasTarget.dnsname contains s3-website )] exists as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter 'not($.X.resourceRecordSet[*].name contains $.Y.bucketName)'; show X;```,"AWS Route53 Hosted Zone having dangling DNS record with subdomain takeover risk This policy identifies AWS Route53 Hosted Zones which have dangling DNS records with subdomain takeover risk. A Route53 Hosted Zone having a CNAME entry pointing to a non-existing S3 bucket will have a risk of these dangling domain entries being taken over by an attacker by creating a similar S3 bucket in any AWS account which the attacker owns / controls. Attackers can use this domain to do phishing attacks, spread malware and other illegal activities. As a best practice, it is recommended to delete dangling DNS records entry from your AWS Route 53 hosted zones. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-mysql-deployment-info' AND json.rule = deployment.enable_public_endpoints is true```,"IBM Cloud Database MySQL is exposed to public The policy identifies IBM Cloud Database MySQL instances exposed to the public via public endpoints. When provisioning an IBM Cloud database service, it is generally not recommended to use public endpoints because it can pose a security risk. Public endpoints can make your database accessible to anyone with internet access, potentially leaving your data vulnerable to unauthorized access or malicious attacks. Instead, it is recommended to use private endpoints when provisioning a database service in IBM Cloud. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Refer to the IBM documentation to change the service endpoints from public to private\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-service-endpoints." "```config from cloud.resource where cloud.type = 'aws' and api.name='aws-rds-describe-db-snapshots' AND json.rule=""attributes[?(@.attributeName=='restore')]‌‌.attributeValues[*] contains all""```","AWS RDS snapshots are accessible to public This policy identifies AWS RDS snapshots which are accessible to public. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to setup and manage databases. If RDS snapshots are inadvertently shared to public, any unauthorized user with AWS console access can gain access to the snapshots and gain access to sensitive data. This is applicable to aws cloud and is considered a medium severity issue. 
Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the 'RDS' service.\n4. For the RDS instance reported in the alert, change 'Publicly Accessible' setting to 'No'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = notebookInstanceStatus equals InService and rootAccess equals Enabled and notebookInstanceLifecycleConfigName does not exist```,"AWS SageMaker notebook instance with root access enabled This policy identifies the SageMaker notebook instances which are enabled with root access. Root access means having administrator privileges, users with root access can access and edit all files on the compute instance, including system-critical files. Removing root access prevents notebook users from deleting system-level software, installing new software, and modifying essential environment components. NOTE: Lifecycle configurations need root access to be able to set up a notebook instance. Because of this, lifecycle configurations associated with a notebook instance always run with root access even if you disable root access for users. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-root-access.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances (Left panel)\n4. Click on the reported SageMaker notebook instance\nNote: To update root access for SageMaker notebook instances; Instances need to be stopped. So stop running instance before editing.\n5. In the 'Notebook instance settings' section, click on 'Edit'\n6. On the Edit notebook instance page, within the 'Permissions and encryption' section,\nFrom the 'Root access - optional' options, select 'Disable - Don't give users root access to the notebook'\n7. Click on the 'Update notebook instance'." ```config from cloud.resource where Resource.status = Active AND api.name = 'aws-application-autoscaling-scaling-policy' as Y; config from cloud.resource where api.name = 'aws-dynamodb-describe-table' AND json.rule = tableStatus equal ignore case ACTIVE AND billingModeSummary.billingMode does not equal PAY_PER_REQUEST as X; filter 'not($.Y.ResourceName equals $.X.tableName)'; show X;```,"AWS DynamoDB table Auto Scaling not enabled This policy identifies AWS DynamoDB tables with auto-scaling disabled. DynamoDB is a fully managed NoSQL database that provides a highly reliable, scalable, low-latency database solution for applications that require consistent, single-digit millisecond latency at any scale. Auto-scaling functionality allows you to dynamically alter the allocated throughput capacity for your DynamoDB tables based on current traffic patterns. This feature employs the Application Auto Scaling service to automatically boost provisioned read and write capacity to manage unexpected traffic increases and reduce throughput when the workload falls in order to avoid paying for wasted supplied capacity. 
It is recommended to enable auto-scaling for the DynamoDB table to ensure efficient resource utilisation, cost optimisation, improved performance, simplified management, and scalability. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable auto-scaling for a DynamoDB table through the AWS Management Console, follow these steps:\n\n1. Sign into the AWS console. Navigate to the DynamoDB console.\n2. In the navigation pane, choose 'Tables'.\n3. Select the table you want to enable auto-scaling for.\n4. Choose the 'Additional settings' tab.\n5. In the 'Read/write capacity' section, choose 'Edit'.\n6. In the 'Capacity mode' section, choose 'Provisioned'.\n7. For 'Table capacity', set 'Auto scaling' to 'On' for read capacity, write capacity, or both.\n8. Set the minimum and maximum capacity units, and the target utilization percentage for read and write capacity.\n9. Choose 'Save changes'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.Owner))] exists```,"AWS SNS topic with cross-account access This policy identifies AWS SNS topics that are configured with cross-account access. Allowing unknown cross-account access to your SNS topics will enable other accounts to gain control over your AWS SNS topics. To prevent unknown cross-account access, allow only trusted entities to access your Amazon SNS topics by implementing the appropriate SNS policies. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['CROSS_ACCOUNT_TRUST']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. In the Access Policy section, verify all ARN values in 'Principal' elements are from trusted entities; If not, remove those ARNs from the entry.\n9. Click on 'Save changes'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='cloudsql.enable_pgaudit')] does not exist or settings.databaseFlags[?(@.name=='cloudsql.enable_pgaudit')].value does not equal on)""```","GCP PostgreSQL instance database flag cloudsql.enable_pgaudit is not set to on This policy identifies PostgreSQL database instances in which database flag cloudsql.enable_pgaudit is not set to on. Enabling the flag cloudsql.enable_pgaudit enables logging by the pgAudit extension for the database (if installed). The pgAudit extension for PostgreSQL databases provides detailed session and object logging to comply with government, financial, & ISO standards and provides auditing capabilities to mitigate threats by monitoring security events on the instance. 
Any changes to the database logging configuration should be made in accordance with the organization's logging policy. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is recommended to set the 'cloudsql.enable_pgaudit' flag to 'on' for PostgreSQL database.\n\nTo update the flag of GCP PostgreSQL instance, please refer to the URL given below and set cloudsql.enable_pgaudit flag to on:\nhttps://cloud.google.com/sql/docs/postgres/flags#set_a_database_flag." ```config from cloud.resource where api.name = 'azure-key-vault-list' AND json.rule = 'properties.enableSoftDelete does not exist or properties.enablePurgeProtection does not exist'```,"Azure Key Vault is not recoverable The key vault contains object keys, secrets and certificates. Accidental unavailability of a key vault can cause immediate data loss or loss of security functions (authentication, validation, verification, non-repudiation, etc.) supported by the key vault objects. It is recommended the key vault be made recoverable by enabling the ""Do Not Purge"" and ""Soft Delete"" functions. This is in order to prevent loss of encrypted data including storage accounts, SQL databases, and/or dependent services provided by key vault objects (Keys, Secrets, Certificates) etc., as may happen in the case of accidental deletion by a user or from disruptive activity by a malicious user. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Azure Portal\nAzure Portal does not have provision to update the respective configurations\n\nAzure CLI 2.0\naz resource update --id /subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups//providers/Microsoft.KeyVault/vaults/ --set properties.enablePurgeProtection=true properties.enableSoftDelete=true." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","RLP-83104 - Copy of Critical of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. 
These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace if ACLs and the Bucket policy are not handled properly; with this configuration, you may be at risk of compromising critical data by leaving the S3 bucket public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." "```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = enaSupport is true and clientToken contains ""foo"" ```","ajay ec2 describe This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = domainProcessingStatus equal ignore case active and (logPublishingOptions does not exist or logPublishingOptions.ES_APPLICATION_LOGS.enabled is false)```,"AWS Opensearch domain Error logging disabled This policy identifies AWS Opensearch domains with no error logging configuration. Opensearch application logs contain information about errors and warnings raised during the operation of the service and can be useful for troubleshooting. Error logs from domains can aid in security assessments, access monitoring, and troubleshooting availability problems. It is recommended to enable error logs for the AWS Opensearch domain, which will help in security audits and troubleshooting. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the AWS Opensearch domain with error logs:\n\n1. Sign into the AWS console and navigate to the Opensearch Service Dashboard\n2. In the navigation pane, under 'Managed Clusters', select 'Domains'\n3. Choose the reported Elasticsearch domain\n4. On the Logs tab, select 'Error logs' and choose 'Enable'.\n5. In the 'Set up error logs' section, in the 'Select log group from CloudWatch logs' setting, Create/Use existing CloudWatch Logs log group as per your requirement\n6. In 'Specify CloudWatch access policy', create new/Select an existing policy as per your requirement\n7. Click on 'Enable'."
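For the 'AWS Opensearch domain Error logging disabled' entry above, the same change can be applied with boto3, assuming a CloudWatch Logs log group already exists and its resource policy allows the OpenSearch service to write to it. The domain name and log group ARN are placeholders.

```python
import boto3

opensearch = boto3.client("opensearch", region_name="us-east-1")

# Publish application (error) logs to an existing CloudWatch Logs log group.
opensearch.update_domain_config(
    DomainName="my-domain",  # placeholder for the reported domain
    LogPublishingOptions={
        "ES_APPLICATION_LOGS": {
            "CloudWatchLogsLogGroupArn": (
                "arn:aws:logs:us-east-1:111122223333:"
                "log-group:/aws/OpenSearchService/domains/my-domain/error-logs"
            ),
            "Enabled": True,
        }
    },
)
```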
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-macie2-session' AND json.rule = status equals ""ENABLED"" as X; count(X) less than 1```","AWS Macie is not enabled This policy identifies the AWS Macie that is not enabled in specific regions. AWS Macie is a data security service that automatically discovers, classifies, and protects sensitive data in AWS, enhancing security and compliance posture. Failure to activate AWS Macie increases the risk of potentially missing out on automated detection and protection of sensitive data, leaving your organization more vulnerable to data breaches and compliance violations. It is recommended to enable Macie in all regions for comprehensive adherence to security and compliance requirements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Macie in the specific region,\n\n1. Log in to your AWS Management Console.\n2. By using the AWS Region selector in the upper-right corner of the page, select the Region which is reported.\n3. In the AWS Management Console, search for ""Macie"" in the services search bar or locate it under the ""Security, Identity, & Compliance"" category.\n4. On the Amazon Macie page, choose Get started.\n5. Choose Enable Macie.\n\nTo re-enable macie after suspended in the region,\n\n1. Log in to your AWS Management Console.\n2. By using the AWS Region selector in the upper-right corner of the page, select the Region which is reported.\n3. In the AWS Management Console, search for ""Macie"" in the services search bar or locate it under the ""Security, Identity, & Compliance"" category.\n4. In the Macie dashboard, navigate to the 'settings' section.\n5. Click on the 'Re-enable Macie' button under the 'Suspend Macie' section.\n\nAfter enabling Macie, you can further configure policies, alerts, and other settings according to your organization's security and compliance needs.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'restrictions.geoRestriction.restrictionType contains none'```,"AWS CloudFront web distribution with geo restriction disabled This policy identifies CloudFront web distributions which have geo restriction feature disabled. Geo Restriction has the ability to block IP addresses based on Geo IP by allowlist or denylist a country in order to allow or restrict users in specific locations from accessing web application content. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to CloudFront Distributions Dashboard\n4. Click on the reported distribution\n5. On 'Restrictions' tab, Click on the 'Edit' button\n6. On 'Edit Geo-Restrictions' page, Set 'Enable Geo-Restriction' to 'Yes' and allowlist/denylist countries as per your requirement.\n7. Click on 'Yes, Edit'." 
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals RUNNABLE and databaseVersion contains MYSQL and (settings.databaseFlags[*].name does not contain skip_show_database or settings.databaseFlags[?any(name contains skip_show_database and value does not contain on)] exists)```,"GCP MySQL instance database flag skip_show_database is not set to on This policy identifies Mysql database instances in which database flag skip_show_database is not set to on. This prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege. This can improve security if you have concerns about users being able to see databases belonging to other users. It is recommended to set skip_show_database to on. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported MYSQL instance\n4. Click on 'EDIT'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'skip_show_database' from the drop-down menu and set the value as 'on'\nOR\nIf the flag has been set to off, Under 'Customize your instance', In 'Flags' section choose the flag 'skip_show_database' and set the value as 'on', Click on DONE\n6. Click on 'DONE' and then 'SAVE' and if popup window appears, select 'SAVE AND RESTART'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = 'roles[*] contains roles/cloudkms.admin and roles[*] contains roles/cloudkms.crypto'```,"GCP IAM user have overly permissive Cloud KMS roles This policy identifies IAM users who have overly permissive Cloud KMS roles. Built-in/Predefined IAM role Cloud KMS Admin allows the user to create, delete, and manage service accounts. Built-in/Predefined IAM role Cloud KMS CryptoKey Encrypter/Decrypter allows the user to encrypt and decrypt data at rest using the encryption keys. It is recommended to follow the principle of 'Separation of Duties' ensuring that one individual does not have all the necessary permissions to be able to complete a malicious action. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & Admin (Left Panel)\n3. Select IAM\n4. From the list of users, choose the reported IAM user\n5. Click on Edit permissions pencil icon\n6. For member having 'Cloud KMS Admin' and any of the 'Cloud KMS CryptoKey Encrypter/Decrypter', 'Cloud KMS CryptoKey Encrypter', 'Cloud KMS CryptoKey Decrypter' or any CryptoKey roles granted/assigned, Click on the Delete Bin icon to remove the role from a member." ```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```,"xnbnuowcaz_ui_auto_policies_tests_name kszyashfvs_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
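For the 'GCP MySQL instance database flag skip_show_database' entry above, the flag can also be set through the Cloud SQL Admin API. A sketch using google-api-python-client with Application Default Credentials; the project and instance names are hypothetical, and because patching settings.databaseFlags replaces the whole flag list, existing flags are read first.

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")  # Cloud SQL Admin API

project, instance = "my-project", "my-mysql-instance"  # hypothetical

# Read current settings so existing flags are preserved when we patch.
current = sqladmin.instances().get(project=project, instance=instance).execute()
flags = [f for f in current["settings"].get("databaseFlags", [])
         if f["name"] != "skip_show_database"]
flags.append({"name": "skip_show_database", "value": "on"})

# Apply the updated flag list; the instance may restart to pick up the change.
sqladmin.instances().patch(
    project=project,
    instance=instance,
    body={"settings": {"databaseFlags": flags}},
).execute()
```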
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-postgresql-deployment-info' AND json.rule = allowedListIPAddresses[*] size equals 0 or allowedListIPAddresses[?any( address equals 0.0.0.0/0 )] exists```,"IBM Cloud PostgreSQL Database network access is not restricted to a specific IP range This policy identifies IBM Cloud PostgreSQL Databases with no specified IP range for network access. To restrict access to your databases, you can allowlist specific IP addresses or ranges of IP addresses on your deployment. When no IP addresses are in the allowlist, the allowlist is disabled and the deployment accepts connections from any IP address. It is recommended to create an allowlist, only IP addresses that match the allowlist or are in the range of IP addresses in the allowlist can connect to your deployment. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list', from the list of resources select PostgreSQL database reported in the alert.\n3. Refer below URL for setting allowlist IP addresses : \nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-allowlisting&interface=ui#set-allowlist-ui\n4. Please remove IP address starting with '0.0.0.0' if any added already in the allow list and make sure to add IP address other than '0.0.0.0'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = ['attributes'].['deletion_protection.enabled'] contains false```,"AWS Elastic Load Balancer v2 (ELBv2) with deletion protection disabled This policy identifies Elastic Load Balancers v2 (ELBv2), which are configured with the deletion protection feature disabled. AWS Elastic Load Balancer automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, to improve the availability and fault tolerance of applications. To prevent your load balancer from being deleted accidentally, you can enable deletion protection. It is recommended to enable deletion protection on AWS Elastic load balancers to protect them from being deleted accidentally. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable deletion protection on load balancer:\n\n1. Log in to the AWS console. Navigate to EC2 dashboard\n2. Select 'Load Balancers'\n3. Click on the reported Load Balancer\n4. On the 'Attributes' tab, choose 'Edit'\n5. On the Edit load balancer attributes page, select 'Enable' for 'Delete Protection'\n6. Click on 'Save' to save your changes.." 
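For the 'AWS Elastic Load Balancer v2 (ELBv2) with deletion protection disabled' entry above, the attribute can be flipped with a single boto3 call; the load balancer ARN below is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Enable deletion protection on the reported load balancer.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"  # placeholder ARN
    ),
    Attributes=[{"Key": "deletion_protection.enabled", "Value": "true"}],
)
```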
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ""(((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and publicAccessBlockConfiguration.ignorePublicAcls is false) or (policyStatus.isPublic is true and publicAccessBlockConfiguration.restrictPublicBuckets is false)) and websiteConfiguration does not exist) and ((policy.Statement[*].Condition.Bool.aws:SecureTransport does not exist) or ((policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action contains s3: or policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action[*] contains s3:) and (policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE)))""```","AWS S3 bucket not configured with secure data transport policy This policy identifies S3 buckets which are not configured with secure data transport policy. AWS S3 buckets should enforce encryption of data over the network using Secure Sockets Layer (SSL). It is recommended to add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:*) from anybody who browses (Principal: *) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. Navigate to Amazon S3 Dashboard\n3. Click on 'Buckets' (Left Panel)\n4. Choose the reported S3 bucket\n5. On 'Permissions' tab, Click on 'Bucket Policy'\n6. Add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:) from anybody who browses (Principal: ) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). Below is the sample policy:\n{\n ""Sid"": ""ForceSSLOnlyAccess"",\n ""Effect"": ""Deny"",\n ""Principal"": ""*"",\n ""Action"": ""s3:GetObject"",\n ""Resource"": ""arn:aws:s3:::bucket_name/*"",\n ""Condition"": {\n ""Bool"": {\n ""aws:SecureTransport"": ""false""\n }\n }\n}." 
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any((Condition.ForAnyValue:IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0 or Condition.ForAnyValue:IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action contains *)] exists```,"AWS IAM policy is overly permissive to all traffic via condition clause This policy identifies IAM policies that have a policy that is overly permissive to all traffic via condition clause. If any IAM policy statement with a condition containing 0.0.0.0/0 or ::/0, it allows all traffic to resources attached to that IAM policy. It is highly recommended to have the least privileged IAM policy to protect the data leakage and unauthorized access. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the IAM dashboard\n3. Click on 'Policies' in left hand panel\n4. Search for the Policy for which the alert is generated and click on it.\n5. Under the Permissions tab, click on Edit policy\n6. Under the Visual editor, click to expand and perform following;\na. Click to expand 'Request conditions'\nb. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dns-managed-zone' AND json.rule = '(dnssecConfig.state does not exist or dnssecConfig.state equals off) and visibility equals public'```,"GCP Cloud DNS has DNSSEC disabled This policy identifies GCP Cloud DNS which has DNSSEC disabled. Domain Name System Security Extensions (DNSSEC) adds security to the Domain Name System (DNS) protocol by enabling DNS responses to be validated. Attackers can hijack the process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records. As a result, it prevents attackers from issuing fake DNS responses that may misdirect browsers to fake websites. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP portal \n2. Go to Network services\n3. Choose Cloud DNS\n4. Click on reported Cloud DNS / Zone name\n5. Under 'DNSSEC' column choose 'On' from the drop-down." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (logPublishingOptions does not exist or logPublishingOptions.INDEX_SLOW_LOGS.enabled is false or logPublishingOptions.INDEX_SLOW_LOGS.cloudWatchLogsLogGroupArn is empty)'```,"AWS Elasticsearch domain has Index slow logs set to disabled This policy identifies Elasticsearch domains for which Index slow logs is disabled in your AWS account. Enabling support for publishing indexing slow logs to AWS CloudWatch Logs enables you have full insight into the performance of indexing operations performed on your Elasticsearch clusters. 
This will help you in identifying performance issues caused by specific queries or due to changes in cluster usage, so that you can optimize your index configuration to address the problem. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Elasticsearch Service Dashboard\n4. Choose reported Elasticsearch domain\n5. Select the 'Logs' tab\n6. In 'Set up Index slow logs' section,\n a. click on 'Setup'\n b. In 'Select CloudWatch Logs log group' setting, Create/Use existing CloudWatch Logs log group as per your requirement\n c. In 'Specify CloudWatch access policy', Create new/Select an existing policy as per your requirement\n d. Click on 'Enable'\n\nThe Index slow logs setting 'Status' should change now to 'Enabled'.." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals username_password and (expiration_date does not exist or (_DateTime.ageInDays(expiration_date) > -1))'```,"IBM Cloud Secrets Manager has expired user credentials This policy identifies IBM Cloud Secrets Manager user credential which is expired. User credentials should be rotated to ensure that data cannot be accessed with an old secret which might have been lost, cracked, or stolen. It is recommended that all user credentials are set with expiration date and expired secrets should be regularly rotated. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: If the IBM Cloud Secrets Manager user credentials secret is expired, secret needs to be deleted.\nPlease use below URL as reference:\nhttps://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-delete-secrets&interface=ui#delete-secret-ui\n\nIf the IBM Cloud Secrets Manager user credentials is about to be expired, secret has to be rotated.\nPlease use below URL as reference:\nhttps://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-manual-rotation&interface=ui#manual-rotate-user-credentials-ui\n\nPlease make sure to set an expiration date for each secret. Please follow the below steps to set an expiration date:\n1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list', from the list of resources select secret manager instance in which reported secret resides, under security section.\n3. Select the secret.\n4. Under 'Expiration date' section, provide expiration date as required.\n5. Click on 'Update'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and firewallRules[?any(startIpAddress equals ""0.0.0.0"" and endIpAddress equals ""255.255.255.255"")] exists```","Azure SQL Servers Firewall rule allow access to all IPV4 address This policy identifies Azure SQL Servers which has Firewall rule that allow access to all IPV4 address. Having a firewall rule with start IP being 0.0.0.0 and end IP being 255.255.255.255 would allow access to SQL server from any host on the internet. It is highly recommended not to use this type of firewall rule in any SQL servers. 
This is applicable to azure cloud and is considered a critical severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the 'SQL servers' dashboard\n4. Click on the reported SQL server\n5. Click on 'Networking' under Security\n6. In 'Public access' tab, Under Firewall rules, Delete the rule which has 'Start IP' as 0.0.0.0 and 'End IP' as 255.255.255.255\n7. Click on 'Save'." "```config from cloud.resource where api.name = 'gcp-compute-disk-list' AND json.rule = status equals READY and name does not start with ""gke-"" and diskEncryptionKey.sha256 does not exist```","GCP VM disks not encrypted with Customer-Supplied Encryption Keys (CSEK) This policy identifies VM disks which are not encrypted with Customer-Supplied Encryption Keys (CSEK). If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. It is recommended to use VM disks encrypted with CSEK for business-critical VM instances. Limitation: This policy might give false negatives in case VM disks are created with name prefix 'gke-'. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Currently, we can not update the encryption of an existing disk. So to fix this alert, Create a new VM disk with Encryption set to Customer supplied, migrate all required data from reported VM disk to newly created disk and delete the reported VM disk.\n\n1. Login to GCP Portal\n2. Go to Compute Engine\n3. Go to Disks\n4. Click on Create a disk\n5. Specify other disk parameters as you desire\n6. Set Encryption to Customer-supplied key\n7. Provide the Key in the box\n8. Select Wrapped key\n9. Click on Create." ```config from cloud.resource where api.name = 'oci-database-autonomous-database' AND json.rule = lifecycleState equal ignore case AVAILABLE and dataSafeStatus does not equal ignore case REGISTERED```,"OCI Autonomous Database not registered in Data Safe This policy identifies Oracle Autonomous Databases that are not registered in Oracle Data Safe. Oracle Data Safe is a fully-integrated cloud service that focuses on the security of your data, providing comprehensive features for protecting sensitive and regulated information in Oracle databases. Through the Security Center, you can access functionalities such as user and security assessments, data discovery, data masking, activity auditing, and alerts. As best practice, it is recommended to register the Autonomous Database in Data Safe. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Autonomous Database with datasafe, refer to the following documentation:\nhttps://docs.oracle.com/en/cloud/paas/data-safe/admds/register-autonomous-database.html#GUID-19A85842-A81C-4F40-A1EE-13C40EA845F0\nor\nhttps://docs.oracle.com/en-us/iaas/tools/oci-cli/3.43.2/oci_cli_docs/cmdref/db/autonomous-database/data-safe/register.html." 
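For the Azure SQL Server firewall policy above, the following sketch uses the azure-mgmt-sql SDK to surface rules spanning all IPv4 addresses; the subscription ID, resource group, and server name are placeholders.
```
# Minimal sketch (assumption): finding Azure SQL Server firewall rules that
# span 0.0.0.0-255.255.255.255 with the azure-mgmt-sql SDK. The subscription
# ID, resource group, and server name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

for rule in client.firewall_rules.list_by_server("example-rg", "example-sqlserver"):
    if rule.start_ip_address == "0.0.0.0" and rule.end_ip_address == "255.255.255.255":
        print(f"Overly permissive rule: {rule.name}")
        # Uncomment to remove the rule after confirming it is not needed:
        # client.firewall_rules.delete("example-rg", "example-sqlserver", rule.name)
```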
```config from cloud.resource where api.name = 'aws-iam-list-groups' as X; config from cloud.resource where api.name = 'aws-iam-list-users' as Y; filter ' not ($.Y.groupList[*] intersects $.X.groupName)'; show X;```,"AWS IAM group not in use This policy identifies AWS IAM groups that are not actively in use. An AWS IAM group is a collection of IAM users managed together, allowing for unified permission assignment. These groups, if not assigned any users, pose a potential security risk if left unmanaged and can inadvertently grant unauthorized access to AWS services and resources. It is recommended to review and remove any unused IAM groups to prevent attaching unauthorized IAM users. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete an IAM user group (console)\n\n1. Sign in to the AWS Management Console\n2. Navigate to the 'Services' menu and, within the 'Security, Identity, & Compliance' category, choose the 'IAM' service to open the IAM console\n3. In the IAM console's navigation pane, select 'User groups' located under the 'Access management' section\n4. In the list of user groups, select the check box next to the name of the reported user group to delete. You can use the search box to filter the list of user group names.\n5. Choose 'Delete' to delete the group\n6. In the confirmation box, if you want to delete the user group, type 'delete' and choose 'Delete'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = '($.acl[*].email exists and $.acl[*].email contains logging) and ($.acl[*].entity contains allUsers or $.acl[*].entity contains allAuthenticatedUsers)'```,"GCP Storage Buckets with publicly accessible GCP logs This policy checks to ensure that Stackdriver logs stored in Storage Buckets are not publicly accessible. Giving public access to Stackdriver logs will enable anyone with a web connection to retrieve sensitive information that is critical to business. Stackdriver Logging enables you to store, search, investigate, monitor, and alert on log information/events from Google Cloud Platform. The permission needs to be set only for authorized users. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set an ACL, please refer to the URL given below. Make sure that no ACL is set to allow 'allUsers' or 'allAuthenticatedUsers' for the reported bucket.\nhttps://cloud.google.com/storage/docs/access-control/create-manage-lists#set-an-acl." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.systemConfigurationsMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with ""ASC Default""))'```","Azure Microsoft Defender for Cloud security configurations monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have security configurations monitoring set to disabled. Security configurations monitoring enables the daily analysis of operating system configurations. 
The rules for hardening the operating system like firewall rules, password and audit policies are reviewed. Recommendations are made for setting the right level of security controls. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Vulnerabilities in security configuration on your machines should be remediated' to 'AuditIfNotExists'\n9. If no other changes are required, then click on 'Review + save'." "```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = clientToken is not empty AND monitoring.state contains ""running""```","Venu Test This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-fsx-file-system' AND json.rule = FileSystemType equals ""OPENZFS"" and Lifecycle equals ""AVAILABLE"" and (OpenZFSConfiguration.CopyTagsToBackups is false or OpenZFSConfiguration.CopyTagsToVolumes is false )```","AWS FSx for OpenZFS file systems not configured to copy tags to backups or volumes This policy identifies AWS FSx for OpenZFS file systems that are not configured to copy tags to backups or volumes. AWS FSx for OpenZFS is a managed service for deploying and scaling OpenZFS file systems on AWS. Tags make resource identification and management easier, ensuring consistent security policies across file systems. Without copying tags to backups and volumes in AWS FSx for OpenZFS, enforcing consistent access control and tracking sensitive data in these resources becomes challenging. It is recommended to configure the FSx for OpenZFS file system to copy tags to backups and volumes. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure an AWS FSx for OpenZFS file system to copy tags to backups and volumes, perform the following actions:\n\n1. Sign in to your AWS account and open the Amazon FSx console.\n2. In the left navigation pane, choose 'File systems', and then choose the FSx for OpenZFS file system that is reported.\n3. For 'Actions', choose 'Update tags preferences'. The Update tags preferences dialog box displays.\n4. For 'Copy tags to backups', select 'Enabled' to copy tags from the file system to any backup that's taken.\n5. For 'Copy tags to volumes', select 'Enabled' to copy tags from the file system to any volume that you create.\n6. Choose Update to update the file system with your changes.." 
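The FSx for OpenZFS tag-copy remediation above can also be done via the AWS SDK; below is a minimal boto3 sketch with placeholder region and file system ID.
```
# Minimal sketch (assumption): enabling tag copying for an FSx for OpenZFS
# file system with boto3. The region and file system ID are placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    OpenZFSConfiguration={
        "CopyTagsToBackups": True,
        "CopyTagsToVolumes": True,
    },
)
```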
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(53,53) or destinationPortRanges[*] contains _Port.inRange(53,53) ))] exists```","Azure Network Security Group allows all traffic on NetBIOS DNS (UDP Port 53) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on DNS UDP port 53. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict DNS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-iam-identity-account-setting' AND json.rule = mfa equal ignore case ""NONE""```","IBM Cloud Multi-Factor Authentication (MFA) not enabled at the account level This policy identifies IBM Cloud accounts where Multi-Factor Authentication (MFA) is not enabled at the account level. MFA adds an extra layer of protection on top of your user name and password and helps protect accounts from stolen, phished, or weak password exploits. Enabling IBM MFA at the account level is the recommended approach to protect users. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable IBM MFA:\n\nhttps://cloud.ibm.com/docs/account?topic=account-enablemfa#enabling." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-oracledatabase-bmvm-dbsystem' AND json.rule = 'lifecycleState equals AVAILABLE and nsgIds contains null'```,"OCI Database system is not configured with Network Security Groups This policy identifies Oracle Cloud Infrastructure (OCI) Database Systems that are not configured with Network Security Groups (NSGs). Network Security Groups provide granular security controls at the instance level, allowing for more precise management of inbound and outbound traffic to database systems. It is recommended to configure database systems with NSGs to enhance their security thereby mitigating the risk of unauthorized access and potential data breaches. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To manage Network Security Groups for a DB System, follow below URL:\nhttps://docs.oracle.com/en-us/iaas/base-database/doc/manage-network-security-groups-db-system.html\n\nNOTE: Before you update DB Systems with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirement.." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule='loggingConfiguration.targetBucket equals null or loggingConfiguration.targetPrefix equals null'```,"Bobby Copy of AWS Access logging not enabled on S3 buckets Checks for S3 buckets without access logging turned on. Access logging allows customers to view complete audit trail on sensitive workloads such as S3 buckets. It is recommended that Access logging is turned on for all S3 buckets to meet audit & compliance requirement This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service.\n2. Click on the the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select 'Enable logging' option.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-compute-instances-list' AND json.rule = (networkInterfaces[*].accessConfigs[*].type exists and networkInterfaces[*].accessConfigs[*].type contains ""ONE_TO_ONE_NAT"") and (labels.goog-composer-environment does not exist and tags.items[*] does not contain ""dataflow"") and (metadata.items[*].key does not equal ""nat"" and metadata.items[*].value does not equal ""TRUE"") and (name does not contain ""paloALTO"")```","CNA customer FASDFDSAF This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = state_description equal ignore case active and secret_type is member of (private_cert, public_cert) and rotation.auto_rotate is false```","IBM Cloud Secrets Manager certificate not configured with automatic rotation This policy identifies IBM Cloud Secrets Manager certificates that are not configured with automatic rotation. IBM Cloud Secrets Manager allows you to manage various types of certificates, including those from imported third-party certificate authorities, public certificates, and private certificates, providing a centralised platform for secure certificate storage and management. Securely storing and timely rotating certificates before expiration is crucial for maintaining a high security posture and avoiding any service disruptions. It is recommended to set IBM Cloud Secrets Manager certificates with auto-rotation. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set a rotation policy for a certificate, follow the below steps:\n\n1. Log in to the IBM Cloud Console\n2. Click on the menu icon and navigate to 'Resource list', From the list of resources, select the secret manager instance in which the reported secret resides under the security section.\n3. Select the secret.\n4. 
Under the 'Rotation' tab, enable 'Automatic secret rotation'.\n5. Set 'Rotation Interval' according to the requirements.\n6. Click on 'Update'.\n\nNote: Imported certificates cannot be set with an automatic rotation policy; they have to be re-imported before expiration.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled and (properties.ipAllowlist does not exist or properties.ipAllowlist is empty)```,"Azure Machine learning workspace configured with overly permissive network access This policy identifies Machine learning workspaces configured with overly permissive network access. Overly permissive public network access allows access to resource through the internet using a public IP address. It is recommended to restrict IP ranges to allow access to your workspace and endpoint from specific public internet IP address ranges and is accessible only to restricted entities. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict internet IP ranges on your existing Machine learning workspace, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link?view=azureml-api-2&tabs=azure-portal#enable-public-access-only-from-internet-ip-ranges-preview." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(21,21) or destinationPortRanges[*] contains _Port.inRange(21,21) ))] exists```","Azure Network Security Group allows all traffic on FTP (TCP Port 21) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on FTP (TCP Port 21). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict FTP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." 
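Related to the NSG FTP policy above, the sketch below uses azure-mgmt-network to list inbound rules exposing TCP port 21 to the internet; the subscription ID is a placeholder and port ranges would need additional handling.
```
# Minimal sketch (assumption): listing NSG inbound rules that allow FTP
# (TCP port 21) from the internet with azure-mgmt-network. The subscription
# ID is a placeholder; port ranges (e.g. '20-25') would need extra handling.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
open_sources = {"Internet", "*", "0.0.0.0/0", "::/0"}

for nsg in client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        if (
            rule.access == "Allow"
            and rule.direction == "Inbound"
            and rule.source_address_prefix in open_sources
            and rule.destination_port_range in ("21", "*")
        ):
            print(f"{nsg.name}: rule '{rule.name}' allows FTP (TCP 21) from the internet")
```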
```config from cloud.resource where api.name = 'aws-waf-classic-web-acl-resource' AND json.rule = resources.apiGateway[*] exists or resources.applicationLoadBalancer[*] exists```,"AWS WAF Classic (Regional) in use This policy identifies AWS Classic WAF which is in use. As a best practice, create the AWS WAFv2 and configure accordingly to protect against application-layer attacks. The block criteria in the WAFv2 web access control list (web ACL) has more capabilities than the Classic WAF to filter-out malicious traffic. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To migrate a web ACL from AWS WAF Classic to AWS WAF, follow below URL:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/waf-migrating-procedure.html." "```config from cloud.resource where api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals ""0.0.0.0/0"" and direction equals ""inbound"" and ( protocol equals ""all"" or ( protocol equals ""tcp"" and (( port_max greater than 22 and port_min less than 22 ) or ( port_max equals 22 and port_min equals 22 )))))] exists as X; config from cloud.resource where api.name = 'ibm-vpc' as Y; filter ' $.X.id equals $.Y.default_security_group.id '; show X;```","IBM Cloud Default Security Group allow all traffic on SSH port (22) This policy identifies IBM Cloud Default Security groups that allow all traffic on SSH port 22. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. A VPC comes with a default security group whose initial configuration allows access from all members that are attached to this security group. If you do not specify a security group when you launch a Virtual Server, the Virtual Server is automatically assigned to this default security group. As a result, the Virtual Server will be having risk of uncontrolled connectivity. It is recommended that the Default Security Group allows network ports, protocols, and services listening on a system with validated business needs that are running on each system. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Source type' as 'Any' and 'Value' as 22 (or range containing 22)\n6. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = state contains running and metadataOptions.httpEndpoint equals enabled and metadataOptions.httpTokens does not contain required```,"AWS EC2 instance not configured with Instance Metadata Service v2 (IMDSv2) This policy identifies AWS instances that are not configured with Instance Metadata Service v2 (IMDSv2). With IMDSv2, every request is now protected by session authentication. 
IMDSv2 protects against misconfigured-open website application firewalls, misconfigured-open reverse proxies, unpatched SSRF vulnerabilities, and misconfigured-open layer-3 firewalls and network address translation. It is recommended to use only IMDSv2 for all your EC2 instances. For more details:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Refer 'Configure instance metadata options for existing instances' section from follwoing URL\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html\n\nNOTE: Make a precaution before you enforce the use of IMDSv2, as applications or agents that use IMDSv1 for instance metadata access will break.." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains CreateCustomerGateway and $.X.filterPattern contains DeleteCustomerGateway and $.X.filterPattern contains AttachInternetGateway and $.X.filterPattern contains CreateInternetGateway and $.X.filterPattern contains DeleteInternetGateway and $.X.filterPattern contains DetachInternetGateway) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for Network gateways changes This policy identifies the AWS regions which do not have a log metric filter and alarm for Network gateways changes. Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path. It is recommended that a metric filter and alarm be established for changes to network gateways. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. 
In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and Condition does not exist)] exists```,"AWS SNS topic is exposed to unauthorized access This policy identifies AWS SNS topics that are exposed to unauthorized access. Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. To protect these messages from attackers and unauthorized accesses, permissions should be given to only authorized users. For more details: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#ensure-topics-not-publicly-accessible This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. Add the restrictive 'Condition' statement to the JSON editor to specify who can access the topic. OR Make 'Principal' restrictive so that only limited resources allowed.\n9. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dataproc-clusters-list' AND json.rule = config.encryptionConfig.gcePdKmsKeyName does not exist and config.encryptionConfig.kmsKey does not exist```,"GCP Dataproc Cluster not configured with Customer-Managed Encryption Key (CMEK) This policy identifies Dataproc Clusters that are not configured with CMEK. Dataproc cluster and job data are stored on persistent disks associated with the Compute Engine VMs in the cluster as well as in a Cloud Storage staging bucket. As a security best practice use of CMEK to encrypt this data on persistent disk and bucket is advisable and provides more control to the user. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Currently, it is not possible to update the encryption key for a GCP Dataproc Cluster. 
It is recommended to create a new cluster with appropriate CMEK and migrate all workloads from the old cluster to the new cluster.\n\nTo configure an encryption key for a GCP Dataproc Cluster during creation, please refer to the URL given below:\nhttps://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption#use_cmek_with_cluster_data." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cognito-identity-pool' AND json.rule = allowUnauthenticatedIdentities is true```,"Copy of AWS Cognito identity pool allows unauthenticated guest access This policy identifies AWS Cognito identity pools that allow unauthenticated guest access. AWS Cognito identity pools with unauthenticated guest access allow unauthenticated users to assume a role in your AWS account. These unauthenticated users will be granted the permissions of the assumed role, which may have more privileges than intended. This could lead to unauthorized access or data leakage. It is recommended to disable unauthenticated guest access for the Cognito identity pools. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To deactivate guest access in an identity pool,\n1. Log in to AWS console\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to the Amazon Cognito dashboard\n4. Under '''Identity pools''' section, select the reported identity pool\n5. In '''User access''' tab, under '''Guest access''' section\n6. Click on '''Deactivate''' button to deactivate the guest access configured.\n\nNOTE: Before you deactivate unauthenticated guest access, you must have at least one authenticated access method configured in your identity pool.." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ( $.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="" ) and ( $.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="" ) and $.X.filter contains ""gce_route"" and ( $.X.filter contains ""protoPayload.methodName:"" or $.X.filter contains ""protoPayload.methodName :"" ) and ( $.X.filter does not contain ""protoPayload.methodName!:"" and $.X.filter does not contain ""protoPayload.methodName !:"" ) and $.X.filter contains ""compute.routes.delete"" and $.X.filter contains ""compute.routes.insert""'; show X; count(X) less than 1```","bobby remediation 1 This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: ddddd." 
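For the Cognito guest-access policy above, a minimal boto3 sketch follows; the identity pool ID is a placeholder, and, as the policy notes, at least one authenticated access method must remain configured.
```
# Minimal sketch (assumption): deactivating unauthenticated guest access on a
# Cognito identity pool with boto3. The pool ID is a placeholder.
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")
pool_id = "us-east-1:00000000-0000-0000-0000-000000000000"

# UpdateIdentityPool replaces the pool configuration, so read it first and
# flip only the guest-access flag.
pool = cognito.describe_identity_pool(IdentityPoolId=pool_id)
pool.pop("ResponseMetadata", None)
pool["AllowUnauthenticatedIdentities"] = False

cognito.update_identity_pool(**pool)
```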
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and engine equals postgres and engineVersion is member of ('13.2','13.1','12.6','12.5','12.4','12.3','12.2','11.11','11.10','11.9','11.8','11.7','11.6','11.5','11.4','11.3','11.2','11.1','10.16','10.15','10.14','10.13','10.12','10.11','10.10','10.9','10.7','10.6','10.5','10.4','10.3','10.1','9.6.21','9.6.20','9.6.19','9.6.18','9.6.17','9.6.16','9.6.15','9.6.14','9.6.12','9.6.11','9.6.10','9.6.9','9.6.8','9.6.6','9.6.5','9.6.3','9.6.2','9.6.1','9.5','9.4','9.3')```","AWS RDS PostgreSQL exposed to local file read vulnerability This policy identifies AWS RDS PostgreSQL which are exposed to local file read vulnerability. AWS RDS PostgreSQL installed with vulnerable 'log_fdw' extension is exposed to local file read vulnerability, due to which attacker could gain access to local system files of the database instance within their account, including a file which contained credentials specific to PostgreSQL. It is highly recommended to upgrade AWS RDS PostgreSQL to the latest version. For more information, https://aws.amazon.com/security/security-bulletins/AWS-2022-004/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Amazon has deprecated affected versions of RDS for PostgreSQL and customers can no longer create new instances with the affected versions.\n\nTo upgrade the latest version of Amazon RDS for PostgreSQL, please follow below URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html\n." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING and $.X.status.state does not contain TERMINATED and $.X.status.state does not contain TERMINATED_WITH_ERRORS) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is true) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.LocalDiskEncryptionConfiguration does not exist)' ; show X;```,"AWS EMR cluster is not enabled with local disk encryption This policy identifies AWS EMR clusters that are not enabled with local disk encryption. Applications using the local file system on each cluster instance for intermediate data throughout workloads, where data could be spilled to disk when it overflows memory. With Local disk encryption at place, data at rest can be protected. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown.\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. Under 'Local disk encryption', check the box 'Enable at-rest encryption for local disks'.\n8. Select the appropriate Key provider type from the 'Key provider type' dropdown list.\n9. Click on 'Create' button.\n10. 
On the left menu of EMR dashboard Click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n15. Once the new cluster is set up verify its working and terminate the source cluster.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted.\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-users' AND json.rule = groupList is empty```,"AWS IAM user is not a member of any IAM group This policy identifies an AWS IAM user as not being a member of any IAM group. It is generally a best practice to assign IAM users to at least one IAM group. If the IAM users are not in a group, it complicates permission management and auditing, increasing the risk of privilege mismanagement and security oversights. It also leads to higher operational overhead and potential non-compliance with security best practices. It is recommended to ensure all IAM users are part of at least one IAM group according to your business requirement to simplify permission management, enforce consistent security policies, and reduce the risk of privilege mismanagement. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To add a user to an IAM user group (console)\n\n1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/\n2. In the navigation pane, choose 'Users' under the 'Access management' section and then choose the name of the user that is reported\n3. Choose the 'Groups' tab and then choose 'Add user to groups'. \n4. Select the check box next to the groups under 'Group Name' according to your requirements.\n5. Choose 'Add user to group(s)'.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' and api.name = 'alibaba-cloud-oss-bucket-info' AND json.rule = bucket.logging.targetBucket does not exist```,"Alibaba Cloud OSS bucket logging not enabled This policy identifies Alibaba Cloud Object Storage Service (OSS) buckets that do not have logging enabled. Enabling logging for OSS buckets helps capture access and operation events, which are critical for security monitoring, troubleshooting, and auditing. Without logging, you lack visibility into who accesses and interacts with your bucket, potentially missing unauthorized access or suspicious behaviour. As a security best practice, it is recommended to enable logging for OSS buckets. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Navigate to Object Storage Service\n3. In the bucket-list pane, click on a reported OSS bucket\n4. Under Log, click configure\n5. Configure bucket logging\n6. Click the Enabled checkbox\n7. Select Target Bucket from list\n8. Enter a Target Prefix\n9. 
Click Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = versioningConfiguration.status equals Enabled and (versioningConfiguration.mfaDeleteEnabled does not exist or versioningConfiguration.mfaDeleteEnabled is false) AND (bucketLifecycleConfiguration does not exist or bucketLifecycleConfiguration.rules[*].status equals Disabled)```,"AWS S3 bucket is not configured with MFA Delete This policy identifies the S3 buckets which do not have Multi-Factor Authentication (MFA) enabled to delete an S3 object version. Enabling MFA Delete on a versioned bucket adds another layer of protection. In order to permanently delete an object version, or to suspend or reactivate versioning on the bucket, a valid code from the account's MFA device is required. Note: MFA Delete only works for CLI or API interaction, not in the AWS Management Console. Also, you cannot make version DELETE actions with MFA using IAM user credentials. You must use your root AWS account. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: Using the console you can enable versioning on the bucket, but you cannot enable MFA Delete.\nYou can do it only with the AWS CLI:\naws s3api put-bucket-versioning --bucket --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "" ""\n\nNOTE: The bucket owner, the AWS account that created the bucket (root account), and all authorized IAM users can enable versioning, but only the bucket owner (root account) can enable MFA Delete. Successful execution will enable the S3 bucket versioning and MFA delete on the bucket.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case kubernetes and state equal ignore case ""normal"" and serviceEndpoints.publicServiceEndpointEnabled is true```","IBM Cloud Kubernetes clusters are accessible by using public endpoint This policy identifies IBM Cloud Kubernetes clusters which have the public service endpoint enabled. If a cluster has the public service endpoint enabled, the cluster will be accessible on an Internet-routable IP address. It is recommended to disable the public service endpoint and use the private service endpoint instead for better security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Kubernetes' and then 'Clusters'\n3. Select the 'Clusters' reported in the alert\n4. Under 'Overview' tab and then 'Networking' section, click the 'Disable' radio button for the public service endpoint.\n5. In the next screen, click 'Disable' to confirm.\n6. In the next screen, click Refresh to initiate an API server refresh.\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = 'authTokens[?any(lifecycleState equals ACTIVE and (_DateTime.ageInDays(timeCreated) > 90))] exists'```,"OCI users Auth Tokens have aged more than 90 days without being rotated This policy identifies all of your IAM User Auth Tokens which have not been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect OCI Auth Tokens access directly or via SDKs or OCI CLI. 
This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Select Identity & Security from the Services menu.\n3. Select Users from the Identity menu.\n4. Click on an individual user under the Name heading.\n5. Click on Auth Tokens in the lower left-hand corner of the page.\n6. Delete any auth token with a date of 90 days or older under the Created column of the Auth Tokens.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-user' AND json.rule = userType equals Guest as X; config from cloud.resource where api.name = 'azure-role-assignment' AND json.rule = properties.principalType contains User and properties.roleDefinition.properties.roleName is member of (""Owner"") as Y; filter '$.X.id equals $.Y.properties.principalId'; show X;```","Custom Azure Guest User with owner permissions This policy identifies Azure Guest users with owner permissions to the subscription. Removing external users with owner permissions to your subscriptions prevents unmonitored and unwanted access to your subscription. It is recommended to remove guest users' owner permissions from the subscription. Refer to below link for more details: https://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To restirct Azure Guest user access follow below URL:\nhttps://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = 'access_key_1_active is true and access_key_2_active is true'```,"AWS IAM user has two active Access Keys This policy identifies IAM users who have two active Access Keys. Each IAM user can have up to two Access Keys, having two Keys instead of one can lead to increased chances of accidental exposure. So it needs to be ensured that unused Access Keys are deleted. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console and navigate to the 'IAM' service.\n2. Click on Users in the navigation pane.\n3. For the identified IAM user which has two active Access Keys, based on policies of your company, take appropriate action.\n4. Create another IAM user with the specific objective performed by the 2nd Access Key.\n5. Delete one of the unused Access Keys.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = iamConfiguration.uniformBucketLevelAccess.enabled contains false```,"Copy of a Copy Maybe GCP cloud storage bucket with uniform bucket-level access disabled This policy identifies GCP storage buckets for which the uniform bucket-level access is disabled. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either. It is recommended that uniform bucket-level access is enabled on Cloud Storage buckets. This is applicable to gcp cloud and is considered a medium severity issue. 
Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. log in to GCP Console\n2. Navigate to 'Storage'\n3. Click on 'Browser' to get the list of storage buckets\n4. Search for the alerted bucket and click on the bucket name\n5. From the top menu go to 'PERMISSION' tab\n6. Under the section 'Access control' click on 'SWITCH TO UNIFORM'\n7. On the pop-up window select 'uniform'\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'nodePools[*].config.serviceAccount contains default'```,"GCP Kubernetes Engine Cluster Nodes have default Service account for Project access This policy identifies Kubernetes Engine Cluster Nodes which have default Service account for Project access. By default, Kubernetes Engine nodes are given the Compute Engine default service account. This account has broad access and more permissions than are required to run your Kubernetes Engine cluster. You should create and use a least privileged service account to run your Kubernetes Engine cluster instead of using the Compute Engine default service account. If you are not creating a separate service account for your nodes, you should limit the scopes of the node service account to reduce the possibility of a privilege escalation in an attack. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes Clusters Service account can be chosen only at the time of creation of clusters. So to fix this alert, create a new cluster with the least privileged Service account and then migrate all required cluster node data from the reported cluster to this new cluster.\nTo create the cluster with new Service account which has privileges as you needed, perform following steps:\n1. Login to GCP Portal\n2. Click on 'CREATE CLUSTER'\n3. Choose required name/value for cluster fields\n4. Click on 'More'\n5. Choose 'Service account' which has the least privilege under Project access section, Instead of default 'Compute Engine default service account'\nNOTE: The Compute Engine default service account by default, has devstorage.read_only, logging.write, monitoring, service.management.readonly, servicecontrol, and trace.append privileges/scopes.\nYou can configure a service account with more restrictive privileges and assign the same.\n6. Click on 'Create'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.registryId))] exists```,"AWS ECR private repository with cross-account access This policy identifies AWS ECR private repository that are configured with cross-account access. An ECR repository is a storage location within Amazon Elastic Container Registry (ECR) where Docker container images are stored and managed. Granting cross-account access to an ECR repository risks unauthorized access and data exposure, requiring strict policy controls and monitoring. It is recommended to implement strict access controls and allow only trusted entities to access to an ECR repository to mitigate security risks. This is applicable to aws cloud and is considered a low severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict the access to AWS ECR private repository policy, perform the following actions:\n \n1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'ECR' dashboard from the 'Services' dropdown\n4. In the navigation pane, choose 'Repositories'\n5. On the Repositories page, Select the repository for which the alert is being generated\n6. From the repository image list view, in the navigation pane, choose 'Permissions' from 'Actions' dropdown, and Edit.\n7. On the Edit permissions page, Click on 'Edit policy JSON' to modify the JSON so that Principal is restrictive\n7a. Remove the statements that grant access to actions to other AWS accounts\n or\n 7b. Remove the permitted actions from the statements\n8. After modifications, click on 'Save'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals KeyVaults and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud is set to Off for Key Vault This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has defender setting for Key Vault is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Key Vault. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Key Vault' Select 'On' under Plan.\n8. Select 'Save'." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="") and ($.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="") and $.X.filter contains ""gce_firewall_rule"" and ($.X.filter contains ""jsonPayload.event_subtype="" or $.X.filter contains ""jsonPayload.event_subtype ="") and ($.X.filter does not contain ""jsonPayload.event_subtype!="" and $.X.filter does not contain ""jsonPayload.event_subtype !="") and $.X.filter contains ""compute.firewalls.patch"" and $.X.filter contains ""compute.firewalls.insert""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for VPC Network Firewall rule changes This policy identifies the GCP accounts which do not have a log metric filter and alert for VPC Network Firewall rule changes. Monitoring for Create or Update firewall rule events gives insight network access changes and may reduce the time it takes to detect suspicious activity. 
It is recommended to create a metric filter and alarm to detect VPC Network Firewall rule changes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""gce_firewall_rule"" AND jsonPayload.event_subtype=""compute.firewalls.patch"" OR jsonPayload.event_subtype=""compute.firewalls.insert""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." ```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = ipPermissions[*] is empty or ipPermissionsEgress[*] is empty as Y; filter '$.X.securityGroups[*] contains $.Y.groupId'; show X;```,"AWS Elastic Load Balancer v2 (ELBv2) load balancer with invalid security groups This policy identifies Elastic Load Balancer v2 (ELBv2) load balancers that do not have security groups with a valid inbound or outbound rule. A security group with no inbound/outbound rule will deny all incoming/outgoing requests. ELBv2 security groups should have at least one inbound and outbound rule; an ELBv2 with no inbound/outbound permissions will deny all traffic incoming/outgoing to/from any resources configured behind that ELBv2; in other words, the ELBv2 is useless without inbound and outbound permissions. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on each security group, it will open Security Group properties in a new tab in your browser.\n6. To check the Inbound rules, click on 'Inbound Rules'\n7. If there are no rules, click on 'Edit rules', add an inbound rule according to your ELBv2 functional requirement.\n8. To check the Outbound rules, click on 'Outbound Rules'\n9. If there are no rules, click on 'Edit rules', add an outbound rule according to your ELBv2 functional requirement.\n10. Click on 'Save'." ```config from cloud.resource where cloud.account = 'AWS Account' AND api.name = 'aws-ec2-describe-instances' AND json.rule = instanceId exists```,"nsk_config_ec2 This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and (backupRetentionPeriod does not exist or backupRetentionPeriod less than 7)```,"AWS RDS retention policy less than 7 days RDS Retention Policies for Backups are an important part of your DR/BCP strategy. Recovering data from catastrophic failures, malicious attacks, or corruption often requires a several day window of potentially good backup material to leverage. As such, the best practice is to ensure your RDS clusters are retaining at least 7 days of backups, if not more (up to a maximum of 35). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Configure your RDS backup retention policy to at least 7 days.\n\n1. Go to the AWS console RDS dashboard.\n2. In the navigation pane, choose Instances.\n3. Select the database instance you wish to configure.\n4. Click on 'Modify'.\n5. Scroll down to Additional Configuration and set the retention period to at least 7 days under 'Backup retention period'.\n6. Click Continue.\n7. Under 'Scheduling of modifications' choose 'When to apply modifications'\n8. On the confirmation page, Review the changes and Click on 'Modify DB Instance' to save your changes.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-target-https-proxies' AND json.rule = 'sslPolicy does not exist or sslPolicy is empty'```,"GCP Load balancer HTTPS target proxy configured with default SSL policy instead of custom SSL policy This policy identifies Load balancer HTTPS target proxies which are configured with default SSL Policy instead of custom SSL policy. It is a best practice to use custom SSL policy to access load balancers. It gives you closer control over SSL/TLS versions and ciphers. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'advanced menu' hyperlink to view target proxies\n5. Click on 'Target proxies' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'URL map'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. For 'SSL policy', choose any custom SSL policy other than 'GCP default'\n11. Click on 'Done'\n12. Click on 'Update'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-automation-account' AND json.rule = properties.publicNetworkAccess does not exist or properties.publicNetworkAccess is true```,"Azure Automation account configured with overly permissive network access This policy identifies Automation accounts configured with overly permissive network access. It is recommended to configure the Automation account with private endpoints so that the Automation account is accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Automation Account dashboard \n3. 
Click on the reported Automation account\n4. Under the 'Account Settings' menu, click on 'Networking'\n5. In 'Public access' tab, select 'Disable' for 'Public network access' \n6. In 'Private access' tab, Create a private endpoint with required parameters \n7. Click on 'Apply'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and httpsTrigger exists and httpsTrigger.securityLevel does not equal SECURE_ALWAYS```,"GCP Cloud Function HTTP trigger is not secured This policy identifies GCP Cloud Functions for which the HTTP trigger is not secured. When you configure HTTP functions to be triggered only with HTTPS, user requests will be redirected to use the HTTPS protocol, which is more secure. It is recommended to set the 'Require HTTPS' for configuring HTTP triggers while deploying your function. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service (Left Panel)\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Under section 'Trigger', click on 'EDIT'\n6. Select the checkbox against the field 'Require HTTPS'\n7. Click on 'SAVE'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'." ```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals Udp and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists```,"Azure Network Security Group having Inbound rule overly permissive to all traffic on UDP protocol This policy identifies Azure Network Security Groups (NSGs) which are overly permissive to all traffic on UDP protocol. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic from known sources, allowing only authorized protocols and ports. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." 
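The overly permissive UDP rule condition in the NSG policy above can also be checked programmatically. The sketch below is illustrative only and is not part of the policy set: it uses the Azure SDK for Python (azure-identity, azure-mgmt-network), a placeholder subscription ID, and a simplified port test (destination port range exactly '*') that approximates the RQL rule.

```python
# Hedged helper that approximates the NSG/UDP policy's RQL logic with the
# Azure SDK for Python (azure-identity and azure-mgmt-network must be installed).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

ANY_SOURCE = {"Internet", "*", "0.0.0.0/0", "::/0"}
SUBSCRIPTION_ID = "<subscription-id>"  # placeholder, supply your own


def overly_permissive_udp_rules(subscription_id: str):
    """Yield (nsg_name, rule_name) for inbound Allow rules open to any source on UDP or all protocols."""
    client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
    for nsg in client.network_security_groups.list_all():
        for rule in nsg.security_rules or []:
            if (
                rule.access == "Allow"
                and rule.direction == "Inbound"
                and rule.source_address_prefix in ANY_SOURCE
                and rule.protocol in ("Udp", "*")
                and rule.destination_port_range == "*"  # simplified port check
            ):
                yield nsg.name, rule.name


if __name__ == "__main__":
    for nsg_name, rule_name in overly_permissive_udp_rules(SUBSCRIPTION_ID):
        print(f"Review NSG '{nsg_name}' rule '{rule_name}': inbound UDP open to the Internet")
```

Running it requires a credential with at least Network Reader access; any rules it prints should then be tightened, denied, or removed as described in the mitigation steps.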
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = (status equals RUNNING and name does not start with ""gke-"") and metadata.items[?any(key contains ""serial-port-logging-enable"" and value equals ""true"")] exists```","GCP VM instance serial port output logging is enabled This policy identifies GCP VM instances that have serial port output logging enabled. The serial console feature in the VM instance does not support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. When the serial port output logging feature is enabled, the serial port output is retained even after an instance is stopped or deleted. It is recommended to disable serial port access and serial port output logging for all VM instances to avoid leakage of potentially sensitive data. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable serial port output logging on existing GCP VM instance, follow the below URL:\nhttps://cloud.google.com/compute/docs/troubleshooting/viewing-serial-port-output#enable-stackdriver." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is true) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration exists) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode contains SSE) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode does not contain KMS)' ; show X;```,"AWS EMR cluster is not configured with SSE KMS for data at rest encryption (Amazon S3 with EMRFS) This policy identifies EMR clusters which are not configured with Server Side Encryption(SSE KMS) for data at rest encryption of Amazon S3 with EMRFS. As a best practice, use SSE-KMS for server side encryption to encrypt the data in your EMR cluster and ensure full control over your data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration\n7. For encryption At Rest click the checkbox for 'Enable at-rest encryption for EMRFS data in Amazon S3'\n8. From the dropdown 'Default encryption mode' select 'SSE-KMS'. Follow below link for configuration steps.\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n9. Click on 'Create' button.\n10. On the left menu of EMR dashboard Click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. 
In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'\n15. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted\n17. Click on the 'Terminate' button from the top menu\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.." ```config from cloud.resource where api.name = 'aws-appsync-graphql-api' AND json.rule = wafWebAclArn is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = (webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.wafWebAclArn'; show X;```,"AWS AppSync attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AppSync attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, AppSync attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer to the below URL: https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the AppSync console\n3. Click on the reported AppSync\n4. Choose 'Settings' in the navigation pane\n5. In the Web application firewall section, note down the associated AWS WAF web ACL\n6. Go to the noted WAF web ACL in AWS WAF & Shield Service\n7. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n8. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n9. Click on 'Add rules'." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-rds-instance' AND json.rule = 'Items[*].securityIPList contains 0.0.0.0/0 or Items[*].securityIPList contains 127.0.0.1'```,"Alibaba Cloud ApsaraDB RDS allowlist group is not restrictive This policy identifies ApsaraDB for Relational Database Service (RDS) allowlist groups which are not restrictive. The value 0.0.0.0/0 indicates that all devices can access the RDS instance, and the value 127.0.0.1 (the default IP address) means that no devices can access the RDS instance. As a best practice, it is recommended that you periodically check and adjust your allowlists to maintain RDS security. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to ApsaraDB for RDS\n3. In the left-side navigation pane, click on 'Instances' \n4. Choose the reported instance, click on 'Manage'\n5. In the left-side navigation pane, click on 'Data Security'\n6. In the 'Data Security' section, click 'Edit' on the allow list setting which has IP address 127.0.0.1 or 0.0.0.0/0 and update the restrictive IP address in the box as per your requirement. \n7. Click on 'Ok'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals SqlServerVirtualMachines and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud is set to Off for SQL servers on machines This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has defender setting for SQL servers on machines is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for SQL servers on machines. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'SQL servers on machines' Select 'On' under Plan.\n8. Select 'Save'." "```config from cloud.resource where api.name = 'aws-ec2-describe-instances' as X; config from cloud.resource where api.name = 'aws-ec2-describe-volumes' as Y; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Z; filter ""$.X.blockDeviceMappings[*].ebs.volumeId == $.Y.volumeId and $.Y.encrypted contains true and $.Y.kmsKeyId equals $.Z.key.keyArn and $.Z.keyMetadata.keyManager contains AWS and $.X.tags[?(@.key=='Name')].value does not contain CSR""; show Y; ```","Morgan_Stanley_custom_policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.endpointProtectionMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with ""ASC Default""))'```","Azure Microsoft Defender for Cloud endpoint protection monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have endpoint protection monitoring set to disabled. Enabling endpoint Protection will make sure that any issues or shortcomings in endpoint protection for all Microsoft Windows virtual machines are identified so that they can, in turn, be removed. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Monitor missing Endpoint Protection in Azure Security Center' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudtrail-describe-trails' AND json.rule = 'cloudWatchLogsRoleArn equals null or cloudWatchLogsRoleArn does not exist'```,"AWS CloudTrail trail logs is not integrated with CloudWatch Log This policy identifies AWS CloudTrail which has trail logs that are not integrated with CloudWatch Log. Enabling the CloudTrail trail logs integrated with CloudWatch Logs will enable the real-time as well as historic activity logging. This will further improve monitoring and alarm capability. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Admin Console and access the CloudTrail service.\n2. Click on the Trails in the left hand menu.\n3. Click on the identified CloudTrail and navigate to the 'CloudWatch Logs' section.\n4. Click on 'Configure' tab and provide required\n5. Provide a log group name in field 'New or existing log group'\n6. Click on 'Continue'\n7. In the next page from 'IAM role' dropdown select an IAM role with required access or select the 'Create a new IAM role'\n8. Click on 'Allow'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = secrets[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is false```,"Azure Key Vault secret has no expiration date (Non-RBAC Key vault) This policy identifies Azure Key Vault secrets that do not have an expiry date for the Non-RBAC Key vaults. As a best practice, set an expiration date for each secret and rotate the secret regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].name' | xargs -I {} az keyvault set-policy --name {} --certificate-permissions list listissuers --key-permissions list --secret-permissions list --spn This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Key vaults'\n3. Select the Key vault instance where the secrets are stored\n4. Select 'Secrets', and select the secret that you need to modify\n5. Select the current version\n6. Set the expiration date\n7. 'Save' your changes." 
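As a companion to the Key Vault secret expiration policy above, the following hedged sketch lists enabled secrets that have no expiry and assigns one. The vault URL, the 90-day window, and the choice to set the expiry in place (rather than just report it) are assumptions for illustration; it uses azure-identity and azure-keyvault-secrets and needs list/set secret permissions on the vault.

```python
# Illustrative sketch for the Key Vault secret-expiration policy above.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<vault-name>.vault.azure.net"  # placeholder vault URL


def report_and_set_expiry(vault_url: str, days: int = 90) -> None:
    """Print enabled secrets with no expiration date and set one 'days' from now."""
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
    new_expiry = datetime.now(timezone.utc) + timedelta(days=days)
    for props in client.list_properties_of_secrets():
        if props.enabled and props.expires_on is None:
            print(f"Secret '{props.name}' has no expiration date; setting {new_expiry:%Y-%m-%d}")
            client.update_secret_properties(props.name, expires_on=new_expiry)


if __name__ == "__main__":
    report_and_set_expiry(VAULT_URL)
```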
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireUppercaseCharacters does not exist or requireUppercaseCharacters is false'```,"Alibaba Cloud RAM password policy does not have an uppercase character This policy identifies Alibaba Cloud accounts that do not have an uppercase character in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Upper-Case Letter'\n6. Click on 'OK'\n7. Click on 'Close'." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="") and ($.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="") and $.X.filter contains ""gce_network"" and ($.X.filter contains ""jsonPayload.event_subtype="" or $.X.filter contains ""jsonPayload.event_subtype ="") and ($.X.filter does not contain ""jsonPayload.event_subtype!="" and $.X.filter does not contain ""jsonPayload.event_subtype !="") and $.X.filter contains ""compute.networks.insert"" and $.X.filter contains ""compute.networks.patch"" and $.X.filter contains ""compute.networks.delete"" and $.X.filter contains ""compute.networks.removePeering"" and $.X.filter contains ""compute.networks.addPeering""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for VPC network changes This policy identifies the GCP account which does not have a log metric filter and alert for VPC network changes. Monitoring network insertion, patching, deletion, removePeering and addPeering activities will help in identifying VPC traffic flow is not getting impacted. It is recommended to create a metric filter and alarm to detect activities related to the insertion, patching, deletion, removePeering and addPeering of VPC network. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""gce_network"" AND jsonPayload.event_subtype=""compute.networks.insert"" OR jsonPayload.event_subtype=""compute.networks.patch"" OR jsonPayload.event_subtype=""compute.networks.delete"" OR jsonPayload.event_subtype=""compute.networks.removePeering"" OR jsonPayload.event_subtype=""compute.networks.addPeering""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. 
Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = KmsMasterKeyId exists and KmsMasterKeyId equal ignore case ""alias/aws/sns""```","AWS SNS Topic not encrypted by Customer Managed Key (CMK) This policy identifies AWS SNS Topics that are not encrypted by Customer Managed Key (CMK). AWS SNS Topics are used to send notifications to subscribers and might contain sensitive information. SNS Topics are encrypted by default with an AWS managed key, but users can specify a CMK to get enhanced security, control over the encryption key, and also comply with any regulatory requirements. As a security best practice, use of a CMK to encrypt your SNS Topics is advisable, as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the Amazon SNS Dashboard\n4. Click on 'Topics'\n5. Click on the reported Topic\n6. Click on 'Edit' button from the console top menu to access the topic configuration settings.\n7. Under 'Encryption – optional', ensure that the 'Enable encryption' option is selected.\n8. Select an 'AWS KMS key' other than the default '(Default) alias/aws/sns' key, based on your business requirement.\n9. Choose 'Save changes' to apply the configuration changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.ipRangeFilter is empty```,"Azure Cosmos DB IP range filter not configured This policy identifies Azure Cosmos DB accounts that do not have an IP range filter configured. Access to Azure Cosmos DB should be restricted rather than open to all networks. It is recommended to add a defined set of IPs / IP ranges which can access Azure Cosmos DB from the Internet. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to Azure Cosmos DB\n3. Select the reported Cosmos DB resource \n4. Click on 'Firewall and virtual networks' under 'Settings'\n5. Click on 'Selected networks' radio button\n6. Under 'Firewall' add IP ranges\n7. Click on 'Save'."
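For the AWS SNS CMK policy above, the console remediation can also be scripted. This is a minimal boto3 sketch, assuming a placeholder topic ARN and key alias; it simply switches the topic's KmsMasterKeyId from the default alias/aws/sns to a customer managed key.

```python
# Minimal boto3 sketch for the SNS CMK policy above: re-encrypt a topic with a
# customer managed key. The topic ARN and key alias are placeholders.
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:example-topic"  # placeholder
CMK_ALIAS = "alias/example-sns-cmk"  # placeholder customer managed key


def use_cmk_for_topic(topic_arn: str, kms_key: str) -> None:
    """Point the topic's KmsMasterKeyId at a customer managed key instead of alias/aws/sns."""
    sns = boto3.client("sns")
    sns.set_topic_attributes(
        TopicArn=topic_arn,
        AttributeName="KmsMasterKeyId",
        AttributeValue=kms_key,
    )


if __name__ == "__main__":
    use_cmk_for_topic(TOPIC_ARN, CMK_ALIAS)
```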
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule = 'disabled is false and direction equals INGRESS and allowed[*] exists and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and targetTags[*] does not exist and targetServiceAccounts[*] does not exist'```,"GCP Firewall rule allows inbound traffic from anywhere with no specific target set This policy identifies GCP Firewall rules which allow inbound traffic from anywhere with no target filtering. The default target is all instances in the network. The use of target tags or target service accounts allows the rule to apply to select instances. Not using any firewall rule filtering may allow a bad actor to brute force their way into the system and potentially get access to the entire network. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the instructions below to restrict the default target parameter (all instances in the network):\n\n1. Login to GCP Console.\n2. Go to VPC Network.\n3. Go to the Firewall rules.\n4. Click on each Firewall rule reported.\n5. Click Edit.\n6. Change the Targets field from 'All instances in the network' to 'Specified target tags' or 'Specified service account'.\n7. Type the target tag/target service account into the Target tags/Target service account field respectively.\n8. Review Source IP ranges and change to specific IP ranges if traffic is not required to be allowed from anywhere.\n9. Click Save.\n\nReference:\nhttps://cloud.google.com/vpc/docs/add-remove-network-tags." ```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and ((access_key_1_last_used_date != N/A and _DateTime.ageInDays(access_key_1_last_used_date) > 90) or (access_key_1_last_used_date == N/A and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 90))) or (access_key_2_active is true and ((access_key_2_last_used_date != N/A and _DateTime.ageInDays(access_key_2_last_used_date) > 90) or (access_key_2_last_used_date == N/A and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 90)))'```,"Informational - AWS access keys not used for more than 90 days This policy identifies IAM users for which access keys are not used for more than 90 days. Access keys allow users programmatic access to resources. However, if any access key has not been used in the past 90 days, then that access key needs to be deleted (even though the access key is inactive). This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To delete the reported AWS User access key follow below mentioned URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/delete-access-key/." 
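The stale access key check in the policy above can be reproduced from the IAM credential report. The sketch below is an approximation rather than the policy's own logic: the polling loop, the 90-day threshold, and the date parsing are simplifying assumptions.

```python
# Hedged sketch for the stale-access-key policy above: read the IAM credential
# report and flag active keys whose last use (or last rotation) is older than 90 days.
import csv
import io
import time
from datetime import datetime, timezone

import boto3


def stale_access_keys(max_age_days: int = 90):
    iam = boto3.client("iam")
    # The report is generated asynchronously; poll until it is ready.
    while iam.generate_credential_report()["State"] != "COMPLETE":
        time.sleep(2)
    content = iam.get_credential_report()["Content"].decode("utf-8")
    now = datetime.now(timezone.utc)
    for row in csv.DictReader(io.StringIO(content)):
        for key in ("1", "2"):
            if row[f"access_key_{key}_active"] != "true":
                continue
            last_used = row[f"access_key_{key}_last_used_date"]
            reference = last_used if last_used != "N/A" else row[f"access_key_{key}_last_rotated"]
            if reference in ("N/A", "not_supported"):
                continue
            age = (now - datetime.fromisoformat(reference.replace("Z", "+00:00"))).days
            if age > max_age_days:
                yield row["user"], key, age


if __name__ == "__main__":
    for user, key, age in stale_access_keys():
        print(f"User '{user}' access key {key} unused for {age} days; consider deleting it")
```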
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-secretsmanager-secret' AND json.rule = replication.userManaged.replicas[*].customerManagedEncryption.kmsKeyName does not exist and replication.automatic.customerManagedEncryption.kmsKeyName does not exist```,"GCP Secrets Manager secret not encrypted with CMEK This policy identifies GCP Secret Manager secrets that are not encrypted with a Customer-Managed Encryption Key (CMEK). GCP Secret Manager securely stores and manages access to API keys, passwords, certificates, and other sensitive information. Using CMEK for secrets gives you complete control over the encryption keys protecting your sensitive data, ensuring that only authorized users with access to these keys can decrypt and access the information. Without CMEK, data is encrypted with Google-managed keys, which may not provide the level of control required for handling sensitive data in regulated industries. It is recommended to encrypt Secret Manager secrets with a Customer-Managed Encryption Key (CMEK) for enhanced data control and compliance. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Google Cloud Management Console. Navigate to the 'Secrets Manager' page\n2. Under 'Secrets', click on the reported secret\n3. Select 'EDIT SECRET' on the top navigation bar\n4. Under the 'Edit secret' page, under 'Encryption', select the 'Customer-managed encryption key (CMEK)' radio button and Select a CMEK key for each location\n5. Click on 'UPDATE SECRET'.." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and ($.X.filterPattern contains ""eventSource="" or $.X.filterPattern contains ""eventSource ="") and ($.X.filterPattern does not contain ""eventSource!="" and $.X.filterPattern does not contain ""eventSource !="") and $.X.filterPattern contains s3.amazonaws.com and $.X.filterPattern contains PutBucketAcl and $.X.filterPattern contains PutBucketPolicy and $.X.filterPattern contains PutBucketCors and $.X.filterPattern contains PutBucketLifecycle and $.X.filterPattern contains PutBucketReplication and $.X.filterPattern contains DeleteBucketPolicy and $.X.filterPattern contains DeleteBucketCors and $.X.filterPattern contains DeleteBucketLifecycle and $.X.filterPattern contains DeleteBucketReplication) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for S3 bucket policy changes This policy identifies the AWS regions which do not have a log metric filter and alarm for S3 bucket policy changes. Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies. 
NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled that logs all management events in your account and is not set with a specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'automaticFailover equals disabled or automaticFailover does not exist'```,"AWS ElastiCache Redis cluster with Multi-AZ Automatic Failover feature set to disabled This policy identifies ElastiCache Redis clusters which have Multi-AZ Automatic Failover feature set to disabled. It is recommended to enable the Multi-AZ Automatic Failover feature for your Redis Cache cluster, which will improve primary node reachability by providing a read replica in case of network connectivity loss or loss of availability in the primary's availability zone for read/write operations. Note: Redis cluster Multi-AZ with automatic failover does not support T1 and T2 cache node types and is only available if the cluster has at least one read replica. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Modify' button\n7. In the 'Modify Cluster' dialog box,\na. Set 'Multi-AZ' to 'Yes'\nb. Select 'Apply Immediately' checkbox, to apply the configuration changes immediately. If Apply Immediately is not selected, the changes will be processed during the next maintenance window.\nc. Click on 'Modify'."
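For the ElastiCache Multi-AZ policy above, the console steps map to a single API call. A minimal boto3 sketch follows, assuming a placeholder replication group ID and that the group already has at least one read replica on a supported node type.

```python
# Sketch for the ElastiCache Multi-AZ policy above: enable automatic failover
# (and Multi-AZ) on an existing replication group. The group ID is a placeholder.
import boto3

REPLICATION_GROUP_ID = "example-redis-group"  # placeholder


def enable_automatic_failover(replication_group_id: str, apply_immediately: bool = True) -> None:
    """Turn on Multi-AZ automatic failover for the given Redis replication group."""
    elasticache = boto3.client("elasticache")
    elasticache.modify_replication_group(
        ReplicationGroupId=replication_group_id,
        AutomaticFailoverEnabled=True,
        MultiAZEnabled=True,
        ApplyImmediately=apply_immediately,
    )


if __name__ == "__main__":
    enable_automatic_failover(REPLICATION_GROUP_ID)
```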
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = properties.status equals ""Active"" and (properties.disableLocalAuth does not exist or properties.disableLocalAuth is false)```","Azure Service bus namespace not configured with Azure Active Directory (Azure AD) authentication This policy identifies Service bus namespaces that are not configured with Azure Active Directory (Azure AD) authentication and are enabled with local authentication. Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. It is recommended to configure the Service bus namespaces with Azure AD authentication so that all actions are strongly authenticated. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Azure Active Directory (Azure AD) authentication and disable local authentication on existing Service bus, follow below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/disable-local-authentication." "```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ""((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))))"" as X; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Y; filter'$.X.bucketName equals $.Y.s3BucketName'; show X;```","Copy of AWS CloudTrail bucket is publicly accessible This policy identifies publicly accessible S3 buckets that store CloudTrail data. These buckets contains sensitive audit data and only authorized users and applications should have access. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. If Access Control List' is set to 'Public' follow below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save\n6. If 'Bucket Policy' is set to public follow below steps\na. Under 'Bucket Policy', modify the policy to remove public access\nb. Click on Save\nc. 
If 'Bucket Policy' is not required, delete the existing 'Bucket Policy'.\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains DeleteGroupPolicy and $.X.filterPattern contains DeleteRolePolicy and $.X.filterPattern contains DeleteUserPolicy and $.X.filterPattern contains PutGroupPolicy and $.X.filterPattern contains PutRolePolicy and $.X.filterPattern contains PutUserPolicy and $.X.filterPattern contains CreatePolicy and $.X.filterPattern contains DeletePolicy and $.X.filterPattern contains CreatePolicyVersion and $.X.filterPattern contains DeletePolicyVersion and $.X.filterPattern contains AttachRolePolicy and $.X.filterPattern contains DetachRolePolicy and $.X.filterPattern contains AttachUserPolicy and $.X.filterPattern contains DetachUserPolicy and $.X.filterPattern contains AttachGroupPolicy and $.X.filterPattern contains DetachGroupPolicy) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for IAM policy changes This policy identifies the AWS regions which do not have a log metric filter and alarm for IAM policy changes. Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled that logs all management events in your account and is not set with a specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy) }\nand Click on 'Assign Metric'\n6. 
In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(user does not contain appspot.gserviceaccount.com and user does not contain developer.gserviceaccount.com and user does not contain cloudservices.gserviceaccount.com and user does not contain system.gserviceaccount.com and user does not contain cloudbuild.gserviceaccount.com) and (roles contains roles/editor or roles contains roles/owner)'```,"GCP IAM primitive roles are in use This policy identifies GCP IAM users assigned with primitive roles. Primitive roles are Roles that existed prior to Cloud IAM. Primitive roles (owner, editor) are built-in and provide a broader access to resources making them prone to attacks and privilege escalation. Predefined roles provide more granular controls than primitive roles and therefore Predefined roles should be used. Note: For a new GCP project, service accounts are assigned with role/editor permissions. GCP recommends not to revoke the permissions on the SA account. Reference: https://cloud.google.com/iam/docs/service-accounts Limitation: This policy alerts for Service agents which are Google-managed service accounts. Service Agents are by default assigned with some roles by Google cloud and these roles shouldn't be revoked. Reference: https://cloud.google.com/iam/docs/service-agents In case any specific service agent needs to be bypassed, this policy can be cloned and modified accordingly This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: Review the projects / resources that have Primitive roles assigned to them and replace them with equivalent Predefined roles.\nNote: This policy alerts for Service agents which are Google-managed service accounts. Service Agents are by default assigned with some roles by Google cloud and these roles shouldn't be revoked.\nReference: https://cloud.google.com/iam/docs/service-agents\nDo not revoke the roles that are granted to service agents. If you revoke these roles, some Google Cloud services will no longer work.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-credential-user-registration-details' AND json.rule = isMfaRegistered is false as X; config from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = accountEnabled is true as Y; filter '$.X.userDisplayName equals $.Y.displayName'; show X;```,"Custom AlertRule Azure AD MFA is not enabled for the user This policy identifies Azure users for whom AD MFA (Active Directory Multi-Factor Authentication) is not enabled. Azure AD MFA is a simple best practice that adds an extra layer of protection on top of your user name and password. MFA provides increased security for your Azure account settings and resources. 
Enabling Azure AD Multi-Factor Authentication using Conditional Access policies is the recommended approach to protect users. For more details: https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To enable per-user Azure AD Multi-Factor Authentication; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```,"AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = publicNetworkAccess equal ignore case Enabled and networkAccessPolicy equal ignore case AllowAll and managedBy contains virtualMachines```,"Azure VM disk configured with overly permissive network access This policy identifies Azure Virtual Machine disks that are configured with overly permissive network access. Enabling public network access provides overly permissive network access on Azure Virtual Machine disks, increasing the risk of unauthorized access and potential security breaches. Public network access exposes sensitive data to external threats, which attackers could exploit to compromise VM disks. Disabling public access and using Azure Private Link reduces exposure, ensuring only trusted networks have access and enhancing the security of your Azure environment by minimizing the risk of data leaks and breaches. As a security best practice, it is recommended to disable public network access for Azure Virtual Machine disks. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Disks'\n3. Click on the reported disk\n4. Under 'Settings', go to 'Networking'\n5. Ensure that Network access is NOT set to 'Enable public access from all networks'\n6. Click 'Save'." 
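To complement the Azure VM disk network access check above, here is a minimal Python sketch, assuming the azure-identity and azure-mgmt-compute packages (recent enough that the Disk model exposes public_network_access and network_access_policy) and a placeholder subscription ID, that lists attached disks matching roughly the same condition as the RQL rule:

```
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder, not from the original policy

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for disk in client.disks.list():
    # Mirror the json.rule: attached to a VM, public access enabled,
    # and network access policy set to AllowAll.
    attached_to_vm = bool(disk.managed_by) and "virtualMachines" in disk.managed_by
    if (attached_to_vm
            and (disk.public_network_access or "").lower() == "enabled"
            and (disk.network_access_policy or "").lower() == "allowall"):
        print(f"Overly permissive disk: {disk.name}")
```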
"```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""databases-for-postgresql"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resourceGroupId"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for Databases for PostgreSQL service This policy identifies IBM Cloud Service ID, which has policy with administrator role permission for 'Databases for PostgreSQL' service. Service ID has full platform control as an administrator for 'Databases for PostgreSQL' service, including the ability to assign other users access policies and modify deployment passwords. If a Service ID with administrator privilege becomes compromised, it may result in a compromised database. As a security best practice, it is advised to provide the least privilege access, such as allowing only the rights necessary to complete a task, instead of excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Service IDs' in the left panel.\n3. Select the Service ID which is reported and you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section > Click on three dots on the right corner of a row for the policy, which has administrator permission on 'Databases for PostgreSQL' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(137,137) or destinationPortRanges[*] contains _Port.inRange(137,137) ))] exists```","Azure Network Security Group allows all traffic on NetBIOS (UDP Port 137) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on NetBIOS (UDP Port 137). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict NetBIOS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. 
Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled and (properties.networkAcls.defaultAction does not exist or properties.networkAcls.defaultAction equal ignore case Allow)```,"Azure Cognitive Services account configured with public network access This policy identifies Azure Cognitive Services accounts configured with public network access. Overly permissive public network access allows access to the resource through the internet using a public IP address. It is recommended to restrict access to your Cognitive Services account and endpoint to specific public internet IP address ranges so that it is accessible only to restricted entities. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict internet IP ranges on your existing Cognitive Services account, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-virtual-networks?tabs=portal." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.allowCrossTenantReplication exists and properties.allowCrossTenantReplication is true```,"Azure Storage account with cross tenant replication enabled This policy identifies Azure Storage accounts that are enabled with cross tenant replication. Azure Storage account cross tenant replication allows data to be replicated across multiple Azure tenants. Though this feature is beneficial for data availability, it also poses a significant security risk if not properly managed. Possible risks include unauthorized access to data, data leaks, and compliance violations. Disabling Cross Tenant Replication reduces the risk of unauthorized data access and prevents the accidental sharing of sensitive information. As a best practice, it is recommended to disable cross tenant replication on your storage accounts. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage Account dashboard\n3. Click on the reported Storage Account\n4. Under 'Data management', select 'Object replication'\n5. Select 'Advanced settings'\n6. Uncheck 'Allow cross-tenant replication'\n7. Click on 'OK'." 
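For the cross tenant replication policy above, the sketch below assumes azure-identity plus a recent azure-mgmt-storage that exposes allow_cross_tenant_replication, and uses a placeholder subscription ID; it flags offending accounts and shows one way the setting could be turned off programmatically:

```
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for account in client.storage_accounts.list():
    # The policy only flags accounts where the property exists and is true;
    # older accounts may return None here.
    if account.allow_cross_tenant_replication is True:
        resource_group = account.id.split("/")[4]
        print(f"Disabling cross-tenant replication on {account.name}")
        client.storage_accounts.update(
            resource_group,
            account.name,
            StorageAccountUpdateParameters(allow_cross_tenant_replication=False),
        )
```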
"```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case access and roles[?any( role_id is member of (crn:v1:bluemix:public:iam::::role:Administrator,crn:v1:bluemix:public:iam::::role:Editor,crn:v1:bluemix:public:iam::::role:Viewer ) )] exists and resources[?any( attributes[?any( value equal ignore case support and operator is member of (stringEquals, stringMatch))] exists)] exists and subjects[?any( attributes[?any( value contains AccessGroupId)] exists )] exists as X; count(X) less than 1```","IBM Cloud Support Access Group to manage incidents has not been created This policy identifies IBM Cloud accounts with no access group to manage support incidents. Support cases are used to raise issues with IBM Cloud. Users with access to the IBM Cloud Support Center can create and/or manage support tickets based on their IAM role. Support Center access should be managed and assigned using Access Groups. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. In the IBM Cloud console, Under the 'Manage' dropdown click on 'Access (IAM)', and then select Access Groups.\n2. Select 'Create Access Group'.\n3. Give the Access Group a descriptive name, for example, Support Center Viewers or Support Center Admins.\n4. Optionally, provide a brief description.\n5. Click 'Create'.\n6. Once the Access Group is created, click on the 'Access' tab.\n7. Click 'Assign Access'. Under the 'Service' section search for 'Support Center' and select.\n8. Under 'Resources' select All Resources.\n9. Select the Support Center role(s) higher than the viewer.\n10. Click add.\n11. Click Assign.\n12. Click on the 'Users' tab.\n13. Click Add users\n14. Select users from the list and click 'Add to group'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(25,25) or destinationPortRanges[*] contains _Port.inRange(25,25) ))] exists```","Azure Network Security Group allows all traffic on SMTP (TCP Port 25) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on SMTP (TCP Port 25). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SMTP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. 
Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy ajtmu This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-storage-account-queue-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.id contains $.Y.properties.storageAccountId)'; show X;```,"Azure Storage Logging is not Enabled for Queue Service for Read Write and Delete requests This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'oci-networking-subnet' as X; config from cloud.resource where api.name = 'oci-logging-logs' AND json.rule = lifecycleState equals ACTIVE and isEnabled is true and configuration.source.service contains flowlogs as Y; filter 'not ($.X.id contains $.Y.configuration.source.resource)'; show X;```,"OCI VCN subnet flow logging is disabled This policy identifies Virtual Cloud Network (VCN) subnets that have flow logs disabled. Enabling VCN flow logs enables you to monitor traffic flowing within your virtual network and can be used to detect anomalous traffic. Without the flow logs turned on, it is not possible to get any visibility into network traffic. It is recommended to enable a VCN flow log on each of your VCN subnets. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure VCN flow log for reported subnet, follow below URL:\nhttps://docs.oracle.com/en-us/iaas/Content/Network/Tasks/vcn-flow-logs-enable.htm#vcn-flow-logs-enable." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and (engine does not contain aurora and engine does not contain sqlserver and engine does not contain docdb) and (multiAZ is false or multiAZ does not exist)```,"AWS RDS instance with Multi-Availability Zone disabled This policy identifies RDS instances which have Multi-Availability Zone(Multi-AZ) disabled. When RDS DB instance is enabled with Multi-AZ, RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different availability zone. These Multi-AZ deployments will improve primary node reachability by providing read replica in case of network connectivity loss or loss of availability in the primary’s availability zone for read/write operations, so by making them the best fit for production database workloads. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. 
Navigate to Amazon RDS console\n4. Choose Instances, and then select the reported DB instance\n5. Click on 'Modify'\n6. In 'Availability & durability' section for the 'Multi-AZ Deployment', select 'Create a standby instance'\n7. Click on 'Continue'\n8. Under 'Scheduling of modifications' choose 'When to apply modifications'\n9. On the confirmation page, Review the changes and Click on 'Modify DB Instance' to save your changes.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'enableKubernetesAlpha is true'```,"GCP Kubernetes Engine Clusters have Alpha cluster feature enabled This policy identifies GCP Kubernetes Engine Clusters which have enabled alpha cluster. It is recommended to not use alpha clusters or alpha features for production workloads. Alpha clusters expire after 30 days and do not receive security updates. This cluster will not be covered by the Kubernetes Engine SLA. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes Engine Clusters alpha feature cannot be disabled once it is created. So to resolve this alert, create a new cluster with the alpha feature disabled, then migrate all required cluster data from the reported cluster to this newly created cluster and delete reported Kubernetes engine cluster.\n\nTo create new Kubernetes engine cluster with the alpha feature disabled, perform the following: \n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on CREATE CLUSTER button\n5. Set new cluster parameters as per your requirement and make sure 'Enable Kubernetes alpha features in this cluster' is unchecked.\n6. Click on Save\n\nTo delete reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on reported Kubernetes cluster\n5. Click on the DELETE button\n6. On 'Delete a cluster' popup dialog, Click on DELETE to confirm the deletion of the cluster.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-compute-firewall-rules-list' AND json.rule = 'disabled is false and (name equals default-allow-ssh or name equals default-allow-icmp or name equals default-allow-internal or name equals default-allow-rdp) and (deleted is false) and (sourceRanges[*] contains 0.0.0.0/0 or sourceRanges[*] contains ::/0)'```,"GCP Default Firewall rule is overly permissive (except http and https) This policy identifies the Firewall rules that are configured with default firewall rule. The default Firewall rules will apply to all instances by default in the absence of specific custom rules with higher priority. It is a safe practice to not have these rules in the default Firewall. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to 'VPC network' Under service 'NETWORKING'\n3. Click on section 'Firewall' on left panel\n4. For 'default' rule, apply filter 'Name : default-',\n5. select all the rules which start with 'default-' (except http, https) and click on 'DELETE' icon\n6. On pop-up window, click on 'DELETE'." 
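As a rough companion to the GCP default firewall rule policy above, this Python sketch uses the google-cloud-compute client with a placeholder project ID; the delete call is left commented out because removal should only happen after an impact review:

```
from google.cloud import compute_v1

PROJECT_ID = "<project-id>"  # placeholder
DEFAULT_RULES = {"default-allow-ssh", "default-allow-icmp",
                 "default-allow-internal", "default-allow-rdp"}

client = compute_v1.FirewallsClient()
for rule in client.list(project=PROJECT_ID):
    open_to_world = any(r in ("0.0.0.0/0", "::/0") for r in rule.source_ranges)
    if rule.name in DEFAULT_RULES and not rule.disabled and open_to_world:
        print(f"Overly permissive default rule: {rule.name}")
        # Uncomment to remediate by deleting the rule:
        # client.delete(project=PROJECT_ID, firewall=rule.name)
```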
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = kind starts with app and (identity.type does not exist or (identity.type exists and identity.type does not contain SystemAssigned and identity.type does not contain UserAssigned))```,"Azure App Service Web app doesn't have a Managed Service Identity This policy identifies Azure App Services that are not configured with managed service identity. Managed Service Identity in App Service makes the app more secure by eliminating secrets from the app, such as credentials in the connection strings. When registering with Azure Active Directory in the app service, the app will connect to other Azure services securely without the need for username and passwords. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure managed service identity on your reported App Service, follow the below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = 'keyMetadata.origin contains EXTERNAL and keyMetadata.keyManager contains CUSTOMER and keyMetadata.enabled is true and (_DateTime.ageInDays($.keyMetadata.validTo) > -30)'```,"AWS KMS customer managed external key expiring in 30 days or less This policy identifies KMS customer managed external keys which are expiring in 30 days or less. As a best practice, it is recommended to reimport the same key material and specifying a new expiration date. If the key material expires, AWS KMS deletes the key material and the customer managed external key becomes unusable. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS) Dashboard\n4. Click on Customer managed keys (Left Panel)\n5. Click on reported KMS Customer managed key\n6. Under 'Key material' section, Delete the existing key material before you reimport the key material by clicking on 'Delete key material'\n7. Click on 'Upload key material'\n8. Under 'Encrypted key material and import token' section, Reimport same encrypted key material and import token\n9. Under 'Expiration option', Select 'Key material expires' and choose new expiration date in 'Key material expires at' date box\n10. Click on 'Upload key material' button\nNOTE: Deleting key material makes all data encrypted under the customer master key (CMK) unrecoverable unless you later import the same key material into the CMK. The CMK is not affected by this operation.." 
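For the expiring external key material policy above, a small boto3 sketch (the region is an assumption) that reports customer managed external keys whose imported material expires within 30 days:

```
from datetime import datetime, timedelta, timezone
import boto3

kms = boto3.client("kms", region_name="us-east-1")  # region is an assumption
now = datetime.now(timezone.utc)
threshold = timedelta(days=30)

for page in kms.get_paginator("list_keys").paginate():
    for key in page["Keys"]:
        meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        # Match the policy: external origin, customer managed, enabled,
        # and key material valid-to date within the next 30 days.
        if (meta.get("Origin") == "EXTERNAL"
                and meta.get("KeyManager") == "CUSTOMER"
                and meta.get("Enabled")
                and "ValidTo" in meta
                and meta["ValidTo"] - now <= threshold):
            print(f"Key material expiring soon: {meta['Arn']} ({meta['ValidTo']})")
```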
```config from cloud.resource where api.name = 'aws-rds-db-cluster' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '($.X.storageEncrypted is true) and ($.X.kmsKeyId equals $.Y.key.keyArn) and ($.Y.keyMetadata.keyManager does not contain CUSTOMER)' ; show X;```,"AWS RDS DB cluster is encrypted using default KMS key instead of CMK This policy identifies RDS DB(Relational Database Service Database) clusters which are encrypted using default KMS key instead of CMK (Customer Master Key). As a security best practice CMK should be used instead of default KMS key for encryption to gain the ability to rotate the key according to your own policies, delete the key, and control access to the key via KMS policies and IAM policies. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: RDS DB clusters can be encrypted only while creating the database cluster. You can't convert an unencrypted DB cluster to an encrypted one. However, you can restore an unencrypted Aurora DB cluster snapshot to an encrypted Aurora DB cluster. To do this, specify a KMS encryption key when you restore from the unencrypted DB cluster snapshot.\n\nStep 1: To create a 'Snapshot' of the unencrypted DB cluster,\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CreateSnapshotCluster.html\nNOTE: As you can't restore from a DB cluster snapshot to an existing DB cluster; a new DB cluster is created when you restore. Once the Snapshot status is 'Available'.\n\nStep 2: Follow the below link to restoring the Cluster from a DB Cluster Snapshot,\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_RestoreFromSnapshot.html\n\nOnce the DB cluster is restored and verified, follow below steps to delete the reported DB cluster,\n1. Log in to the AWS Management Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'RDS' dashboard from 'Services' dropdown\n4. In the navigation pane, choose ‘Databases’\n5. In the list of DB instances, choose a writer instance for the DB cluster\n6. Choose 'Actions', and then choose 'Delete'\nFMI:\n1. While deleting a RDS DB cluster, customer has to disable 'Enable deletion protection' otherwise instance cannot be deleted\n2. While deleting RDS DB instance , AWS application will ask the end user to take Final snapshot\n3. If a RDS DB cluster has a writer role instance, then User has to delete the write instance to delete the main cluster (Delete option won’t be enabled for main RDS DB cluster)." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = firewallRules.value[*].properties.startIpAddress equals ""0.0.0.0"" or firewallRules.value[*].properties.endIpAddress equals ""0.0.0.0""```","EIP-CSE-IACOHP-AzurePostgreSQL-NetworkAccessibility-eca1500-51 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = hostedZone.config.privateZone is false as X; config from cloud.resource where api.name = 'aws-route53-query-logging-config' as Y; filter ' not ($.X.hostedZone.id equals $.Y.HostedZoneId) ' ; show X;```,"AWS Route53 public Hosted Zone query logging is not enabled This policy identifies AWS Route53 public hosted zones for which DNS query logging is not enabled. Enabling DNS query logging for an AWS Route 53 hosted zone enhances DNS security and compliance by providing visibility into DNS queries. When enabled, Route 53 sends these log files to Amazon CloudWatch Logs. Disabling DNS query logging for AWS Route 53 limits visibility into DNS traffic, hampering anomaly detection, compliance efforts, and effective incident response. It is recommended to enable logging for all public hosted zones to enhance visibility and meet compliance requirements. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure logging for DNS queries for the Hosted zone, perform the following actions:\n\n1. Sign in to the AWS Management Console and open the Route 53 console\n2. In the navigation pane, choose 'Hosted zones'\n3. Choose the hosted zone that is reported\n4. In the Hosted zone details pane, choose 'Configure query logging'\n5. Choose an existing log group or create a new log group from the 'Log group' section drop-down\n6. Choose 'Permissions - optional' to see a table that shows whether the resource policy matches the CloudWatch log group, and whether Route 53 has permission to publish logs to CloudWatch\n7. Choose 'Create'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(53,53) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on DNS port (53) This policy identifies GCP Firewall rules which allow all inbound traffic on DNS port (53). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access to the DNS port (53) be restricted to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." "```config from cloud.resource where api.name = 'aws-account-management-alternate-contact' group by account as X; filter ' AlternateContactType is not member of (""SECURITY"") ' ;```","mnm test This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-acm-pca-certificate-authority' AND json.rule = Type equal ignore case ROOT and Status equal ignore case active```,"AWS Private CA root certificate authority is enabled This policy identifies enabled AWS Private CA root certificate authorities. AWS Private CA enables creating a root CA to issue private certificates for securing internal resources like servers, applications, users, devices, and containers. The root CA should be disabled for daily tasks to minimize risk, as it should only issue certificates for intermediate CAs, allowing it to remain secure while intermediate CAs handle the issuance of end-entity certificates. It is recommended to disable the AWS Private CA root certificate authority to keep it secure. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the status of the Private CA root certificate authority:\n\n1. Sign in to your AWS account and open the AWS Private CA console\n2. On the 'Private certificate authorities' page, choose the reported private CA\n3. On the 'Actions' menu, choose 'Disable' to disable the private CA.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'MFADevice is empty'```,"Alibaba Cloud MFA is disabled for RAM user This policy identifies Resource Access Management (RAM) users for whom Multi Factor Authentication (MFA) is disabled. As a best practice, enable MFA to add an extra layer of protection for increased security of your Alibaba Cloud account settings and resources. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Select the reported user\n5. In the 'Authentication' tab, Click on 'Modify Logon Settings'\n6. Choose the 'Required' radio button for 'Enable MFA' \n7. Click on 'OK'\n8. Click on 'Close'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-group' AND json.rule = mail contains 42```,"dnd_test_create_hyperion_policy_multi_cloud_child_policies_ss_finding_2 Description-4ee38fa0-9684-4c83-b917-035b88e2e243 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = versioning equals Disabled```,"OCI Object Storage Bucket has object Versioning disabled This policy identifies the OCI Object Storage buckets that are not configured with Object Versioning. It is recommended that Object Storage buckets should be configured with Object Versioning to minimize data loss because of inadvertent deletes by an authorized user or malicious deletes. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. 
Click the resource reported in the alert from the Resources submenu\n4. Next to Object Versioning, click Edit.\n5. In the dialog box, click Enable Versioning (to enable).." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = not (pricings[?any(properties.extensions[?any(name equal ignore case AgentlessVmScanning AND isEnabled is true)] exists AND properties.pricingTier equal ignore case Standard )] exists)```,"Azure Microsoft Defender for Cloud set to Off for Agentless scanning for machines This policy identifies Azure Microsoft Defender for Cloud where the Agentless scanning for machines is set to Off. Agentless scanning uses disk snapshots to detect installed software, vulnerabilities, and plain text secrets without needing agents on each machine. When disabled, your environment risks exposure to software vulnerabilities and unauthorized software, diminishing visibility into security issues. Enabling Agentless scanning improves security by identifying vulnerabilities and sensitive data with minimal performance impact, streamlining management and ensuring strong threat detection and compliance. As a security best practice, it is recommended to enable Agentless scanning for machines in Azure Microsoft Defender for Cloud. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Microsoft Defender for Cloud'\n3. Under 'Management', select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click on 'Settings & monitoring' at the top\n7. In the table, find 'Agentless scanning for machines' and select 'On' under Plan\n8. Click 'Continue' in the top left\n9. Click 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = serverAdmins does not exist or serverAdmins[*] size equals 0 or (serverAdmins[*].properties.administratorType exists and serverAdmins[*].properties.administratorType does not equal ActiveDirectory and serverAdmins[*].properties.login is not empty)```,"Dikla test This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-postgresql-deployment-info' AND json.rule = deployment.platform_options.disk_encryption_key_crn is empty```,"IBM Cloud PostgreSQL Database disk encryption is not enabled with customer managed keys This policy identifies IBM Cloud PostgreSQL Databases with default disk encryption. Using customer managed keys significantly increases control, since the keys are managed by the customer. It is recommended to use customer managed keys for disk encryption, which provides customer control over the lifecycle of the keys. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: PostgreSQL database disk encryption can be enabled with Customer managed keys only at the time of\ncreation.\n\nPlease use below link to provide PostgreSQL service to KMS service authorization if not authorized already;\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-key-protect&interface=ui#granting-service-auth\n\nPlease use below link to provision a KMS instance with a key to use for encryption if not provisioned:\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-getting-started-tutorial#create-keys\n\nPlease follow below steps to create a new PostgreSQL deployment from backup of vulnerable PostgreSQL deployment:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list', from the list of resources select PostgreSQL database reported in the alert.\n3. In the left navigation pane, navigate to 'Backups and restore', under 'Available Backups' section click on 'Create backup' to get latest backup of the database.\n4. Under 'Available Backups' tab, click on three dots on the right corner of a row containing latest backup and click on 'Restore backup'.\n5. On create a new Database for PostgreSQL from backup page, select all the configuration as per the requirement.\n6. Under 'Encryption' section, under 'KMS Instance' please select a KMS instance and a key from the instance to use for encryption.\n7. Click on 'Restore backup'.\n\nPlease follow below steps to delete the reported database deployment :\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list'.\n3. Select your deployment. Next, by using the stacked three-dot menu icon , choose Delete from the drop list.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' as X; config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' as Y; filter ' $.X.resourcesVpcConfig.vpcId contains $.Y.vpcId and $.Y.isDefault is true'; show X;```,"AWS EKS cluster using the default VPC This policy identifies AWS EKS clusters which are configured with the default VPC. It is recommended to use a VPC configuration based on your security and networking requirements. You should create your own EKS VPC instead of using the default, so that you can have full control over the cluster network. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: An AWS EKS cluster VPC cannot be changed once it is created. To resolve this alert, create a new cluster with the custom VPC as per your requirements, then migrate all required cluster data from the reported cluster to this newly created cluster and delete the reported Kubernetes cluster.\n\n1. Open the Amazon EKS dashboard.\n2. Choose Create cluster.\n3. On the Create cluster page, fill in the following fields:\n\n- Cluster name\n- Kubernetes version\n- Role name\n- VPC - Choose your new custom VPC.\n- Subnets\n- Security Groups\n- Endpoint private access\n- Endpoint public access\n- Logging\n\n4. Choose Create.." 
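To illustrate the default VPC check above, here is a short boto3 sketch (single region, and the region name is an assumption) that joins each EKS cluster's VPC against the EC2 default VPC flag, much like the two-table RQL filter:

```
import boto3

REGION = "us-east-1"  # assumption; run per region as needed
eks = boto3.client("eks", region_name=REGION)
ec2 = boto3.client("ec2", region_name=REGION)

for name in eks.list_clusters()["clusters"]:
    vpc_id = eks.describe_cluster(name=name)["cluster"]["resourcesVpcConfig"]["vpcId"]
    vpc = ec2.describe_vpcs(VpcIds=[vpc_id])["Vpcs"][0]
    if vpc.get("IsDefault"):
        print(f"EKS cluster '{name}' is using the default VPC ({vpc_id})")
```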
"```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains ""protoPayload.methodName="" or $.X.filter contains ""protoPayload.methodName ="") and ($.X.filter does not contain ""protoPayload.methodName!="" and $.X.filter does not contain ""protoPayload.methodName !="") and $.X.filter contains ""cloudsql.instances.update""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for SQL instance configuration changes This policy identifies the GCP account which does not have a log metric filter and alert for SQL instance configuration changes. Monitoring SQL instance configuration activities will help in reducing time to detect and correct misconfigurations done on sql server. It is recommended to create a metric filter and alarm to detect activities related to the SQL instance configuration. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nprotoPayload.methodName=""cloudsql.instances.update""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.enableRBAC is false```,"Azure AKS enable role-based access control (RBAC) not enforced To provide granular filtering of the actions that users can perform, Kubernetes uses role-based access controls (RBAC). This control mechanism lets you assign users, or groups of users, permission to do things like create or modify resources, or view logs from running application workloads. These permissions can be scoped to a single namespace, or granted across the entire AKS cluster. This policy checks your AKS cluster RBAC setting and alerts if disabled. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a new AKS cluster with RBAC enabled, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac#create-a-new-cluster-using-azure-rbac-and-managed-azure-ad-integration." 
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals Tcp and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists```,"Azure Network Security Group having Inbound rule overly permissive to all traffic on TCP protocol This policy identifies Azure Network Security Groups (NSGs) which are overly permissive to all traffic on TCP protocol. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic from known sources, allowing only authorized protocols and ports. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-attached-user-policies' AND json.rule = attachedPolicies[*].policyArn contains ""arn:aws:iam::aws:policy/AmazonElasticTranscoderFullAccess""```","AWS IAM deprecated managed policies in use by User This policy checks for any usage of deprecated AWS IAM managed policies and returns an alert if it finds one in your cloud resources. When AWS deprecate an IAM managed policy, a new alternative is released with improved access restrictions. Existing IAM users and roles can continue to use the previous policy without interruption, however new IAM users and roles will use the new replacement policy. Before you migrate any user or role to the new replacement policy, we recommend you review their differences in the Policy section of AWS IAM console. If you require one or more of the removed permissions, please add them separately to any user or role. List of deprecated AWS IAM managed policies: AmazonElasticTranscoderFullAccess (replaced by AmazonElasticTranscoder_FullAccess) This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNUSED_PRIVILEGES']. Mitigation of this issue can be done as follows: 1. Go to the AWS console IAM dashboard.\n2. Click Policies on the left navigation menu.\n3. Enter the deprecated IAM policy name into the filter.\n4. Click on the policy name.\n5. Select the Policy usage tab.\n6. Check all attached users, make note of them, then select Detach.\n7. Click Policies on the left navigation menu.\n8. Enter the new IAM policy name into the filter.\n9. Click on the policy name.\n10. Select the Policy usage tab.\n11. 
Select Attach and check all the users you made a note of.\n12. Click Attach policy.." "```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = (sseAlgorithm contains ""aws:kms"" or sseAlgorithm contains ""aws:kms:dsse"") and kmsMasterKeyID exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equals CUSTOMER and policies.default.Statement[?any((Principal.AWS equals * or Principal equals *)and Condition does not exist)] exists as Y; filter '$.X.kmsMasterKeyID contains $.Y.key.keyArn' ; show X;```","AWS S3 bucket encrypted using Customer Managed Key (CMK) with overly permissive policy This policy identifies Amazon S3 buckets that use Customer Managed Keys (CMKs) for encryption that have a key policy overly permissive. Amazon S3 bucket encryption key overly permissive can result in the exposure of sensitive data and potential compliance violations. As a security best practice, It is recommended to follow the principle of least privilege ensuring that the KMS key policy does not have all the permissions to be able to complete a malicious action. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to add changes to existing key policy of the KMS key used by the S3 bucket\n1. Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket reported in the alert.\n3. Click on the 'Properties' tab.\n4. Under the 'Default encryption' section, click on the KMS key link in 'Encryption key ARN'.\n5. Click on the 'Key policy' tab on the navigated KMS key window.\n6. Click on 'Edit'.\n7. Replace the 'Everyone' grantee (i.e. '*') from the Principal element value with an AWS account ID or an AWS account ARN.\n OR \nAdd a Condition clause to the existing policy statement so that the KMS key is restricted.\n8. Click on 'Save Changes'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.publicNetworkAccess equal ignore case Enabled and firewallRules.value[*].properties.startIpAddress equals ""0.0.0.0"" and firewallRules.value[*].properties.endIpAddress equals ""0.0.0.0""```","Azure PostgreSQL Database Server 'Allow access to Azure services' enabled This policy identifies Azure PostgreSQL Database Server which has 'Allow access to Azure services' settings enabled. When 'Allow access to Azure services' settings is enabled, PostgreSQL Database server will accept connections from all Azure resources including other subscription resources as well. It is recommended to use firewall rules or VNET rules to allow access from specific network ranges or virtual networks. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Select the reported PostgreSQL server\n4. Go to 'Connection security' under 'Settings'\n5. Select 'No' for 'Allow access to Azure services' under 'Firewall rules'\n6. Click on 'Save'." 
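As a rough illustration of the 'Allow access to Azure services' check above, the following sketch uses azure-mgmt-rdbms for PostgreSQL single server with a placeholder subscription ID; it is simplified in that it only looks for the 0.0.0.0 - 0.0.0.0 firewall range and does not also verify publicNetworkAccess as the RQL rule does:

```
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.postgresql import PostgreSQLManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = PostgreSQLManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for server in client.servers.list():
    resource_group = server.id.split("/")[4]
    for rule in client.firewall_rules.list_by_server(resource_group, server.name):
        # The 0.0.0.0 - 0.0.0.0 range is how 'Allow access to Azure services'
        # appears in the firewall rule list.
        if rule.start_ip_address == "0.0.0.0" and rule.end_ip_address == "0.0.0.0":
            print(f"'Allow access to Azure services' enabled on {server.name}")
```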
```config from cloud.resource where api.name = 'aws-rds-describe-db-instances' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.storageEncrypted is true and $.X.kmsKeyId equals $.Y.key.keyArn and $.Y.keyMetadata.keyManager contains AWS'; show X;```,"AWS RDS database not encrypted using Customer Managed Key This policy identifies RDS databases that are encrypted with default KMS keys and not with customer managed keys. As a best practice, use customer managed keys to encrypt the data on your RDS databases and maintain control of your keys and data on sensitive workloads. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Because you can set AWS RDS database encryption only during database creation, the process for resolving this alert requires you to create a new RDS database with a customer managed key for encryption, migrate the data from the reported database to this newly created database, and delete the RDS database identified in the alert.\n\nTo create a new RDS database with encryption using a customer managed key:\n1. Log in to the AWS console.\n2. Select the region for which the alert was generated.\n3. Navigate to the Amazon RDS Dashboard.\n4. Select 'Create database'.\n5. On the 'Select engine' page, select 'Engine options' and 'Next'.\n6. On the 'Choose use case' page, select 'Use case' of database and 'Next'.\n7. On the 'Specify DB details' page, specify the database details you need and click 'Next'.\nNote: Amazon RDS encryption has some limitation on region and type instances. For Availability of Amazon RDS Encryption refer to: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Availability\n8. On the 'Configure advanced settings' page, Under 'Encryption', select 'Enable encryption' and select the customer managed key [i.e. Other than (default)aws/rds] from 'Master key' dropdown list].\n9. Select 'Create database'.\n\nTo delete the RDS database that uses the default KMS keys, which triggered the alert:\n1. Log in to the AWS console\n2. Select the region for which the alert was generated.\n3. Navigate to the Amazon RDS Dashboard.\n4. Click on Instances, and select the reported RDS database.\n5. Select the 'Instance actions' drop-down and click 'Delete'.\n7. In the 'Delete' dialog, select the 'Create final snapshot?' checkbox, if you want a backup. Provide a name for the final snapshot, confirm deletion and select 'Delete'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = status equals ""ACTIVE"" AND containerDefinitions[*].readonlyRootFilesystem any false or containerDefinitions[*].readonlyRootFilesystem does not exist```","AWS ECS task definition is not configured with read-only access to container root filesystems This policy identifies the AWS Elastic Container Service (ECS) task definitions with readonlyRootFilesystem parameter set to false or if the parameter does not exist in the container definition within the task definition. ECS root filesystem is the base filesystem that containers run on, providing the necessary environment and isolation for the containerized application. 
If a containerized application is compromised, it could enable an attacker to alter the root file system of the host machine, thus compromising the entire system or application. This could lead to significant data loss, system crashes, or a broader security breach. It is recommended to limit all ECS containers to have read-only access on ECS task definition to limit the potential impact of a compromised container. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To limit ECS task definitions to read-only access to root filesystems, perform the following actions:\n\n1. Sign into the AWS console and navigate to the Amazon ECS console\n2. In the navigation pane, choose 'Task definitions'\n3. Choose the task definition that is reported\n4. Select 'Create new revision', and then click on 'Create new revision'\n5. On the 'Create new task definition revision' page, select the container with Read-only root file system disabled\n6. Under the 'Read-only root file system' section, enable 'Read only'\n7. Specify the remaining configuration as per the requirements\n8. Choose 'Create'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true and resourcesVpcConfig.publicAccessCidrs contains ""0.0.0.0/0""```","AWS EKS cluster public endpoint access overly permissive to all traffic This policy identifies EKS clusters that have an overly permissive public endpoint accessible to all traffic. When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint accepts all connections from public internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). Allowing all traffic to EKS cluster may allow a bad actor to brute force their way into the system and potentially get access to the entire network. As a best practice, restrict traffic solely from known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Either disable public access to your API server so that it's not accessible from the internet and allow only private access, or restrict traffic solely from known static IP addresses.\n\nFor more details on Amazon EKS cluster endpoint access control, follow below mentioned URL:\nhttps://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html." 
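To complement the overly permissive EKS endpoint policy above, a boto3 sketch (region is an assumption, remediation CIDR is a placeholder) that flags clusters whose public endpoint admits 0.0.0.0/0 and shows, commented out, how update_cluster_config could restrict it:

```
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # region is an assumption

for name in eks.list_clusters()["clusters"]:
    vpc_cfg = eks.describe_cluster(name=name)["cluster"]["resourcesVpcConfig"]
    if (vpc_cfg.get("endpointPublicAccess")
            and "0.0.0.0/0" in vpc_cfg.get("publicAccessCidrs", [])):
        print(f"EKS cluster '{name}' API endpoint is open to 0.0.0.0/0")
        # Example remediation (placeholder CIDR); review connectivity impact first:
        # eks.update_cluster_config(
        #     name=name,
        #     resourcesVpcConfig={
        #         "endpointPrivateAccess": True,
        #         "publicAccessCidrs": ["203.0.113.0/24"],
        #     },
        # )
```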
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.changenetworksecuritygroupcompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createnetworksecuritygroup and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletenetworksecuritygroup and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatenetworksecuritygroup) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for Network Security Groups changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Network Security Group (NSG) changes. Monitoring and alerting on changes to security groups will help in identifying changes to traffic flowing between Virtual Network Cards attached to Compute instances. It is recommended that an Event Rule and Notification be configured to catch changes made to Network Security Groups. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting Network Security Group – Change Compartment, Network Security Group – Create, Network Security Group - Delete and Network Security Group – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." 
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains CreateRoute and $.X.filterPattern contains CreateRouteTable and $.X.filterPattern contains ReplaceRoute and $.X.filterPattern contains ReplaceRouteTableAssociation and $.X.filterPattern contains DeleteRouteTable and $.X.filterPattern contains DeleteRoute and $.X.filterPattern contains DisassociateRouteTable) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for Route table changes This policy identifies the AWS regions which do not have a log metric filter and alarm for Route table changes. Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path. It is recommended that a metric filter and alarm be established for changes to route tables. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." 
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/write"" as X; count(X) less than 1```","Azure Activity log alert for Create or update network security group does not exist This policy identifies the Azure accounts in which activity log alert for Create or update network security group does not exist. Creating an activity log alert for Create or update network security group gives insight into network access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Network Security Group (Microsoft.Network/networkSecurityGroups)' and Other fields you can set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-virtual-server-image' AND json.rule = status equals ""available"" and encryption equal ignore case ""none""```","IBM Cloud Virtual Server Image for Virtual Private Cloud (VPC) using basic Provider Managed Encryption This policy identifies IBM Cloud Virtual Server Images for Virtual Private Cloud (VPC) which are not provisioned with Customer Managed Encryption and are using the basic Provider Managed Encryption. With customer-managed encryption, one can import own root keys to the cloud. This process is commonly called ""bring your own key"". When the encryption is managed by a cloud service provider, the image may still be vulnerable to unauthorized user access and manipulation. Customer-managed encryption (Key Protect & Hyper Protect Crypto Service) provides better audit records for root key usage, therefore it is recommended to use Customer Managed Encrypted Images. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: The encryption type of the Image cannot be changed once set. If the image's encryption type is set to default (Provider Managed Encryption), Then the image must be deleted and created once again with Customer Managed Encryption\nTo safely delete the image which has default Provider Managed encryption, follow these steps:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then select 'Images'\n3. Select the 'Image Name' reported in the alert\n4. Click on the 'Actions' dropdown\n5. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals arbitrary and (expiration_date does not exist or (_DateTime.ageInDays(expiration_date) > -1))'```,"IBM Cloud Secrets Manager has expired arbitrary secrets This policy identifies IBM Cloud Secrets Manager arbitrary secret which is expired. 
Arbitrary secrets should be rotated to ensure that data cannot be accessed with an old secret that might have been lost, cracked, or stolen. It is recommended that all arbitrary secrets are set with an expiration date and that expired secrets are regularly rotated. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: If the IBM Cloud Secrets Manager Arbitrary secret is expired, the secret needs to be deleted.\nPlease use the below URL as reference:\nhttps://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-delete-secrets&interface=ui#delete-secret-ui\n\nIf the IBM Cloud Secrets Manager Arbitrary secret is about to expire, the secret has to be rotated.\nPlease use the below URL as reference:\nhttps://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-manual-rotation&interface=ui#manual-rotate-arbitrary-ui\n\nMake sure to set an expiration date for each secret.\nPlease follow the below steps to set an expiration date:\n1. Log in to the IBM Cloud Console\n2. Click on the Menu Icon and navigate to 'Resource list'; from the list of resources, under the Security section, select the Secrets Manager instance in which the reported secret resides.\n3. Select the secret.\n4. Under the 'Expiration date' section, provide the expiration date as required.\n5. Click on 'Update'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-logging-bucket' AND json.rule = name contains ""pk""```","pk-gcp-global This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = 'status.recording is true and status.lastStatus contains FAILURE'```,"AWS Config fails to deliver log files This policy identifies AWS Config recorders that are failing to deliver their log files to the specified S3 bucket. This happens when AWS Config doesn't have sufficient permissions to complete the operation. To deliver information to the S3 bucket, AWS Config needs to assume an IAM role that manages the permissions required to access the designated S3 bucket. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the AWS Config Dashboard\n4. Go to 'Settings' (Left Pane)\n5. In the 'AWS Config role' section, select the 'Choose a role from your account' option and provide a unique name for the new IAM role in the 'Role name' box; this role must have permission to access the S3 bucket.\n6. Click Save." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-subnets-list' AND json.rule = purpose is not member of (REGIONAL_MANAGED_PROXY, PRIVATE_SERVICE_CONNECT, GLOBAL_MANAGED_PROXY, PRIVATE_NAT) and (privateIpGoogleAccess does not exist or privateIpGoogleAccess is false)```","GCP VPC Network subnets have Private Google access disabled This policy identifies GCP VPC Network subnets that have Private Google access disabled. 
Private Google access enables virtual machine instances on a subnet to reach Google APIs and services using an internal IP address rather than an external IP address. Internal (private) IP addresses are internal to Google Cloud Platform and are not routable or reachable over the Internet. You can use Private Google access to allow VMs without Internet access to reach Google APIs, services, and properties that are accessible over HTTP/HTTPS. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to VPC network (Left Panel)\n3. Select VPC networks\n4. Click on the name of a reported subnet, The 'Subnet details' page will be displayed\n5. Click on 'EDIT' button\n6. Set 'Private Google access' to 'On'\n7. Click on 'Save'\n\nFor more information, refer: https://cloud.google.com/vpc/docs/configure-private-google-access#enabling-pga." "```config from cloud.resource where api.name = 'aws-glue-datacatalog' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '($.X.DataCatalogEncryptionSettings.EncryptionAtRest.CatalogEncryptionMode equals ""DISABLED"" or $.X.ConnectionPasswordEncryption.ReturnConnectionPasswordEncrypted equals ""false"") or ($.X.DataCatalogEncryptionSettings.EncryptionAtRest.SseAwsKmsKeyId exists and ($.X.DataCatalogEncryptionSettings.EncryptionAtRest.SseAwsKmsKeyId equals $.Y.keyMetadata.arn or $.X.DataCatalogEncryptionSettings.EncryptionAtRest.SseAwsKmsKeyId starts with ""alias/aws/"")) or ($.X.DataCatalogEncryptionSettings.ConnectionPasswordEncryption.AwsKmsKeyId exists and ($.X.DataCatalogEncryptionSettings.ConnectionPasswordEncryption.AwsKmsKeyId equals $.Y.keyMetadata.arn or $.X.DataCatalogEncryptionSettings.ConnectionPasswordEncryption.AwsKmsKeyId starts with ""alias/aws/""))' ; show X;```","AWS Glue Data Catalog not encrypted by Customer Managed Key (CMK) This policy identifies AWS Glue Data Catalog that is encrypted using the default KMS key instead of CMK (Customer Managed Key) or using the CMK that is disabled. AWS Glue Data Catalog is a managed metadata repository centralizing schema information for AWS Glue resources, facilitating data discovery and management. To protect sensitive data from unauthorized access, users can specify CMK to get enhanced security, and control over the encryption key and comply with any regulatory requirements. It is recommended to use a CMK to encrypt the AWS Glue Data Catalog as it provides complete control over the encrypted data. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable the encryption for Glue data catalog\n1. Sign in to the AWS Management Console, Go to the AWS Management Console at https://console.aws.amazon.com/.\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to AWS Glue: In the 'Find Services' search box, type 'Glue' and select 'AWS Glue' from the search results.\n4. Choose the 'Data Catalog' dropdown in the navigation pane and select 'Catalog settings'.\n5. 
On the 'Data catalog settings' page, select the 'Metadata encryption' check box, and choose an AWS KMS CMK key that you are managing according to your business requirements.\nNote: When you use a customer managed key to encrypt your Data Catalog, the Data Catalog provides an option to register an IAM role to encrypt and decrypt resources. You need to grant your IAM role permissions that AWS Glue can assume on your behalf. This includes AWS KMS permissions to encrypt and decrypt data.\n6. To enable an IAM role that AWS Glue can assume to encrypt and decrypt data on your behalf, select the 'Delegate KMS operations to an IAM role' option.\n7. Select an IAM role equipped with the necessary permissions to conduct the required KMS operations for AWS Glue to assume.\n8. To Encrypt connection passwords, select 'Encrypt connection passwords', and choose an AWS KMS CMK key that you are managing according to your business requirements.\n9. And click 'save'.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_checkpoints')] does not exist or settings.databaseFlags[?(@.name=='log_checkpoints')].value equals off)""```","GCP PostgreSQL instance with log_checkpoints database flag is disabled This policy identifies PostgreSQL instances in which log_checkpoints database flag is not set. Enabling the log_checkpoints database flag would enable logging of checkpoints and restart points to the server log. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Select the PostgreSQL instance for which you want to enable the database flag from the list\n4. Click 'Edit'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Configuration options'\n6. Click 'Add item', choose the flag 'log_checkpoints' from the drop-down menu and set the value to 'on'\nOR\nIf 'log_checkpoints' database flag is already set to 'off', from the drop-down menu set the value to 'on'\n7. Click on 'Save'." 
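For the log_checkpoints remediation above, the same change can be applied programmatically; this is a rough sketch against the Cloud SQL Admin API via google-api-python-client, with placeholder project and instance names. Note that patching settings.databaseFlags replaces the full flag list, so any existing flags should be merged in first.
```python
# Minimal sketch (assumed placeholders): enable the log_checkpoints flag on a
# Cloud SQL for PostgreSQL instance through the sqladmin API.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "settings": {
        "databaseFlags": [
            {"name": "log_checkpoints", "value": "on"},
            # ...include any flags the instance already has, since this list
            # replaces the existing one.
        ]
    }
}

sqladmin.instances().patch(
    project="example-project",    # placeholder project ID
    instance="example-postgres",  # placeholder instance name
    body=body,
).execute()
```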
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains ConsoleLogin and ($.X.filterPattern contains ""errorMessage="" or $.X.filterPattern contains ""errorMessage ="") and ($.X.filterPattern does not contain ""errorMessage!="" and $.X.filterPattern does not contain ""errorMessage !="") and $.X.filterPattern contains ""Failed authentication"") and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for AWS management console authentication failures This policy identifies the AWS accounts which do not have a log metric filter and alarm for AWS management console authentication failures. Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP, that can be used in other event correlation. It is recommended that a metric filter and alarm be established for failed console authentication attempts. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events and is not set with specific log metric filter and alarm in your account. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi trail enabled with all Management Events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = ConsoleLogin) && ($.errorMessage = ""Failed authentication"") }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1, specify metric details and conditions details as required and click on 'Next'\n - In Step 2, Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3, Select name and description to alarm and click on 'Next'\n - In Step 4, Preview your data entered and click on 'Create Alarm'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = $.nodePools[*].management.autoUpgrade is true and $.currentNodeCount less than 3```,"GCP Kubernetes cluster size contains less than 3 nodes with auto upgrade enabled Ensure your Kubernetes cluster size contains 3 or more nodes. (Clusters smaller than 3 may experience downtime during upgrades.) 
This policy checks the size of your cluster pools and alerts if there are fewer than 3 nodes in a pool. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Resize your cluster.\n\n1. Visit the Google Kubernetes Engine menu in GCP Console.\n2. Click the cluster's Edit button, which looks like a pencil.\n3. In the Node pools section, expand the disclosure arrow for the node pool you want to change, and change the value of the Current size field to the desired value, then click Save.\n4. Repeat for each node pool as needed.\n5. Click Save to exit the cluster modification screen.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(21,21) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on FTP port (21) This policy identifies GCP Firewall rules which allow all inbound traffic on FTP port (21). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the FTP port (21) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = kmsKeyId is member of (""null"")```","OCI Object Storage Bucket is not encrypted with a Customer Managed Key (CMK) This policy identifies the OCI Object Storage buckets that are not encrypted with a Customer Managed Key (CMK). It is recommended that Object Storage buckets should be encrypted with a Customer Managed Key (CMK), using Customer Managed Key (CMK), provides an additional level of security on your data by allowing you to manage your own encryption key lifecycle management for the bucket. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Assign next to Encryption Key: Oracle managed key.\n5. Select a Vault from the appropriate compartment\n6. Select a Master Encryption Key\n7. Click Assign." 
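As a hedged sketch of the OCI Object Storage remediation above, the OCI Python SDK can assign a Vault master encryption key to an existing bucket; the bucket name and key OCID below are placeholders, and the call assumes a configured ~/.oci/config profile.
```python
# Minimal sketch (assumed placeholders): assign a customer managed key (CMK)
# to an existing OCI Object Storage bucket.
import oci

config = oci.config.from_file()  # default ~/.oci/config profile assumed
object_storage = oci.object_storage.ObjectStorageClient(config)

namespace = object_storage.get_namespace().data

object_storage.update_bucket(
    namespace_name=namespace,
    bucket_name="example-bucket",                     # placeholder bucket name
    update_bucket_details=oci.object_storage.models.UpdateBucketDetails(
        kms_key_id="ocid1.key.oc1..exampleuniqueID"   # placeholder CMK OCID
    ),
)
```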
"```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(80,80)""```","Alibaba Cloud Security group allow internet traffic to HTTP port (80) This policy identifies Security groups that allow inbound traffic on HTTP port (80) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 80, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'atRestEncryptionEnabled is false or atRestEncryptionEnabled does not exist'```,"AWS ElastiCache Redis cluster with encryption for data at rest disabled This policy identifies ElastiCache Redis clusters which have encryption for data at rest(at-rest) is disabled. It is highly recommended to implement at-rest encryption in order to prevent unauthorized users from reading sensitive data saved to persistent media available on your Redis clusters and their associated cache storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS ElastiCache Redis cluster at-rest encryption can be set only at the time of the creation of the cluster. So to fix this alert, create a new cluster with at-rest encryption, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with at-rest encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption at-rest' checkbox to enable encryption along with other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. 
In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = '(_DateTime.ageInDays(apiKeys[*].timeCreated) > 90)'```,"OCI users API keys have aged more than 90 days without being rotated This policy identifies all of your IAM API keys which have not been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect OCI API access directly or via SDKs or OCI CLI. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from the Services menu.\n3. Select Users from the Identity menu.\n4. Click on an individual user under the Name heading.\n5. Click on API Keys in the lower left hand corner of the page.\n6. Delete any API Keys with a date of 90 days or older under the Created column of the API Key table.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Security/policies/write"" as X; count(X) less than 1```","Azure Activity log alert for Update security policy does not exist This policy identifies the Azure accounts in which activity log alert for Update security policy does not exist. Creating an activity log alert for Update security policy gives insight into changes to security policy and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Update security policy (Microsoft.Security/policies)' and Other fields you can set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-route53-list-hosted-zones' AND json.rule = 'hostedZone.config.privateZone is false and resourceRecordSet[*].type any equal A and (resourceRecordSet[*].resourceRecords[*].value any start with 10. 
or resourceRecordSet[*].resourceRecords[*].value any start with _IPAddress.inRange(""172.%d"",16,31) or resourceRecordSet[*].resourceRecords[*].value any start with 192.168.)'```","AWS Route53 Public Zone with Private Records A hosted zone is a container for records (An object in a hosted zone that you use to define how you want to route traffic for the domain or a subdomain), which include information about how you want to route traffic for a domain (such as example.com) and all of its subdomains (such as www.example.com, retail.example.com, and seattle.accounting.example.com). A hosted zone has the same name as the corresponding domain. A public hosted zone is a container that holds information about how you want to route traffic on the internet for a specific domain.It is best practice to avoid AWS Route 53 Public Hosted Zones containing DNS records for private IPs or resources within your AWS account to overcome information leakage of your internal network and resources. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You can not convert a public hosted zone into a private hosted zone. So, it is recommended to create and configure a Private Hosted Zone to manage private IPs within your Virtual Private Cloud (VPC) as Amazon Route 53 service will only return your private DNS records when queried from within the associated VPC, and delete the associated public hosted zone once the Private hosted zone is configured with all the records.\nTo create a private hosted zone using the Route 53 console:\n1.For each VPC that you want to associate with the Route 53 hosted zone, change the following VPC settings to true:\n 'enableDnsHostnames'\n 'enableDnsSupport'\nFor more information, see Updating DNS Support (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html#vpc-dns-updating) for Your VPC in the Amazon VPC User Guide.\n2. Sign in to the AWS console\n3. Go to Route 53 console\n4. If you are new to Route 53, choose Get Started Now under DNS Management. If you are already using Route 53, choose Hosted Zones in the navigation pane.\n5. Choose 'Create Hosted Zone'\n6. In the Create Private Hosted Zone pane, enter a domain name and, optionally, a comment.\nFor information about how to specify characters other than a-z, 0-9, and - (hyphen) and how to specify internationalized domain names, see DNS Domain Name Format (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DomainNameFormat.html).\n7. In the Type list, choose Private Hosted Zone for Amazon VPC\n8. In the VPC ID list, choose the VPC that you want to associate with the hosted zone. If you want to associate more than one VPC with the hosted zone, you can add VPCs after you create the hosted zone.\nNote: If the console displays the following message, you are trying to associate a VPC with this hosted zone that has already been associated with another hosted zone that has an overlapping namespace, such as example.com and retail.example.com:\n'A conflicting domain is already associated with the given VPC or Delegation Set.'\n9. Choose Create\n10. To associate more VPCs with the new hosted zone, perform the following steps:\n a. Choose Back to Hosted Zones.\n b. Choose the radio button for the hosted zone.\n c. In the right pane, in VPC ID, choose another VPC that you want to associate with the hosted zone.\n d. Choose Associate New VPC.\n e. 
Repeat steps c and d until you have associated all of the VPCs that you want to with the hosted zone.\nFor More Information : https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html\n\nTo delete a public hosted zone using the Route 53 console:\n1. Sign into the AWS console\n2. Go Route53 console\n3. Confirm that the hosted zone that you want to delete contains only an NS and an SOA record. If it contains additional records, delete them:\n a. Choose the name of the hosted zone that you want to delete.\n b. On the Record Sets page, if the list of records includes any records for which the value of the Type column is something other than NS or SOA, choose the row, and choose Delete Record Set. To select multiple, consecutive records, choose the first row, press and hold the Shift key, and choose the last row. To select multiple, non-consecutive records, choose the first row, press and hold the Ctrl key, and choose the remaining rows. Note: If you created any NS records for subdomains in the hosted zone, delete those records, too.\n c. Choose Back to Hosted Zones\n4. On the Hosted Zones page, choose the row for the hosted zone that you want to delete.\n5. Choose Delete Hosted Zone.\n6. Choose OK to confirm.\n7. If you want to make the domain unavailable on the internet, we recommend that you transfer DNS service to a free DNS service and then delete the Route 53 hosted zone. This prevents future DNS queries from possibly being misrouted. If the domain is registered with Route 53, see Adding or Changing Name Servers and Glue Records for a Domain (https://docs.aws.amazon.com/Route53/latest DeveloperGuide/domain-name-servers-glue-records.html) for information about how to replace Route 53 nameservers with name servers for the new DNS service. If the domain is registered with another registrar, use the method provided by the registrar to change name servers for the domain.\nFor More Information : https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DeleteHostedZone.html." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any(tags does not exist and attributes[?any( value equal ignore case ""service"" and name equal ignore case ""serviceType"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name equal ignore case ""region"")] does not exist)] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for all Identity and Access enabled services This policy identifies IBM Cloud Service ID, which has administrator role permission across 'All Identity and Access enabled services'. Service IDs with administrator permission on 'All Identity and Access enabled services' can access all services or resources in the account. If a Service ID with administrator privileges becomes compromised, it may result in compromised resources in the account. As a security best practice, granting the least privilege access, such as granting only the permissions required to perform a task instead of providing excessive permissions, is recommended. 
This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Service IDs' in the left panel.\n3. Select the Service ID that is reported and that you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section > Click on three dots on the right corner of a row for the policy, which has administrator permission on 'All Identity and Access enabled services' \n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = containerDefinitions[*].user exists and containerDefinitions[*].user contains root```,"AWS ECS Fargate task definition root user found This policy identifies AWS ECS Fargate task definition which has user name as root. As a best practice, the user name to use inside the container should not be root. Note: This parameter is not supported for Windows containers. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: Create a task definition revision.\n\n1. Open the Amazon ECS console.\n2. From the navigation bar, choose the region that contains your task definition.\n3. In the navigation pane, choose Task Definitions.\n4. On the Task Definitions page, select the box to the left of the task definition to revise and choose Create new revision.\n5. On the Create new revision of Task Definition page, change the existing Container Definitions.\n6. Under Security, remove root from the User field.\n7. Verify the information and choose Update, then Create.\n8. If your task definition is used in a service, update your service with the updated task definition.\n9. Deactivate previous task definition." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(22,22)""```","Alibaba Cloud Security group allow internet traffic to SSH port (22) This policy identifies Security groups that allow inbound traffic on SSH port (22) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 22, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'." 
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'minimumPasswordLength does not exist or minimumPasswordLength less than 14'```,"Alibaba Cloud RAM password policy does not have a minimum of 14 characters This policy identifies Alibaba Cloud accounts that do not have a minimum of 14 characters in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password Length' field, enter 14 as the minimum number of characters for password complexity.\n6. Click on 'OK'\n7. Click on 'Close'." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-volume' AND json.rule = volume_attachments[*] size equals 0 and encryption equal ignore case provider_managed```,"IBM Cloud unattached disk is not encrypted with customer managed key This policy identifies IBM Cloud unattached disks (storage volume) which are not encrypted with customer managed keys. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A disk (boot storage volume) can be encrypted with customer managed keys only at the time of\ncreation. Please delete reported data disk following below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-managing-block-storage&interface=ui#delete\n\nBefore deleting a disk, make sure to take snapshot of the disk by attaching it to a virtual\nserver instance and follow below URL to create a snapshot:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy mtmay This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals Dns and properties.pricingTier does not equal Standard)] exists```,"Copy of Azure Microsoft Defender for Cloud set to Off for DNS This policy identifies Azure Microsoft Defender for Cloud which has defender setting for DNS set to Off. Enabling Azure Defender provides advanced security capabilities like providing threat intelligence, anomaly detection, and behavior analytics in the Azure Microsoft Defender for Cloud. Defender for DNS monitors the queries and detects suspicious activities without the need for any additional agents on your resources. It is highly recommended to enable Azure Defender for DNS. This is applicable to azure cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Expand 'Select Defender plan by resource type'\n7. Select 'On' status for 'DNS' under the column 'Microsoft Defender for'\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-redshift-describe-clusters' AND json.rule='encrypted is false'```,"AWS Redshift instances are not encrypted This policy identifies AWS Redshift instances which are not encrypted. These instances should be encrypted for clusters to help protect data at rest which otherwise can result in a data breach. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption on your Redshift cluster follow the steps mentioned in below URL:\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(5900,5900)""```","Alibaba Cloud Security group allow internet traffic to VNC Server port (5900) This policy identifies Security groups that allow inbound traffic on VNC Server port (5900) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 5900, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where api.name = 'gcloud-domain-users' AND json.rule = isAdmin is true and isEnrolledIn2Sv is false and archived is false and suspended is false```,"GCP Google Workspace Super Admin not enrolled with 2-step verification This policy identifies Google Workspace Super Admins that do not have 2-Step Verification enabled. Super Admin accounts have access to all features in the Admin console and Admin API. This additional layer of 2SV significantly reduces the risk of unauthorized access, protecting administrative controls and sensitive data from potential breaches. Implementing 2-Step Verification safeguards your entire Google Workspace environment, maintaining robust security and compliance standards. It is recommended to enable 2-Step Verification for all Super Admins as it provides an additional layer of security in case account credentials are compromised. This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Workspace users should be allowed to turn on 2-Step verification (2SV) before enabling 2SV. Follow the steps mentioned below to allow users to turn on 2SV.\n1. Sign in to Workspace Admin Console with an administrator account. \n2. Go to Menu then 'Security' > 'Authentication' > '2-step verification'.\n3. Check the 'Allow users to turn on 2-Step Verification' box.\n4. Select 'Enforcement' as per need.\n5. Click Save.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/a/answer/9176657\n\n\nTo enable 2-Step Verification for GCP Workspace User accounts, follow the steps below.\n1. Open your Google Account.\n2. In the navigation panel, select 'Security'.\n3. Under 'How you sign in to Google', select '2-Step Verification' > 'Get started'.\n4. Follow the on-screen steps.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/accounts/answer/185839." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any((Condition.ForAnyValue:IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0 or Condition.ForAnyValue:IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith ecs:)] exists```,"AWS ECS IAM policy overly permissive to all traffic This policy identifies ECS IAM policies that are overly permissive to all traffic. It is recommended that the ECS should be granted access restrictions so that only authorized users and applications have access to the service. For more details: https://docs.aws.amazon.com/AmazonECS/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_service-with-iam-policy-best-practices This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Login to AWS console\n2. Goto IAM Services\n3. Click on 'Policies' in left hand panel\n4. Search for the Policy for which the Alert is generated and click on it\n5. Under Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'ECS' Service, click to expand and perform following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-networkfirewall-firewall' AND json.rule = FirewallStatus.Status equals READY and Firewall.DeleteProtection is false```,"VenuTestCLi This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_planner_stats or settings.databaseFlags[?any(name contains log_planner_stats and value contains on)] exists)""```","GCP PostgreSQL instance database flag log_planner_stats is not set to off This policy identifies PostgreSQL database instances in which database flag log_planner_stats is not set to off. 
The PostgreSQL planner/optimizer is responsible to create an optimal execution plan for each query. The log_planner_stats flag controls the inclusion of PostgreSQL planner performance statistics in the PostgreSQL logs for each query. This can be useful for troubleshooting but may increase the number of logs significantly and have performance overhead. It is recommended to set log_planner_stats off. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_planner_stats' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_planner_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and ($.X.filterPattern contains ""eventSource="" or $.X.filterPattern contains ""eventSource ="") and ($.X.filterPattern does not contain ""eventSource!="" and $.X.filterPattern does not contain ""eventSource !="") and $.X.filterPattern contains kms.amazonaws.com and $.X.filterPattern contains DisableKey and $.X.filterPattern contains ScheduleKeyDeletion) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for disabling or scheduled deletion of customer created CMKs This policy identifies the AWS regions which do not have a log metric filter and alarm for disabling or scheduled deletion of customer created CMKs. Data encrypted with disabled or deleted keys will no longer be accessible. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. 
In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-application-gateway' AND json.rule = ['properties.provisioningState'] equal ignore case Succeeded AND ['properties.httpListeners'][*].['properties.provisioningState'] equal ignore case Succeeded AND ['properties.httpListeners'][*].['properties.protocol'] equal ignore case Https AND ['properties.httpListeners'][*].['properties.sslProfile'].['id'] does not exist```,"Azure Application Gateway listener not secured with SSL profile This policy identifies Azure Application Gateway listeners that are not secured with an SSL profile. An SSL profile provides a secure channel by encrypting the data transferred between the client and the application gateway. Without SSL profiles, the data transferred is vulnerable to interception, posing security risks. This could lead to potential data breaches and compromise sensitive information. As a security best practice, it is recommended to secure all Application Gateway listeners with SSL profiles. This ensures data confidentiality and integrity by encrypting traffic. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Application gateways'.\n2. Select 'Application gateways'.\n3. Click on reported Application gateway.\n4. Under 'Settings' select 'Listeners' from the left-side menu.\n5. Select the HTTPS listener.\n6. Check the 'Enable SSL Profile' box.\n7. Select the SSL profile you created (e.g., applicationGatewaySSLProfile) from the dropdown. If no profile exists, you'll need to create one first.\n8. Finish configuring the listener as needed.\n9. Click 'Add' to save the listener with the SSL profile.." ```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = hostedZone.config.privateZone is false and resourceRecordSet[?any( type equals CNAME and resourceRecords[*].value contains elasticbeanstalk.com)] exists as X; config from cloud.resource where api.name = 'aws-elasticbeanstalk-environment' as Y; filter 'not (X.resourceRecordSet[*].resourceRecords[*].value intersects $.Y.cname)'; show X;```,"AWS Route53 Hosted Zone having dangling DNS record with subdomain takeover risk associated with AWS Elastic Beanstalk Instance This policy identifies AWS Route53 Hosted Zones which have dangling DNS records with subdomain takeover risk. A Route53 Hosted Zone having a CNAME entry pointing to a non-existing Elastic Beanstalk (EBS) will have a risk of these dangling domain entries being taken over by an attacker by creating a similar Elastic beanstalk (EBS) in any AWS account which the attacker owns / controls. 
Attackers can use this domain to do phishing attacks, spread malware and other illegal activities. As a best practice, it is recommended to delete dangling DNS records entry from your AWS Route 53 hosted zones. Note: Please ignore the reported alert if the Elastic Beanstalk (EBS) configured in the Route53 Hosted Zone DNS record are in different accounts. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['RESOURCE_HIJACKING']. Mitigation of this issue can be done as follows: Identify DNS record entry pointing to a non-existing Elastic Beanstalk (EBS) resource.\n\nTo remove DNS record entry, follow steps given in following URL:\nhttps://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-deleting.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudwatch-log-group' AND json.rule = retentionInDays exists and retentionInDays less than 365```,"AWS CloudWatch log groups retention set to less than 365 days This policy identifies the AWS CloudWatch LogGroups having a retention period set to less than 365 days. CloudWatch Logs centralize and store logs from AWS services and systems. 1-year retention of the logs aids in compliance with log retention standards. Shorter retention periods can lead to the loss of historical logs needed for audits, forensic analysis, and compliance, increasing the risk of undetected issues or non-compliance. It is recommended that AWS CloudWatch log group retention be set to at least 365 days to meet compliance needs and support audits, investigations, and analysis. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the log retention setting, perform the following actions:\n\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'CloudWatch Dashboard' by selecting 'CloudWatch' under the 'Management & Governance' in All services\n4. In the navigation pane, choose 'Log groups' under the 'Logs' section\n5. Select the log group that is reported and select 'Edit retention setting(s)' under the 'Actions' drop-down\n6. In 'Retention setting', for 'Expire events after', choose a log retention value either 'Never expire' or the value more than 365 days according to your business requirements\n7. Choose 'Save'." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-acl' AND json.rule = rules[?any( action equals allow and direction equals outbound and destination equals 0.0.0.0/0 )] exists```,"IBM Cloud ACL for VPC with overly permissive egress rule This policy identifies IBM Cloud VPC Access Control List which are having overly permissive outbound rules allowing outgoing traffic to internet (0.0.0.0/0). ACL contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure ACL to restrict traffic to known destination on authorised protocols and ports. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. 
If the VPC ACL reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Access control lists'\n3. Select the Access control list reported in the alert\n4. Go to 'Outbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Destination Type' as 'Any' or 'IP or CIDR' as '0.0.0.0/0'\n6. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule='loggingConfiguration.targetBucket equals null or loggingConfiguration.targetPrefix equals null'```,"AWS Access logging not enabled on S3 buckets Checks for S3 buckets without access logging turned on. Access logging allows customers to view complete audit trail on sensitive workloads such as S3 buckets. It is recommended that Access logging is turned on for all S3 buckets to meet audit & compliance requirement This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service.\n2. Click on the the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select 'Enable logging' option.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-enforcement-policy' AND json.rule = isEnabled is false```,"Azure Active Directory Security Defaults is disabled This policy identifies Azure Active Directory which have Security Defaults configuration disabled. Security Defaults contains preconfigured security settings for common identity-related attacks. This provides a basic level of security-enabled by default. It is recommended to enable this configuration as a security best practice. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Active Directory'\n3. Select 'Properties' under 'Manage'\n4. Click on 'Manage Security defaults' if not selected\n5. Under 'Enable Security defaults' select 'Yes' for 'Enable Security defaults'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded AND properties.publicNetworkAccess equal ignore case Enabled AND properties.virtualNetworkRules[*] is empty```,"Azure Cosmos DB Virtual network is not configured This policy identifies Azure Cosmos DBs that are not configured with a Virtual network. Azure Cosmos DB by default is accessible from any source if the request is accompanied by a valid authorization token. By configuring Virtual network only requests originating from those subnets will get a valid response. It is recommended to configure Virtual network to Cosmos DB. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL to configure Virtual networks on your Cosmos DB:\nhttps://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint." 
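For the S3 access logging policy above, enabling server access logging can also be scripted instead of using the console. The following is a minimal sketch using the AWS CLI; the bucket names and prefix are placeholders, and the target log bucket must already exist and permit the S3 log delivery service to write to it.

```
# Sketch: enable server access logging on a bucket ('my-app-bucket' and 'my-log-bucket' are placeholders).
aws s3api put-bucket-logging \
  --bucket my-app-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "my-log-bucket",
      "TargetPrefix": "access-logs/my-app-bucket/"
    }
  }'

# Verify the configuration was applied
aws s3api get-bucket-logging --bucket my-app-bucket
```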
"```config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' AND json.rule = profile does not equal RESTRICTED and profile does not equal CUSTOM as X; config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' AND json.rule = sslPolicy exists as Y; filter "" $.X.selfLink contains $.Y.sslPolicy ""; show Y;```","GCP HTTPS Load balancer SSL Policy not using restrictive profile This policy identifies HTTPS Load balancers which are not using restrictive profile in it's SSL Policy, which controls sets of features used in negotiating SSL with clients. As a best security practice, use RESTRICTED as SSL policy profile as it meets stricter compliance requirements and does not include any out-of-date SSL features. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'advanced menu' hyperlink to view target proxies\n5. Click on 'Target proxies' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'URL map'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. Select SSL policy which uses the RESTRICTED/CUSTOM profile or if no SSL policy is already present then create a new SSL policy with RESTRICTED as Profile.\nNOTE: If you choose CUSTOM as profile then make sure you are using profile features equally restrictive as the RESTRICTED profile or more than the RESTRICTED profile.\n11. Click on 'Done'\n12. Click on 'Update'." "```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-monitor-log-profiles-list' as Y; filter '($.X.properties.encryption.keySource does not equal ""Microsoft.Keyvault"" and $.X.properties.encryption.keyvaultproperties.keyname is not empty and $.X.properties.encryption.keyvaultproperties.keyversion is not empty and $.X.properties.encryption.keyvaultproperties.keyvaulturi is not empty and $.Y.properties.storageAccountId contains $.X.name)'; show X;```","Azure Storage Account Container with activity log has BYOK encryption disabled This policy identifies the Storage Accounts in which container with activity log has BYOK encryption disabled. Azure storage account with the activity logs being exported to container should use BYOK (Use Your Own Key) for encryption, which provides additional confidentiality controls on log data. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and Click on reported storage account\n3. Under the Settings menu, click on Encryption\n4. Select Customer Managed Keys\n- Choose 'Enter key URI' and Enter 'Key URI'\nOR\n- Choose 'Select from Key Vault', Enter 'Key Vault' and 'Encryption Key'\n5. Click on 'Save'." ```config from cloud.resource where resource.status = Deleted and api.name = 'aws-securityhub-hub' AND json.rule = SubscribedAt exists```,"test-resource-status This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind contains functionapp and kind does not contain workflowapp and kind does not equal app and properties.state equal ignore case running and ((properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled) or (properties.publicNetworkAccess does not exist)) and config.ipSecurityRestrictions[?any((action equals Allow and ipAddress equals Any) or (action equals Allow and ipAddress equals 0.0.0.0/0))] exists'```,"Azure Function app configured with public network access This policy identifies Azure Function apps that are configured with public network access. Publicly accessible web apps could allow malicious actors to remotely exploit any vulnerabilities. It is recommended to configure the Function apps with private endpoints so that the functions hosted are accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict App Service access, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions." ```config from cloud.resource where api.name = 'aws-iam-service-last-accessed-details' AND json.rule = '(arn contains :role or arn contains :user) and serviceLastAccesses[?any(serviceNamespace contains cloudtrail and totalAuthenticatedEntities any equal 0)] exists' as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = 'isAttached is true and (document.Statement[?any(Effect equals Allow and (Action[*] contains DeleteTrail or Action contains DeleteTrail or Action contains cloudtrail:* or Action[*] contains cloudtrail:*))] exists)' as Y; filter '($.Y.entities.policyRoles[*].roleName exists and $.X.arn contains $.Y.entities.policyRoles[*].roleName) or ($.Y.entities.policyUsers[*].userName exists and $.X.arn contains $.Y.entities.policyUsers[*].userName)'; show X;```,"AWS IAM role/user with unused CloudTrail delete or full permission This policy identifies IAM roles/users that have unused CloudTrail delete permission or CloudTrail full permissions. As a security best practice, it is recommended to grant least-privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions to a particular role/user. It helps to reduce the potential for improper or unintended access to your critical CloudTrail infrastructure. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: If roles have unused CloudTrail delete permission,\n1. Log in to AWS console\n2. Navigate to the IAM service\n3. Click on Roles\n4. Click on the reported IAM role\n5. In the Permissions tab, under the 'Permissions policies' section, remove the policies which have CloudTrail permissions or delete the role if it is not required.\n\nIf users have unused CloudTrail delete permission,\n1. Log in to AWS console\n2. Navigate to the IAM service\n3. Click on Users\n4. Click on the reported IAM user\n5. In the Permissions tab, under the 'Permissions policies' section, remove the policies which have CloudTrail permissions or delete the user if it is not required." 
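For the unused CloudTrail permission policy above, reviewing and detaching the offending managed policy can also be done from the AWS CLI. A minimal sketch follows; the role name and policy ARN are placeholders and should be taken from the alert details.

```
# Sketch: list managed policies attached to the reported role, then detach the one
# granting CloudTrail delete/full permissions ('example-role' and the policy ARN are placeholders).
aws iam list-attached-role-policies --role-name example-role

aws iam detach-role-policy \
  --role-name example-role \
  --policy-arn arn:aws:iam::123456789012:policy/example-cloudtrail-admin

# For users instead of roles, the equivalent commands are
# list-attached-user-policies / detach-user-policy.
```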
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = sku.tier equals ""Premium"" and properties.status equals ""Active"" and networkRuleSets[*].properties.defaultAction equals ""Allow"" and networkRuleSets[*].properties.publicNetworkAccess equals Enabled```","Azure Service bus namespace configured with overly permissive network access This policy identifies Azure Service bus namespaces configured with overly permissive network access. By default, Service Bus namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With an IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges. With Virtual Networks, the network traffic path is secured on both ends. It is recommended to configure the Service bus namespace with an IP firewall or by Virtual Network; so that the Service bus namespace is accessible only to restricted entities. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict Service bus namespace access to only a set of IPv4 addresses or IPv4 address ranges; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-ip-filtering\n\nTo restrict Service bus namespace access with a virtual network; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-service-endpoints." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-roles' AND json.rule = role.assumeRolePolicyDocument.Statement[*].Action contains ""sts:AssumeRoleWithWebIdentity"" and role.assumeRolePolicyDocument.Statement[*].Principal.Federated contains ""cognito-identity.amazonaws.com"" and role.assumeRolePolicyDocument.Statement[*].Effect contains ""Allow"" and role.assumeRolePolicyDocument.Statement[*].Condition.StringEquals does not contain ""cognito-identity.amazonaws.com:aud""```","AWS Cognito service role does not have identity pool verification This policy identifies the AWS Cognito service role that does not have identity pool verification. AWS Cognito is an identity and access management service for web and mobile apps. AWS Cognito service roles define permissions for AWS services accessing resources. The 'aud' claim in a cognito service role is an identity pool token that specifies the intended audience for the token. If the aud claim is not enforced in the cognito service role trust policy, it could potentially allow tokens issued for one audience to be used to access resources intended for a different audience. This oversight increases the risk of unauthorized access, compromising access controls and elevating the potential for data breaches within the AWS environment. It is recommended to implement proper validation of the 'aud' claim by adding the 'aud' in the Cognito service role trust policy. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNAUTHORIZED_ACCESS']. Mitigation of this issue can be done as follows: To mitigate the absence of 'aud' claim validation in service roles associated with Cognito identity pools, follow these steps:\n\n1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.\n2. In the navigation pane of the IAM console, choose 'Roles'.\n3. 
In the list of roles in account, choose the name of the role that is reported.\n4. Choose the 'Trust relationships' tab, and then choose 'Edit trust policy'.\n5. Edit the trust policy, add a condition to verify that the 'aud' claim matches the expected identity pool.\n6. Click 'Update Policy'.\n\nRefer to the below link to add the required aud validation in service roles\nhttps://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html#creating-roles-for-role-mapping." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = metadata.items[?any(key contains ""serial-port-enable"" and value contains ""true"")] exists and (status equals RUNNING and name does not start with ""gke-"")```","GCP VM instances have serial port access enabled This policy identifies VM instances which have serial port access enabled. Interacting with a serial port is often referred to as the serial console. The interactive serial console does not support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. So it is recommended to keep interactive serial console support disabled. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Computer Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Edit\n6. Under Remote access section, Uncheck 'Enable connecting to serial ports'\n7. Click on Save button." "```config from cloud.resource where api.name = 'azure-cognitive-services-account-diagnostic-settings' AND json.rule = (properties.logs[?any(enabled equal ignore case ""true"")] exists or properties.metrics[?any( enabled equal ignore case ""true"" )] exists) and properties.storageAccountId exists as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = 'totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist)' as Y; filter '$.X.properties.storageAccountId contains $.Y.id'; show Y;```","Azure Storage Account storing Cognitive service diagnostic logs is publicly accessible This policy identifies Azure Storage Accounts storing Cognitive service diagnostic logs are publicly accessible. Azure Storage account stores Cognitive service diagnostic logs which might contain detailed information of platform logs, resource logs, trace logs and metrics. Diagnostic log data may contain sensitive data and helps in identifying potentially malicious activity. The attacker could exploit publicly accessible storage account to get cognitive diagnostic data logs and could breach into the system by leveraging exposed data and propagate across your system. As a best security practice, it is recommended to restrict storage account access to only the services as per business requirement. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. 
Log in to the Azure portal\n2. Navigate to 'Storage Accounts' dashboard\n3. Select the reported storage account\n4. Under 'Data storage' section, Select 'Containers'\n5. Select the blob container you need to modify\n6. Click on 'Change access level'\n7. Set 'Public access level' to 'Private (no anonymous access)'\n8. Click on 'OK'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = instanceOptions.areLegacyImdsEndpointsDisabled is false```,"OCI Compute Instance has Legacy MetaData service endpoint enabled This policy identifies the OCI Compute Instances that are configured with Legacy MetaData service (IMDSv1) endpoints enabled. It is recommended that Compute Instances should be configured with legacy v1 endpoints (Instance Metadata Service v1) being disabled, and use Instance Metadata Service v2 instead following security best practices. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. In the Instance Details section, next to Instance Metadata Service, click Edit.\n5. For the Allowed IMDS version, select the Version 2 only option.\n6. Click Save Changes.\n\nNote : \nIf you disable IMDSv1 on an instance that does not support IMDSv2, you might not be able to connect to the instance when you launch it. To re enable IMDSv1: using the Console, on the Instance Details page, next to Instance Metadata Service, click Edit. Select the Version 1 and version 2 option, save your changes, and then restart the instance. Using the API, use the UpdateInstance operation.\n\nFMI : https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/gettingmetadata.htm#upgrading-v2." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dax-cluster' AND json.rule = Status equals ""available"" and SSEDescription.Status equals ""DISABLED""```","AWS DAX cluster not configured with encryption at rest This policy identifies the AWS DAX cluster where encryption at rest is disabled. AWS DAX cluster encryption at rest provides an additional layer of data protection, helping secure your data from unauthorized access to underlying storage. Without encryption, anyone with access to the storage media could potentially intercept and view the data. It is recommended to enable encryption at rest for the AWS DAX cluster. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable DAX encryption at rest while creating the new DynamoDB cluster, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to 'DynamoDB' service under the 'Database' section in 'Services' menu\n4. In the navigation pane on the left side of the console, under 'DAX', choose 'Clusters'\n5. Choose 'Create cluster'\n6. For Cluster name , and other configurations set according to your reported DAX cluster\n7. On the 'Configure security' panel, In 'Encryption' section, select the checkbox 'Turn on encryption at rest' and Click 'Next'\n8. 
On the 'Verify advanced settings' page, configure settings according to your reported DAX cluster and click 'Next'\n9. On the 'Review and create' page, click 'Create cluster'\n\nOnce the new cluster is created, change the cluster endpoint within your DynamoDB application to reference the new resource.\n\nTo delete the existing DAX cluster where encryption is not enabled:\n\n1. Sign in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to 'DynamoDB' service under the 'Database' section in 'Services' menu\n4. In the navigation pane on the left side of the console, under 'DAX', choose Clusters\n5. Select the reported DAX cluster that needs to be removed\n6. Click 'Delete' to delete the cluster." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (elasticsearchClusterConfig.zoneAwarenessEnabled is false or elasticsearchClusterConfig.zoneAwarenessEnabled does not exist)'```,"AWS Elasticsearch domain has Zone Awareness set to disabled This policy identifies Elasticsearch domains for which Zone Awareness is disabled in your AWS account. Enabling Zone Awareness (cross-zone replication) increases availability by distributing your Elasticsearch data nodes across two availability zones in the same AWS region. It also prevents data loss and minimizes downtime in the event of node or availability zone failure. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the Zone Awareness feature on an existing Elasticsearch domain, the following CLI command can be used (placeholders in angle brackets):\naws es update-elasticsearch-domain-config --domain-name <domain-name> --region <region> --elasticsearch-cluster-config ZoneAwarenessEnabled=true,ZoneAwarenessConfig={AvailabilityZoneCount=<2 or 3>}\n\nFor more information, refer to:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-multiaz.html." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-private-endpoint' AND json.rule = properties.privateLinkServiceConnections[*].properties.privateLinkServiceId is not empty and properties.privateLinkServiceConnections[*].properties.privateLinkServiceId contains id```,"Test-Uilian This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.permissionGrantPoliciesAssigned[*] contains microsoft-user-default-legacy```,"Azure AD Users can consent to apps accessing company data on their behalf is enabled This policy identifies Azure Active Directory tenants which have the 'Users can consent to apps accessing company data on their behalf' configuration enabled. User profiles contain private information which could be shared with others without requiring any further consent from the user if this configuration is enabled. It is recommended not to allow users to use their identity outside of the cloud environment. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. 
Mitigation of this issue can be done as follows: To configure user consent to apps accessing company data on their behalf, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent?pivots=portal." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = 'user does not end with @yourcompanydomainname and user does not end with gserviceaccount.com'```,"CUSTOMIZE: Non-corporate accounts have access to Google Cloud Platform (GCP) resources Using personal accounts to access GCP resources may compromise the security of your business. Using fully managed corporate Google accounts to access Google Cloud Platform resources is recommended to make sure that your resources are secure. NOTE : This policy requires customization before using it. To customize, follow the steps mentioned below: - Clone this policy and replace '@yourcompanydomainname' in RQL with your domain name. For example: 'user does not end with @prismacloud.io and user does not end with gserviceaccount.com'. - For multiple domains, update the RQL with conditions for each domain. For example: 'user does not end with @prismacloud.io and user does not end with @prismacloud.com and user does not end with gserviceaccount.com'. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['USER_ANOMALY']. Mitigation of this issue can be done as follows: It is recommended to use fully managed corporate Google accounts for increased visibility, auditing, and control over access to Google Cloud Platform resources. Do not access GCP resources through personal accounts.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'config.isPhpVersionLatest exists and config.isPhpVersionLatest equals false'```,"Azure App Service Web app doesn't use latest PHP version This policy identifies App Service Web apps that are not configured with latest PHP version. Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. It is recommended to use the latest PHP version for web apps in order to take advantage of security fixes, if any. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'App Services' dashboard\n3. Select the reported web app service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Click on 'General settings' tab, Ensure that Stack is set to PHP and Minor version is set to latest version.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = profile.family equal ignore case application and operating_status equal ignore case online and pools[?any( health_monitor.type does not equal ignore case https )] exists```,"IBM Cloud Application Load Balancer for VPC has backend pool with health check protocol not configured with HTTPS This policy identifies IBM Cloud Application Load Balancers for VPC that has different health check protocol instead of HTTPS. HTTPS pools uses TLS(SSL) to encrypt normal HTTP requests and responses. It is highly recommended to use application load balancers with HTTPS backend pools for additional security. 
This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console \n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Load balancers'\n3. Select the 'Load balancers' reported in the alert\n4. Under ‘Back-end pools' tab, click on three dots on the right corner of a row containing back-end pool with health check protocol other than HTTPS. Then click on 'Edit’\n5. In the 'Edit back-end pool' screen, under 'Health check' section, select 'HTTPS' from the 'Health protocol' dropdown.\n6. Click on 'Save'." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""cloud-object-storage"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and (attributes[?any( name is member of (""resource"",""resourceGroupId"",""serviceInstance"",""prefix""))] does not exist or attributes[?any( name equal ignore case ""resourceType"" and value equal ignore case ""bucket"" )] exists ) )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Cloud object storage buckets This policy identifies IBM Cloud users with overly permissive administrative role on IBM Cloud cloud object storage service. IBM Cloud Object Storage is a highly scalable, resilient, and secure managed data storage service on the IBM Cloud platform that offers an alternative to traditional block and file storage solutions. When a user having a policy with admin rights on object storage gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and then click on 'Users' in the left panel.\n3. Select the user for whom you want to edit access.\n4. Go to the 'Access' tab, and under the 'Access policies' section, click on the three dots on the right corner of a row for the policy that has administrator permission on the 'IBM Cloud Object Storage' service.\n5. Click on Remove or Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to edit or remove, and confirm by clicking Save or Remove.." 
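The review of a user's access policies described in the IBM Cloud Object Storage item above can also be approached from the command line. The sketch below assumes the IBM Cloud CLI's `ibmcloud iam` user-policy commands; the user ID and policy ID are placeholders taken from the alert.

```
# Sketch (assumes the IBM Cloud CLI): list a user's access policies, then remove the one
# granting Administrator on Cloud Object Storage. 'user@example.com' and POLICY_ID are placeholders.
ibmcloud login

ibmcloud iam user-policies user@example.com

ibmcloud iam user-policy-delete user@example.com POLICY_ID -f
```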
```config from cloud.resource where cloud.account = 'Aws_sand_2743_Dipankar_Again' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = 'status.recording is true and status.lastStatus equals SUCCESS and recordingGroup.allSupported is true' as X; config from cloud.resource where api.name = 'aws-region' AND json.rule = optInStatus equals opted-in or optInStatus equals opt-in-not-required as Y; filter '$.X.region equals $.Y.regionName'; show X; count(X) less than 1```,"NSK test AWS config recorder test This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.account = 'Bikram-Personal-AWS Account' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","bikram-test-public-s3-bucket bikram-test-public-s3-bucket This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 1433 or fromPort == 1433) or (toPort > 1433 and fromPort < 1433)))] exists)```,"Copy of AWS Security Group allows all traffic on SSH port (22) This policy identifies Security groups that allow all traffic on SSH port 22. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Group reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. 
Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 22 (or range containing 22)." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dynamodb-describe-table' AND json.rule = 'ssedescription does not exist or (ssedescription exists and ssedescription.ssetype == AES256)'```,"AWS DynamoDB encrypted using AWS owned CMK instead of AWS managed CMK This policy identifies the DynamoDB tables that use AWS owned CMK (default ) instead of AWS managed CMK (KMS ) to encrypt data. AWS managed CMK provide additional features such as the ability to view the CMK and key policy, and audit the encryption and decryption of DynamoDB tables. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'DynamoDB' dashboard\n4. Select the reported table from the list of DynamoDB tables\n5. In 'Overview' tab, go to 'Table Details' section\n6. Click on the 'Manage Encryption' link available for 'Encryption Type'\n7. On 'Manage Encryption' pop up window, Select 'KMS' as the encryption type.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals VirtualMachines and properties.pricingTier equal ignore case Standard and properties.subPlan equal ignore case P2)] does not exist or pricings[?any(name equals Dns and properties.deprecated is false and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud set to Off for DNS This policy identifies Azure Microsoft Defender for Cloud which has a defender setting for DNS set to Off. Enabling Azure Defender for the cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behavior analytics. Defender for DNS monitors the queries and detects suspicious activities without the need for any additional agents on your resources. It is highly recommended to enable Azure Defender for DNS. Note: This policy does check for classic Defender for DNS configuration. If Defender for Servers Plan 2 is enabled, the defender setting for DNS will be set by default. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: For customers who are using Microsoft Defender for Servers Plan 2:\n\n1. Go to Microsoft Defender for Cloud\n2. Select Environment Settings\n3. Click on the subscription name\n4. Select the Defender plans\n5. Ensure Status is set to On for Servers Plan 2\n\nFor customers who are using Microsoft Defender for Servers Plan 1:\n\n1. Go to Microsoft Defender for Cloud\n2. Select Environment Settings\n3. Click on the subscription name\n4. Select the Defender plans\n5. Ensure Status is set to On for DNS.." 
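For the DynamoDB encryption policy above, switching a table from the AWS owned key to the AWS managed KMS key can also be done with the AWS CLI. A minimal sketch, with 'example-table' as a placeholder table name:

```
# Sketch: switch server-side encryption to the AWS managed KMS key (alias aws/dynamodb).
# 'example-table' is a placeholder table name.
aws dynamodb update-table \
  --table-name example-table \
  --sse-specification Enabled=true,SSEType=KMS

# Confirm the SSEDescription now reports SSEType KMS
aws dynamodb describe-table --table-name example-table --query 'Table.SSEDescription'
```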
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = serverAdmins does not exist or serverAdmins[*] size equals 0 or (serverAdmins[*].properties.administratorType exists and serverAdmins[*].properties.administratorType does not equal ActiveDirectory and serverAdmins[*].properties.login is not empty)```,"Azure SQL server not configured with Active Directory admin authentication This policy identifies Azure SQL servers that are not configured with Active Directory admin authentication. Azure Active Directory authentication is a mechanism of connecting to Microsoft Azure SQL Database and SQL Data Warehouse by using identities in Azure Active Directory (Azure AD). With Azure AD authentication, you can centrally manage the identities of database users and other Microsoft services in one central location. It is recommended to configure SQL servers with Active Directory admin authentication. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate SQL servers dashboard\n3. Select reported each SQL server\n4. Click on Azure Active Directory (under 'Settings')\n5. Click on 'Set admin'\n6. Select an Azure Active Directory from available options\n7. Click on Select\n8. Click on Save." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and authEnabled is false```,"GCP Memorystore for Redis instance has AUTH disabled This policy identifies GCP Memorystore for Redis instances having AUTH disabled. GCP Memorystore for Redis is a fully managed in-memory data store that simplifies Redis deployment and scaling while ensuring high availability and low-latency access. When AUTH is disabled, any client that can reach the Redis instance over the network can freely connect and perform operations without providing any credentials, creating a significant security risk to your data. It is recommended to enable authentication (AUTH) on the GCP Memorystore for Redis to ensure only authorized clients can connect. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Google Cloud Management Console. Navigate to the 'Memorystore for Redis' page\n2. Under 'Instances', click on the reported instance.\n3. Select 'EDIT' on the top navigation bar\n4. Under 'Edit Redis instance' page, under 'Security', select the 'Enable AUTH' checkbox\n5. Click on 'SAVE'.." 
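The Memorystore for Redis remediation above can likewise be applied from the command line. A minimal gcloud sketch, with the instance name and region as placeholders; note that once AUTH is enabled, clients must supply the generated AUTH string.

```
# Sketch: enable AUTH on an existing Memorystore for Redis instance
# ('example-redis' and 'us-central1' are placeholders).
gcloud redis instances update example-redis \
  --region=us-central1 \
  --enable-auth

# Retrieve the generated AUTH string for client configuration
gcloud redis instances get-auth-string example-redis --region=us-central1
```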
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' AND json.rule = status.state does not contain TERMINATING as X; config from cloud.resource where api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 8088 or fromPort == 8088) or (toPort > 8088 and fromPort < 8088)))] exists) as Y; filter '$.X.ec2InstanceAttributes.emrManagedMasterSecurityGroup equals $.Y.groupId or $.X.ec2InstanceAttributes.additionalMasterSecurityGroups[*] contains $.Y.groupId'; show X;```,"AWS EMR cluster Master Security Group allows all traffic to port 8088 This policy identifies AWS EMR cluster which has Master Security Group which allows all traffic to port 8088. Exposing port 8088 to all traffic exposes web interfaces of the master node of an EMR Cluster. This configuration is highly susceptible to EMR cluster hijacking attacks. It is highly recommended limiting the access for the EMR cluster attached Master Security Group to your IP only or configure SSH Tunnel. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services.\n\n1. Log in to the AWS Console\n2. Select Clusters in left side pane\n3. Select the EMR Cluster reported in the alert\n4. Select the Security groups for Master link under Security and access\n5. Choose ElasticMapReduce-master from the list \n6. Click on the 'Inbound Rule'\n7. Delete the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 8088 (or range containing 8088)." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' as X; config from cloud.resource where api.name = 'aws-rds-describe-db-parameter-groups' AND json.rule = (((dbparameterGroupFamily starts with ""postgres"" or dbparameterGroupFamily contains ""sqlserver"") and (['parameters'].['rds.force_ssl'].['parameterValue'] does not equal 1 or ['parameters'].['rds.force_ssl'].['parameterValue'] does not exist)) or ((dbparameterGroupFamily starts with ""mariadb"" or dbparameterGroupFamily starts with ""mysql"") and (parameters.require_secure_transport.parameterValue does not equal 1 or parameters.require_secure_transport.parameterValue does not exist)) or (dbparameterGroupFamily contains ""db2-ae"" and (parameters.db2comm.parameterValue does not equal ignore case ""SSL"" or parameters.db2comm.parameterValue does not exist))) as Y; filter '$.X.dbparameterGroups[*].dbparameterGroupArn equals $.Y.dbparameterGroupArn' ; show X;```","AWS RDS database instance not configured with encryption in transit This policy identifies AWS RDS database instances that are not configured with encryption in transit. This covers MySQL, SQL Server, PostgreSQL, MariaDB, and DB2 RDS instances. Enabling encryption is crucial to protect data as it moves through the network and enhances the security between clients and storage servers. Without encryption, sensitive data transmitted between your application and the database is vulnerable to interception by malicious actors. This could lead to unauthorized access, data breaches, and potential compromises of confidential information. It is recommended that data be encrypted while in transit to ensure its security and reduce the risk of unauthorized access or data breaches. 
This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the in-transit encryption feature for your Amazon RDS databases, perform the following actions:\n\nDefault parameter groups for RDS DB instances cannot be modified. Therefore, you must create a custom parameter group, modify it, and then attach it to your RDS for DB instances. Changes to parameters in a customer-created DB parameter group are applied to all DB instances that are associated with the DB parameter group.\n\nFollow the below links to create and associate a DB parameter group with a DB instance,\n\nTo Create a DB parameter group, refer to the below link\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating\n\nTo Associating a DB parameter group with a DB instance, refer the below link\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Associating\n\nTo Modifying parameters in a DB parameter group,\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.\n2. In the navigation pane, choose 'Parameter Groups'.\n3. In the list, choose the parameter group that is associated with the RDS instance.\n4. For Parameter group actions, choose 'Edit'.\n5. Change the values of the parameters that you want to modify. You can scroll through the parameters using the arrow keys at the top right of the dialog box.\n6. In the 'Modifiable parameters' section, enter 'rds.force_ssl' in the Filter Parameters search box for SQL Server and PostgreSQL databases, and type 'require_secure_transport' in the search box for MySQL and MariaDB databases and type DB2COMM for DB2 databases.\n a. For the 'rds.force_ssl' database parameter, enter '1' in the Value configuration box to enable the Transport Encryption feature. \n or\n b. For the 'require_secure_transport' parameter, enter '1' in the Value configuration box to enable the Transport Encryption feature.\n or\n c. For the 'DB2COMM' parameter, enter 'SSL' in the Value box based on the allowed values to enable Transport Encryption.\n7. Choose Save changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-get-stages' AND json.rule = webAclArn does not exist or webAclArn does not start with arn:aws:wafv2```,"AWS API Gateway REST API not configured with AWS Web Application Firewall v2 (AWS WAFv2) This policy identifies AWS API Gateway REST API which is not configured with AWS Web Application Firewall. As a best practice, enable the AWS WAF service on API Gateway REST API to protect against application layer attacks. To block malicious requests to your API Gateway REST API, define the block criteria in the WAF web access control list (web ACL). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Make sure your the reported API Gateway REST API requires WAF based on your requirement and Note down the API Gateway REST API name\n\nFollow steps given in below URL to associate API Gateway REST API to WAF Web ACL ,\nhttps://docs.aws.amazon.com/waf/latest/developerguide/web-acl-associating-aws-resource.html." 
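For the API Gateway REST API WAF policy above, the association can also be made with the AWS CLI once a WAFv2 web ACL exists. A minimal sketch; the web ACL ARN, REST API id, stage name, and region are placeholders.

```
# Sketch: associate an existing WAFv2 web ACL (REGIONAL scope) with an API Gateway REST API stage.
# The web ACL ARN, REST API id ('a1b2c3d4e5'), and stage name ('prod') are placeholders.
aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:us-east-1:123456789012:regional/webacl/example-acl/11111111-2222-3333-4444-555555555555 \
  --resource-arn arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod

# Verify the association
aws wafv2 get-web-acl-for-resource \
  --resource-arn arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod
```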
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-networkfirewall-firewall' AND json.rule = FirewallStatus.Status equals READY and Firewall.DeleteProtection is false```,"AWS Network Firewall delete protection is disabled This policy identifies the AWS Network Firewall for which delete protection is disabled. AWS Network Firewall manages inbound and outbound traffic for the AWS resources within Virtual Private Clouds (VPCs). The deletion protection setting protects against accidental deletion of the firewall. Deletion of a firewall increases the risk of unauthorized access, data breaches, and compliance issues. It is recommended to enable deletion protection for a network firewall to safeguard against accidental deletion. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable delete protection on an AWS Network Firewall, perform the following actions:\n\n1. Log into the AWS console\n2. Select the specific region from the drop-down in the top right corner for which the alert is generated\n3. Navigate to VPC Dashboard\n4. In the navigation pane, Under 'Network Firewall', choose 'Firewalls'\n5. On the Firewalls page, select the reported firewall\n6. In the 'Firewall details' tab, under the 'Change protections' section, click on 'Edit'\n7. In the pop-up window, choose the 'Enable' checkbox under the 'Delete protection' option\n8. Click on 'Save' to save the changes." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-subnets-list' AND json.rule = purpose does not contain INTERNAL_HTTPS_LOAD_BALANCER and purpose does not contain REGIONAL_MANAGED_PROXY and purpose does not contain GLOBAL_MANAGED_PROXY and purpose does not contain PRIVATE_SERVICE_CONNECT and (enableFlowLogs is false or enableFlowLogs does not exist)```,"GCP VPC Flow logs for the subnet is set to Off This policy identifies the subnets in VPC Network which have Flow logs disabled. Flow logs enable the capturing of information about the IP traffic going to and from network interfaces in VPC Subnets. It is recommended to enable the flow logs which can be used for network monitoring, forensics, real-time security analysis. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Goto VPC Network (on Left Panel)\n3. Select the reported VPC network and then click on the alerted subnet\n4. On 'Subnet details' page, click on 'EDIT'\n5. Set 'Flow Logs' to value 'On'\n6. Click on 'SAVE'\nFor more information, refer : https://cloud.google.com/vpc/docs/using-flow-logs#enable-subnet." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireLowercaseCharacters is false or requireLowercaseCharacters does not exist'```,"AWS IAM password policy does not have a lowercase character Checks to ensure that IAM password policy requires a lowercase character. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, click on 'Account Settings'\n3. Check 'Require at least one lowercase letter'.\n4. Click on 'Apply password policy'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'status equals ISSUED and (_DateTime.ageInDays($.notAfter) > -31)'```,"AWS Certificate Manager (ACM) has certificates expiring in 30 days or less This policy identifies ACM certificates expiring in 30 days or less, which are in the AWS Certificate Manager. If SSL/TLS certificates are not renewed prior to their expiration date, they will become invalid and the communication between the client and the AWS resource that implements the certificates is no longer secure. As a best practice, it is recommended to renew certificates before their validity period ends. AWS Certificate Manager automatically renews certificates issued by the service that are used with other AWS resources. However, the ACM service does not automatically renew certificates that are not in use or are no longer associated with other AWS resources. So the renewal process must be done manually before these certificates become invalid. NOTE: If you want to be notified at a threshold other than 30 days, you can clone this policy and replace '30' in the RQL with your desired number of days. For example, 15 days OR 7 days, which will alert on certificates expiring in 15 days or less OR 7 days or less respectively. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to the Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Verify that the 'Status' column shows 'Issued' for the reported certificate\n6. Under 'Actions' drop-down select 'Reimport certificate' option\n7. On the Import a certificate page, perform the following actions:\n7a. In 'Certificate body*' box, paste the PEM-encoded certificate to import, purchased from your SSL certificate provider.\n7b. In 'Certificate private key*' box, paste the PEM-encoded, unencrypted private key that matches the SSL/TLS certificate public key.\n7c. (Optional) In 'Certificate chain' box, paste the PEM-encoded certificate chain delivered with the certificate body specified at step 7a.\n8. Click on 'Review and import' button\n9. On the Review and import page, review the imported certificate details then click on 'Import'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-instances-container-group' AND json.rule = properties.provisioningState equals Succeeded and properties.containers[*].properties.environmentVariables[*] exists and properties.containers[*].properties.environmentVariables[*].value exists```,"Azure Container Instance environment variable with regular value type This policy identifies Azure Container Instances (ACI) in which environment variables are configured with the regular value type instead of the secure value property. Objects with secure values are intended to hold sensitive information like passwords or keys for your application. 
Using secure values for environment variables is both safer and more flexible than including them in your container's image. So it is recommended to secure the environment variable by specifying the 'secureValue' property instead of the regular 'value' for the variable's type. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Environment variables can only be configured with secure values at the time of container instance creation. It is not possible to modify environment variables once instance is created. Hence, it is suggested to delete an existing container instance having not configured with secure values and create a new container instance having required environment variables configured with secure values.\nNote: Backup or migrate data from the container instance before deleting it.\n\nTo create a container instance with environment variables with secure value property; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables#secure-values\n\nTo delete a reported container instance; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal#clean-up-resources." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.permissionGrantPoliciesAssigned[*] does not contain ""ManagePermissionGrantsForSelf.microsoft-user-default-low""```","Azure Microsoft Entra ID users can consent to apps accessing company data on their behalf not set to verified publishers This policy identifies instances in the Microsoft Entra ID configuration where users in your Azure Microsoft Entra ID (formerly Azure Active Directory) can consent to applications accessing company data on their behalf, even if the applications are not from verified publishers. Allowing unverified applications to access company data increases the likelihood of data breaches and unauthorized access, which could lead to the exposure of confidential information. Using unverified applications can lead to non-compliance with data protection regulations and undermine trust in the organization's data handling practices. As a best practice, it is recommended to configure the user consent settings to restrict access only to applications from verified publishers. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Select 'Enterprise Applications'\n4. Select 'Consent and permissions'\n5. Select 'User consent settings'\n6. Under User consent for applications, select 'Allow user consent for apps from verified publishers, for selected permissions (Recommended)'\n7. Select Save." "```config from cloud.resource where api.name = 'aws-ecs-service' AND json.rule = networkConfiguration.awsvpcConfiguration.assignPublicIp exists and networkConfiguration.awsvpcConfiguration.assignPublicIp equal ignore case ""ENABLED""```","AWS ECS services have automatic public IP address assignment enabled This policy identifies whether Amazon ECS services are configured to assign public IP addresses automatically. 
Assigning public IP addresses to ECS services may expose them to the internet. If the services are not adequately secured or have vulnerabilities, they could be susceptible to unauthorized access, DDoS attacks, or other malicious activities. It is recommended that the Amazon ECS environment not have an associated public IP address except for limited edge cases. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable auto-assign public IP for an ECS Service:\n\n1. Use the AWS CLI or AWS API, as you cannot update network configurations for an ECS Service using the AWS Management Console.\n\n2. Run the update-service command in the AWS CLI to disable auto-assign public IP for an ECS Service\n aws ecs update-service --cluster <cluster-name> --service <service-name> --network-configuration ""awsvpcConfiguration={subnets=[string, string],securityGroups=[string, string],assignPublicIp=DISABLED}""\nPlease refer to the below URL:\nhttps://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_validate_compliance_hyperion_policy_ss_finding_2 Description-0b771ac4-26e0-4857-8391-b8e39e24555b This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'privateClusterConfig.enablePrivateNodes does not exist or privateClusterConfig.enablePrivateNodes is false'```,"GCP Kubernetes Engine Clusters not configured with private nodes feature This policy identifies Google Kubernetes Engine (GKE) Clusters which are not configured with the private nodes feature. The private nodes feature makes your master inaccessible from the public internet and nodes do not have public IP addresses, so your workloads run in an environment that is isolated from the internet. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes private node feature can be enabled at the time of cluster creation. So to fix this alert, create a new cluster with the private node feature enabled on it, migrate all required data from the reported cluster to the newly created cluster, and delete the reported Kubernetes engine cluster.\n\nTo create a new Kubernetes engine cluster with the private node feature enabled, perform the following: \n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on CREATE CLUSTER button\n5. Click on 'Advanced options'\n6. Under the Networking section, Check the 'Enable VPC-native (using alias IP)' option\n7. Choose the required Network, Node subnet parameters\n8. From Network security, select the Private cluster check box.\n9. To create a master that is accessible from authorized external IP ranges, keep the 'Access master using its external IP address' checkbox selected.\n10. Set 'Master IP range' as per your required IP range\n11. 
Click on 'Create'\nNOTE: When you create a private cluster, you must specify a /28 CIDR range for the VMs that run the Kubernetes master components.\n\nTo delete the reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on the reported Kubernetes cluster\n5. Click on the DELETE button\n6. On 'Delete a cluster' popup dialog, Click on DELETE to confirm the deletion of the cluster.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = type equals application and ['attributes'].['routing.http.drop_invalid_header_fields.enabled'] is false```,"AWS Application Load Balancer (ALB) is not configured to drop HTTP headers This policy identifies AWS Application Load Balancers that are not configured to drop HTTP headers. AWS Application Load Balancers distribute incoming HTTP/HTTPS traffic across multiple targets such as EC2 instances, containers, and Lambda functions, based on routing rules and health checks. By default, ALBs are not configured to drop invalid HTTP header values, which can leave the load balancer vulnerable to HTTP desync attacks. HTTP desync attacks manipulate request headers to exploit inconsistencies between servers, potentially leading to security vulnerabilities and unauthorized access. It is recommended to enable this feature to prevent the load balancer from forwarding requests with invalid HTTP headers and mitigate potential security vulnerabilities. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the application load balancer to drop invalid HTTP header fields, perform the following actions:\n\n1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/\n2. In the navigation pane, choose 'Load balancers'\n3. Choose the reported Application Load Balancer \n4. From 'Actions', choose 'Edit load balancer attributes' \n5. Enable the 'Drop invalid header fields' option\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-guardduty-detector' AND json.rule = status does not equal ENABLED```,"AWS GuardDuty detector is not enabled This policy identifies the AWS GuardDuty detector that is not enabled in specific regions. GuardDuty identifies potential security threats in the AWS environment by analyzing data collected from various sources. The GuardDuty detector is the entity within the GuardDuty service that does this analysis. Failure to enable GuardDuty increases the risk of undetected threats and vulnerabilities which could lead to compromises in the AWS environment. It is recommended to enable GuardDuty detectors in all regions to reduce the risk of security breaches. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Amazon GuardDuty in the region,\n1. Log in to the AWS console.\n2. In the console, select the specific region from the region drop-down menu located at the top right corner for which the alert has been generated.\n3. Navigate to service 'Amazon GuardDuty' from the 'Services' Menu.\n4. Choose 'Get Started'.\n5. Choose 'Enable GuardDuty' to enable on a specific region.\n\nTo re-enable Amazon GuardDuty after suspending,\n1. 
Log in to the AWS console.\n2. In the console, select the specific region from the region drop-down menu located at the top right corner for which the alert has been generated.\n3. Navigate to service 'Amazon GuardDuty' from the 'Services' Menu.\n4. In the navigation pane, choose 'Settings'.\n5. Choose 'Re-enable GuardDuty' to re-enable on a specific region.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-instance' AND json.rule = 'ramRoleName is empty'```,"Alibaba Cloud ECS instance RAM role not enabled This policy identifies ECS instances for which the Resource Access Management (RAM) role is not enabled. Alibaba Cloud provides RAM roles to securely access Alibaba Cloud services and resources. As a best practice, create RAM roles and attach the role to manage ECS instance permissions securely instead of distributing or sharing keys or passwords. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. Select the reported ECS instance\n5. Select More > Instance Settings > Bind/Unbind RAM Role\n6. Select a required RAM Role\nNOTE: If a RAM role has not been created already, create a new RAM role and follow the same procedure to attach it.\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = policy.Statement[?any((Principal equals * or Principal.AWS contains *) and Effect equals Allow and Condition does not exist)] exists```,"AWS Private ECR repository policy is overly permissive This policy identifies AWS Private ECR repositories that have overly permissive registry policies. An ECR (Elastic Container Registry) repository is a collection of Docker images available on the AWS cloud. These images might contain sensitive information which should not be accessible to unauthorized users. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'ECR' dashboard from 'Services' dropdown\n4. Go to 'Repository', from the left panel\n5. Select the repository for which the alert is being generated\n6. Select the 'Permissions' option from the left menu below 'Repositories'\n7. Click on 'Edit policy JSON' to modify the JSON so that Principal is restrictive\n8. After modifications, click on 'Save'.." ```config from cloud.resource where api.name = 'azure-app-service-basic-publishing-credentials-policies' AND json.rule = properties.allow is true as X; config from cloud.resource where api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running as Y; filter '$.X.id contains $.Y.id'; show Y;```,"Azure App Service basic authentication enabled This policy identifies Azure App Services which have basic authentication enabled. 
Basic Authentication allows local identity management for App Services without using a centralized identity provider like Azure Entra ID, posing a security risk by creating isolated identity systems that lack centralized control and are vulnerable to credential compromise and unauthorized access. Disabling Basic Authentication and integrating with a centralized solution like Azure Entra ID enhances security with stronger authentication, improved access management, and reduced attack risks. As a security best practice, it is recommended to disable basic authentication for Azure App Services. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App Service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Under the 'General settings' tab, scroll down to locate the two Basic Auth settings:\n - Set the 'SCM Basic Auth Publishing Credentials' radio button to Off\n - Set the 'FTP Basic Auth Publishing Credentials' radio button to Off\n6. At the top, click on 'Save'\n7. Click 'Continue' to save the changes." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.securityGroupIds[*] size greater than 1```,"AWS EKS cluster control plane assigned multiple security groups Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). This policy checks the number of security groups assigned to your cluster's control plane and alerts if there are more than one. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Create a single dedicated VPC security group for your EKS cluster control plane.\n\nFrom the AWS console a security group cannot be added to, nor removed from, a Kubernetes cluster once it is created. To resolve this alert, create a new cluster with a single dedicated security group as per your requirements, then migrate all required cluster data from the reported cluster to this newly created cluster and delete the reported Kubernetes cluster.\n\n1. Open the Amazon EKS dashboard.\n2. Choose Create cluster.\n3. On the Create cluster page, fill in the following fields:\n\n- Cluster name\n- Kubernetes version\n- Role name\n- VPC\n- Subnets\n- Security Groups - Choose your new dedicated control plane security group.\n- Endpoint private access\n- Endpoint public access\n- Logging\n\n4. Choose Create.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case ""/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace""```","test-p3 This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
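For the AWS EKS control-plane security group policy above, the following is a minimal detection sketch in Python using boto3 (not part of the policy dataset); it assumes AWS credentials and a default region are already configured in the environment, and it simply mirrors the RQL rule `resourcesVpcConfig.securityGroupIds[*] size greater than 1`.

```python
# Minimal sketch (assumes boto3 is installed and AWS credentials/region are
# configured): flag EKS clusters whose control plane has more than one
# security group, mirroring "resourcesVpcConfig.securityGroupIds[*] size greater than 1".
import boto3


def clusters_with_multiple_control_plane_sgs():
    eks = boto3.client("eks")
    flagged = []
    for page in eks.get_paginator("list_clusters").paginate():
        for name in page["clusters"]:
            cluster = eks.describe_cluster(name=name)["cluster"]
            sg_ids = cluster.get("resourcesVpcConfig", {}).get("securityGroupIds", [])
            if len(sg_ids) > 1:
                flagged.append((name, sg_ids))
    return flagged


if __name__ == "__main__":
    for name, sgs in clusters_with_multiple_control_plane_sgs():
        print(f"{name}: {len(sgs)} control-plane security groups -> {sgs}")
```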
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-subscription-tenantpolicy' AND json.rule = properties.blockSubscriptionsIntoTenant is false or properties.blockSubscriptionsLeavingTenant is false```,"Azure subscription permission for Microsoft Entra tenant is set to 'Allow everyone' This policy identifies Microsoft Entra tenants that are not configured with restrictions for 'Subscription entering Microsoft Entra tenant' and 'Subscription leaving Microsoft Entra tenant'. Users who are set as subscription owners can make administrative changes to the subscriptions and move them into and out of the Microsoft Entra tenant. Allowing subscriptions to enter or leave the Microsoft Entra tenant without restrictions can expose the organization to unauthorized access and potential security breaches. As a best practice, it is recommended to configure the settings for 'Subscription entering Microsoft Entra tenant' and 'Subscription leaving Microsoft Entra tenant' to 'Permit no one' to ensure only authorized subscriptions can interact with the tenant, thus enhancing the security of your Azure environment. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure policy settings to control the movement of Azure subscriptions from and into a Microsoft Entra tenant, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/cost-management-billing/manage/manage-azure-subscription-policy#setting-subscription-policy." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING and privateClusterConfig exists and privateClusterConfig.enablePrivateEndpoint does not exist```,"GCP Kubernetes Engine private cluster has private endpoint disabled This policy identifies GCP Kubernetes Engine private clusters with private endpoint disabled. A public endpoint might expose the current cluster and Kubernetes API version, and an attacker may be able to determine whether it is vulnerable to an attack. Unless required, disabling the public endpoint will help prevent such threats, and require the attacker to be on the master's VPC network to perform any attack on the Kubernetes API. It is recommended to enable the private endpoint and disable public access on Kubernetes clusters. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Once a cluster is created without enabling Private Endpoint, it cannot be remediated. Rather, the cluster must be recreated. \nTo create the private cluster with public access disabled, refer to the below link,\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp\n\nTo resolve the alert, ensure deletion of the old cluster after the new private cluster is created and is in running state and once all the data has been migrated.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ssm-parameter' AND json.rule = 'type does not contain SecureString'```,"AWS SSM Parameter is not encrypted This policy identifies the AWS SSM Parameters which are not encrypted. AWS Systems Manager (SSM) parameters that store sensitive data, for example, passwords, database strings, and permit codes, should be encrypted to meet security and compliance requirements. 
An encrypted SSM parameter is any sensitive information that should be kept and referenced in a protected way. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to System Manager\n3. In the navigation panel, Click on 'Parameter Store'\n4. Choose the reported parameter and port it to a new parameter with Type 'SecureString'\n5. Delete the reported parameter by clicking on 'Delete'\n6. Click on 'Delete parameters'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"Copy of Copy of Copy of build information This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = description.availabilityZones[*] size less than 2```,"AWS Classic Load Balancer not configured to span multiple Availability Zones This policy identifies AWS Classic Load Balancers that are not configured to span multiple Availability Zones. Classic Load Balancer would not be able to redirect traffic to targets in another Availability Zone if the sole configured Availability Zone becomes unavailable. As best practice, it is recommended to configure Classic Load Balancer to span multiple Availability Zones. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure AWS Classic Load Balancer to span multiple Availability Zones follow the steps mentioned in below URL:\n\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html#add-availability-zone." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-instances-container-group' AND json.rule = properties.provisioningState equals Succeeded and (identity.type does not exist or (identity.type exists and identity.type equal ignore case None))```,"Azure Container Instance not configured with the managed identity This policy identifies Azure Container Instances (ACI) that are not configured with the managed identity. The managed identity is authenticated with Azure AD, developers don't have to store any credentials in code. So It is recommended to configure managed identity on all your container instances. For more details: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable managed identity on your container instance; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity." 
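As a companion to the AWS SSM Parameter encryption policy above, here is a rough boto3 sketch (not the product's implementation) that lists parameters whose type is not SecureString, mirroring the RQL rule `type does not contain SecureString`; credential and region configuration are assumed to be handled by the environment.

```python
# Rough sketch (assumes boto3 is installed and AWS credentials/region are
# configured): list SSM parameters that are not stored as SecureString.
import boto3


def unencrypted_ssm_parameters():
    ssm = boto3.client("ssm")
    flagged = []
    for page in ssm.get_paginator("describe_parameters").paginate():
        for param in page["Parameters"]:
            if param["Type"] != "SecureString":
                flagged.append(param["Name"])
    return flagged


if __name__ == "__main__":
    for name in unencrypted_ssm_parameters():
        print(f"SSM parameter not encrypted (not SecureString): {name}")
```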
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals ""ACTIVE"" and shieldedInstanceConfig.enableIntegrityMonitoring is false```","GCP Vertex AI Workbench user-managed notebook has Integrity monitoring disabled This policy identifies GCP Vertex AI Workbench user-managed notebooks that have Integrity monitoring disabled. Integrity Monitoring continuously monitors the boot integrity, kernel integrity, and persistent data integrity of the underlying VM of the shielded user-managed notebooks. It detects unauthorized modifications or tampering, enhancing security by verifying the trusted state of VM components throughout their lifecycle. It provides active alerting allowing administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster. It is recommended to enable integrity monitoring for user-managed notebooks to detect and mitigate advanced threats like rootkits and bootkit malware. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service (Left Panel)\n3. Under 'Notebooks', go to 'Workbench'\n4. Open the 'USER-MANAGED NOTEBOOKS' tab\n5. Click on the alerting notebook\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Enable 'Turn on Integrity Monitoring'\n11. Click on 'Save'\n12. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipProtocol equals tcp or ipProtocol equals icmp or ipProtocol equals icmpv6 or ipProtocol equals udp) and (ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0))] exists)```,"Copy of navnon-onboarding-policy navnon-onboarding-policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.jitNetworkAccessMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with ""ASC Default""))'```","Azure Microsoft Defender for Cloud JIT network access monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have JIT network access monitoring set to disabled. Enabling JIT Network Access will enhance the protection of VMs by creating a Just in Time VM. The JIT VM with NSG rule will restrict the availability of access to the ports to connect to the VM for a pre-set time and only after checking the Role Based Access Control permissions of the user. This feature will control the brute force attacks on the VMs. This is applicable to azure cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Management ports of virtual machines should be protected with just-in-time network access control' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"Copy of Copy of build information This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles contains roles/editor or roles contains roles/owner and (user does not start with g-bootstrap-svcacct-terraform and user does not equal ""g-devops-admin@cna.com"" and user does not equal ""g-atos-devsecops@cna.com"" and user does not contain ""iam.gserviceaccount.com"") and (user does not contain ""appspot"" and user does not contain ""cloud"" and user does not contain ""developer"")```","GM-Mukhtar-AyawDaw This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-datastores' AND json.rule = (properties.datastoreType equal ignore case AzureFile or properties.datastoreType equal ignore case AzureBlob) and properties.credentials.credentialsType equal ignore case AccountKey```,"Azure Machine Learning workspace Storage account Datastore using Account key based authentication This policy identifies Azure Machine Learning workspace datastores that use storage account keys for authentication. Account key-based authentication is a security risk because it grants full, unrestricted access to the storage account, including the ability to read, write, and delete all data. If compromised, attackers can control all data in the account. This method lacks permission granularity and time limits, increasing the risk of exposing sensitive information. Using SAS tokens provides more granular control, allowing you to limit access to specific resources and set time-bound access, which enhances security and reduces risks in production environments. As a security best practice, it is recommended to use SAS tokens for authenticating Azure Machine Learning datastores. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported Datastore is associated with\n4. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n5. A new tab will open for Azure ML Studio\n6. In the left panel, under 'Assets' section, click on the 'Data'\n7. 
Select the 'Datastores' tab at the top\n8. Click on the reported Datastore\n9. Click on the 'Update authentication' tab at the top\n10. A side panel will appear on the right; configure the 'Authentication type' as 'SAS token' and enter the token value\n11. Click 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = 'isLegacy is true and properties.isCapturingLogsForAllRegions is false'```,"Azure log profile not capturing activity logs for all regions This policy identifies Azure log profiles that are not capturing activity logs for all regions. Exporting activity logs from all the Azure-supported regions/locations ensures that logs for potentially unexpected activities occurring in otherwise unused regions are stored and made available for incident response and investigations. Note: Since this type of logging is not deprecated from the Cloud service provider yet, we support it until it is removed. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Execute the command to check the number of regions present on the account: az account list-locations --query '[*].name' | grep -P '\w+' | wc -l\n2. Execute the command to check the number of regions added to the log profile: az monitor log-profiles list --query '[*].locations' | grep -P '\w+' | wc -l\n3. If there is a difference between the counts from step 1 and step 2, execute the command to list all regions: az account list-locations --query '[*].name'\n4. Use the listed regions from step 3 and update the legacy log profile to capture activity logs for all regions by following the below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=cli#managing-legacy-log-profiles." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-data-factory-v2' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled```,"Azure Data Factory (V2) configured with overly permissive network access This policy identifies Data factories (V2) configured with overly permissive network access. A Data factory managed virtual network along with managed private endpoints protects against data exfiltration. It is recommended to configure the Data factory with a private endpoint so that the Data factory is accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Data factories'\n3. Click on the reported Data factory\n4. Select 'Networking' under 'Settings' from the left panel \n5. In the 'Private endpoint connections' tab, create a private endpoint as per your requirement.\n6. Once the private endpoint is created, in the 'Network access' tab, select 'Private endpoint'\n7. Click on 'Save'."
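For the Azure Data Factory (V2) network access policy above, the sketch below approximates the check with the Azure Python SDK; it assumes the azure-identity and azure-mgmt-datafactory packages, a subscription ID supplied by the caller, and that the installed SDK version exposes the factory's public_network_access property (treat that attribute as an assumption rather than a guaranteed API surface).

```python
# Rough sketch, not the product's implementation: list Data Factory (V2)
# instances whose public network access is enabled, approximating the RQL rule
# "properties.publicNetworkAccess equal ignore case Enabled".
# Assumptions: azure-identity and azure-mgmt-datafactory are installed, the
# credential has Reader access, and the Factory model exposes public_network_access.
import sys

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient


def factories_with_public_access(subscription_id: str):
    client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)
    flagged = []
    for factory in client.factories.list():
        # public_network_access is typically "Enabled" or "Disabled"; it may be
        # absent on older API versions, hence the defensive getattr.
        if (getattr(factory, "public_network_access", None) or "").lower() == "enabled":
            flagged.append(factory.name)
    return flagged


if __name__ == "__main__":
    for name in factories_with_public_access(sys.argv[1]):
        print(f"Data Factory with public network access enabled: {name}")
```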
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service-environment' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.clusterSettings exists and properties.clusterSettings[?any(name equal ignore case FrontEndSSLCipherSuiteOrder)] does not exist```,"Azure App Service Environment configured with weak TLS cipher suites This policy identifies Azure App Service Environments that are configured with weak TLS Cipher suites. Azure App Service Environments host web applications and APIs in a dedicated and isolated environment. When these environments are configured with weak TLS Cipher suites, they can expose sensitive data to potential security risks. Weak cipher suites may allow attackers to intercept and decrypt communication between clients and the App Service Environment, leading to unauthorized access, data breaches, and potential compliance violations. The recommended cipher suites are TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. As best practice, it is recommended to avoid using weak TLS Cipher suites to enhance security and protect sensitive data. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the documentation:\nhttps://learn.microsoft.com/en-us/azure/app-service/environment/app-service-app-service-environment-custom-settings." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equal ignore case Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and access equal ignore case Allow and direction equal ignore case Inbound and ((protocol equal ignore case Tcp and (destinationPortRange contains * or destinationPortRange contains _Port.inRange(80,80) or destinationPortRange contains _Port.inRange(443,443) or destinationPortRanges any equal * or destinationPortRanges[*] contains _Port.inRange(80,80) or destinationPortRanges contains _Port.inRange(443,443) )) or (protocol contains * and (destinationPortRange contains _Port.inRange(80,80) or destinationPortRange contains _Port.inRange(443,443) or destinationPortRanges[*] contains _Port.inRange(80,80) or destinationPortRanges contains _Port.inRange(443,443) ))) )] exists```","Azure Network Security Group having Inbound rule overly permissive to HTTP(S) traffic This policy identifies Network Security Groups (NSGs) that have inbound rules allowing overly permissive access to HTTP or HTTPS traffic. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. Overly permissive inbound rules for HTTP(S) traffic increase the risk of unauthorized access and potential attacks on your network resources. This can lead to data breaches, exposure of sensitive information, and other security incidents. As a best practice, it is recommended to configure NSGs to restrict HTTP(S) traffic to only necessary and trusted IP addresses. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. 
Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-sync-service' AND json.rule = properties.provisioningState equals Succeeded and properties.incomingTrafficPolicy equals AllowAllTraffic```,"Azure Storage Sync Service configured with overly permissive network access This policy identifies Storage Sync Services configured with overly permissive network access. A Storage Sync Service is a management construct that represents registered servers and sync groups. Allowing all traffic to the Sync Service may allow a bad actor to brute force their way into the system and potentially get access to the entire network. With a private endpoint, the network traffic path is secured on both ends and access is restricted to only defined authorized entities. It is recommended to configure the Storage Sync Service with private endpoints to minimize the access vector. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage Sync Services dashboard \n3. Click on the reported Storage Sync Service\n4. Under the 'Settings' menu, click on 'Network'\n5. Under 'Allow access from' select 'Private endpoints only'\n6. Click on 'Private endpoint' and Create a private endpoint with required parameters \n7. Click on 'Save'." ```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-blob-diagnostic-settings' AND json.rule = (properties.logs[?(@.categoryGroup)] exists and properties.logs[*].enabled any true) or (properties.logs[?(@.category)] exists and properties.logs[*].enabled all true) as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;```,"Azure Storage account diagnostic setting for blob is disabled This policy identifies Azure Storage account blobs that have diagnostic logging disabled. By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account blobs. These logs provide valuable insights into the operations, performance, and security of the storage account blobs. As a best practice, it is recommended to enable diagnostic logs on all storage account blobs. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the blob resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. 
Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Action anyStartWith * and Resource equals * and Effect equals Allow)] exists and (policyArn exists and policyArn does not contain iam::aws:policy/AdministratorAccess)```,"AWS IAM policy allows full administrative privileges This policy identifies IAM policies with full administrative privileges. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended and considered a standard security advice to grant least privilege like granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the IAM dashboard\n3. In the navigation pane, click on Policies and then search for the policy name reported\n4. Select the policy, click on the 'Policy actions', select 'Detach'\n5. Select all Users, Groups, Roles that have this policy attached, Click on 'Detach policy'." "```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = state.name contains ""stopped"" ```","bikram_test This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = cors exists and cors.allowOrigins[*] contains ""*"" and cors.allowMethods[*] contains ""*""```","AWS Lambda function URL having overly permissive cross-origin resource sharing permissions This policy identifies AWS Lambda functions which have overly permissive cross-origin resource sharing (CORS) permissions. Overly permissive CORS settings (allowing wildcards) can potentially expose the Lambda function to unwarranted requests and cross-site scripting attacks. It is highly recommended to specify the exact domains (in 'allowOrigins') and HTTP methods (in 'allowMethods') that should be allowed to interact with your function to ensure a secure setup. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To properly configure CORS permissions, refer the following URL:\nhttps://docs.aws.amazon.com/lambda/latest/dg/API_Cors.html." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and firewallRules[?any(startIpAddress equals ""0.0.0.0"" and endIpAddress equals ""0.0.0.0"")] exists```","Azure SQL Server allow access to any Azure internal resources This policy identifies SQL Servers that are configured to allow access to any Azure internal resources. Firewall settings with start IP and end IP both with ‘0.0.0.0’ represents access to all Azure internal network. 
When this setting is enabled, the SQL server will accept connections from all Azure resources, including resources in other subscriptions. It is recommended to use firewall rules or VNET rules to allow access from specific network ranges or virtual networks. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the 'SQL servers' dashboard\n3. Click on the reported SQL server\n4. Click on 'Networking' under Security\n5. Unselect 'Allow Azure services and resources to access this server' under Exceptions if selected.\n6. Remove any firewall rule which allows access to 0.0.0.0 in startIpAddress and endIpAddress if any.\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isUppercaseCharactersRequired isFalse'```,"OCI IAM password policy for local (non-federated) users does not have an uppercase character This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not have an uppercase character in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to the OCI Console page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 UPPERCASE CHARACTER.\n\nNote: The console URL is region specific; your tenancy might have a different home region and thus a different console URL.." "```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equal ignore case ""RUNNING"" and (machineType contains ""machineTypes/n2d-"" or machineType contains ""machineTypes/c2d-"") and (confidentialInstanceConfig.enableConfidentialCompute does not exist or confidentialInstanceConfig.enableConfidentialCompute is false)```","GCP Compute instances with confidential computing disabled This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-nsg-list' AND json.rule = ' $.flowLogsSettings does not exist or $.flowLogsSettings.enabled is false or ($.flowLogsSettings.retentionPolicy.days does not equal 0 and $.flowLogsSettings.retentionPolicy.days less than 90) '```,"Azure Network Watcher Network Security Group (NSG) flow logs retention is less than 90 days This policy identifies Azure Network Security Groups (NSG) for which the flow log retention period is less than 90 days. To perform this check, enable this action on the Azure Service Principal: 'Microsoft.Network/networkWatchers/queryFlowLogStatus/action'. NSG flow logs, a feature of the Network Watcher app, enable you to view information about ingress and egress IP traffic through an NSG. The flow logs include information such as: - Outbound and inbound flows on a per-rule basis. - Network interface to which the flow applies. 
- 5-tuple information about the flow (source/destination IP, source/destination port, protocol). - Whether the traffic was allowed or denied. As a best practice, enable NSG flow logs and set the log retention period to at least 90 days. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Flow Logs:\n\n1. Log in to the Azure portal.\n2. Select 'Network Watcher'.\n3. Select 'NSG flow logs'.\n4. Select the NSG for which you need to modify the flow log settings.\n5. Set the Flow logs 'Status' to 'On'.\n6. Select the destination 'Storage account'.\n7. Set the 'Retention (days)' to 90 days or greater.\n8. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-access-analyzer' AND json.rule = status equals ACTIVE as X; config from cloud.resource where api.name = 'aws-region' AND json.rule = optInStatus does not equal not-opted-in as Y; filter '$.X.arn contains $.Y.regionName'; show X; count(X) less than 1```,"AWS IAM Access analyzer is not configured This policy identifies AWS regions in which the IAM Access analyzer is not configured. AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity and identify unintended access to your resources and data. So it is recommended to configure the Access analyzer in all regions in your account. NOTE: Access Analyzer analyzes only policies that are applied to resources in the same AWS Region that it's enabled in. To monitor all resources in your AWS environment, you must create an analyzer to enable Access Analyzer in each Region where you're using supported AWS resources. For more details: https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the IAM dashboard \n4. Go to 'Access analyzer', from the left panel\n5. Click on the 'Create analyzer' button\n6. On the Create analyzer page, enter the parameters as per your requirements.\n7. Click on the 'Create analyzer'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = kind starts with app and config.minTlsVersion is member of ('1.0', '1.1')```","Azure App Service Web app doesn't use latest TLS version This policy identifies Azure web apps that are not configured with the latest version of TLS encryption. Azure Web Apps provide a platform to host and manage web applications securely. Using the latest TLS version is crucial for maintaining secure connections. Older versions of TLS, such as 1.0 and 1.1, have known vulnerabilities that can be exploited by attackers. Upgrading to newer versions like TLS 1.2 or 1.3 ensures that the web app is better protected against modern security threats. It is highly recommended to use the latest TLS version (greater than 1.1) for secure web app connections. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under ""Settings"" section, Click on ""Configuration""\n5. In ""Platform Settings"", Set ""Minimum Inbound TLS Version"" to ""1.2"" or ""1.3""\n6. Click on ""Save"" icon at the top\n7. Click ""Continue"" to save the changes." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createpolicy and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deletepolicy and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updatepolicy) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for IAM policy changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for IAM policy changes. Monitoring and alerting on changes to IAM policies will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity and Access Management (IAM) policies. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Policy – Change Compartment, Policy – Create, Policy - Delete and Policy – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case openshift and state equal ignore case normal and serviceEndpoints.publicServiceEndpointEnabled is true```,"IBM Cloud OpenShift cluster is accessible by using public endpoint This policy identifies IBM Cloud OpenShift clusters which has public service endpoint enabled. If any cluster has public service endpoint enabled, the cluster will be accessible from Internet routable IP address. It is highly recommended to use a private service endpoint instead of a public service endpoint. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: An OpenShift cluster can be made private only at the time of creation. 
To create a private \nOpenShift cluster follow below URL:\nhttps://cloud.ibm.com/docs/openshift?topic=openshift-cluster-create-vpc-gen2&interface=ui#clusters_vpcg2_ui Please make sure to select 'Private endpoint only' at 'Master service endpoint' section.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = '(resources.applicationLoadBalancer[*] exists or resources.apiGateway[*] exists or resources.other[*] exists) and loggingConfiguration.resourceArn does not exist'```,"AWS Web Application Firewall v2 (AWS WAFv2) logging is disabled This policy identifies Web Application Firewall v2s (AWS WAFv2) for which logging is disabled. Enabling WAFv2 logging, logs all web requests inspected by the service which can be used for debugging and additional forensics. The logs will help to understand why certain rules are triggered and why certain web requests are blocked. You can also integrate the logs with any SIEM and log analysis tools for further analysis. It is recommended to enable logging on your Web Application Firewall v2s (WAFv2). For details: https://docs.aws.amazon.com/waf/latest/developerguide/logging.html#logging-management NOTE: Global (CloudFront) WAFv2 resources are out of scope for this policy. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging on your reported WAFv2s, follow below mentioned URL:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/logging.html#logging-management\n\nNOTE: No additional cost to enable logging on AWS WAFv2 (minus Kinesis Firehose and any storage cost).\nFor Kinesis Firehose or any storage additional charges refer https://aws.amazon.com/cloudwatch/pricing/." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(138,138) or destinationPortRanges[*] contains _Port.inRange(138,138) ))] exists```","Azure Network Security Group allows all traffic on NetBIOS (UDP Port 138) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on NetBIOS UDP port 138. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict NetBIOS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. 
Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and ingressSettings equals ALLOW_ALL```,"GCP Cloud Function configured with overly permissive Ingress setting This policy identifies GCP Cloud Functions that are configured with overly permissive Ingress setting. With overly permissive Ingress setting, all inbound requests to the function are allowed, from both the public and resources within the same project. It is recommended to restrict the traffic from the public and other resources, to get better network-based access control and allow traffic from VPC networks in the same project or traffic through the Cloud Load Balancer. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service (Left Panel)\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings' drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. In 'Ingress settings', select either 'Allow internal traffic only' or 'Allow internal traffic and traffic from Cloud Load Balancing'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'." ```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' as Y; filter '$.X.description.securityGroups[*] contains $.Y.groupId and $.Y.ipPermissionsEgress[*] is empty'; show X;```,"AWS Elastic Load Balancer (ELB) has security group with no outbound rules This policy identifies Elastic Load Balancers (ELB) which have security group with no outbound rules. A security group with no outbound rule will deny all outgoing requests. ELB security groups should have at least one outbound rule, ELB with no outbound permissions will deny all traffic going to any EC2 instances or resources configured behind that ELB; in other words, the ELB is useless without outbound permissions. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on the security group, it will open Security Group properties in a new tab in your browser\n6. Click on the 'Outbound Rules'\n7. If there are no rules, click on 'Edit rules', add an outbound rule according to your ELB functional requirement\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-target-https-proxies' AND json.rule = 'quicOverride does not contain ENABLE'```,"GCP Load balancer HTTPS target proxy is not configured with QUIC protocol This policy identifies Load Balancer HTTPS target proxies which are not configured with QUIC protocol. 
Enabling QUIC protocol in load balancer target https proxies adds advantage by establishing connections faster, stream-based multiplexing, improved loss recovery, and eliminates head-of-line blocking. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'advanced menu' hyperlink to view target proxies\n5. Click on 'Target proxies' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'URL map'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. Select 'Enabled' from the dropdown for 'QUIC negotiation'\n11. Click on 'Done'\n12. Click on 'Update'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' and json.rule = state .name contains ""running""```","Khalid Test Policy This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = status equals ACTIVE and containerDefinitions[*].privileged exists and containerDefinitions[*].privileged is true```,"AWS ECS task definition elevated privileges enabled This policy identifies the ECS containers that are having elevated privileges on the host container instance. When the Privileged parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). Note: This parameter is not supported for Windows containers or tasks using the Fargate launch type. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Create a task definition revision.\n\n1. Open the Amazon ECS console.\n2. From the navigation bar, choose the region that contains your task definition.\n3. In the navigation pane, choose Task Definitions.\n4. On the Task Definitions page, select the box to the left of the task definition to revise and choose Create new revision.\n5. On the Create new revision of Task Definition page, change the existing Container Definitions.\n6. Under Security, uncheck the Privileged box.\n7. Verify the information and choose Update, then Create.\n8. If your task definition is used in a service, update your service with the updated task definition.\n9. Deactivate previous task definition." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-neptune-db-cluster' AND json.rule = Status contains available and IAMDatabaseAuthenticationEnabled is false```,"AWS Neptune Cluster not configured with IAM authentication This policy identifies AWS Neptune clusters that are not configured with IAM authentication. If you enable IAM authentication you don't need to store user credentials in the database, because authentication is managed externally using IAM. IAM database authentication ensures the network traffic to and from database clusters is encrypted using Secure Sockets Layer (SSL), provides central access management to your database resources and enforces use of profile credentials instead of a password, for greater security. 
This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable IAM authentication for AWS Neptune cluster follow the steps mentioned in below URL:\n\nhttps://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-enable.html." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = type equals application and listeners[?any(protocol equals HTTPS and sslPolicy exists and sslPolicy is not member of ('ELBSecurityPolicy-TLS13-1-2-2021-06','ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04'))] exists```","AWS Application Load Balancer (ALB) is not using the latest predefined security policy This policy identifies Application Load Balancers (ALBs) are not using the latest predefined security policy. A security policy is a combination of protocols and ciphers. The protocol establishes a secure connection between a client and a server and ensures that all data passed between the client and your load balancer is private. A cipher is an encryption algorithm that uses encryption keys to create a coded message. So it is recommended to use the latest predefined security policy which uses only secured protocol and ciphers. We recommend using either non-FIPS security policy ELBSecurityPolicy-TLS13-1-2-2021-06 or FIPS security policy ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04 to meet compliance and security standards that require disabling certain TLS protocol versions or to support legacy clients that require deprecated ciphers. For more details: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html#describe-ssl-policies This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n\n4. Click on the reported Load Balancer\n\n5. On the 'Listeners' tab, Choose the 'HTTPS' or 'SSL' rule\n\n6. Click on 'Edit Listener' in the 'Manage listener' dropdown, Change 'Security policy' to 'ELBSecurityPolicy-TLS13-1-2-2021-06' (non-FIPS) or 'ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04' (FIPS) to meet compliance and security standards that require disabling certain TLS protocol versions or to support legacy clients that require deprecated ciphers.\n\n7. Click on 'Update' to save your changes." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = zone exists and locations[*] size less than 3```,"GCP Kubernetes cluster not in redundant zones Putting resources in different zones in a region provides isolation from many types of infrastructure, hardware, and software failures. This policy alerts if your cluster is not located in at least 3 zones. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Add zones to your zonal cluster.\n\n1. Visit the Google Kubernetes Engine menu in GCP Console.\n2. Click the cluster's Edit button, which looks like a pencil.\n3. From the Additional zones section, select the desired zones.\n4. Click Save.." 
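Aside (not part of the policy export): a minimal sketch of the remediation for the 'GCP Kubernetes cluster not in redundant zones' entry above, assuming the google-cloud-container Python client and hypothetical project, location, and cluster names. It requests three zones so node pools span multiple failure domains.

```python
# Illustrative sketch only; project, location, and cluster names are hypothetical.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
cluster_name = "projects/my-project/locations/us-central1-a/clusters/my-cluster"

# Ask for three zones so the cluster's node pools are spread across failure domains.
update = container_v1.ClusterUpdate(
    desired_locations=["us-central1-a", "us-central1-b", "us-central1-c"]
)
operation = client.update_cluster(request={"name": cluster_name, "update": update})
print("Started update operation:", operation.name)
```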
"```config from cloud.resource where api.name = 'aws-iam-list-roles' AND json.rule = role.assumeRolePolicyDocument.Statement[*].Action contains ""sts:AssumeRoleWithWebIdentity"" and role.assumeRolePolicyDocument.Statement[*].Principal.Federated contains ""cognito-identity.amazonaws.com"" and role.assumeRolePolicyDocument.Statement[*].Effect contains ""Allow"" and role.assumeRolePolicyDocument.Statement[*].Condition contains ""cognito-identity.amazonaws.com:amr"" and role.assumeRolePolicyDocument.Statement[*].Condition contains ""unauthenticated"" as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any(Effect equals Allow and Action contains :* and Resource equals * )] exists as Y; filter ""($.X.inlinePolicies[*].policyDocument.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action contains :* ) or ($.X.attachedPolicies[*].policyArn intersects $.Y.policyArn)""; show X;```","AWS Cognito service role with wide privileges does not validate authentication This policy identifies the AWS Cognito service role that has wide privileges and does not validate user authentication. AWS Cognito is an identity and access management service for web and mobile apps. AWS Cognito service roles define permissions for AWS services accessing resources. The 'amr' field in the service role represents how the user was authenticated. if the user was authenticated using any of the supported providers, the 'amr' will contain 'authenticated' and the name of the provider. Not validating the 'amr' field can allow an unauthenticated user (guest access) with a valid token signed by the identity-pool to assume the Cognito role. If this Cognito role has a '*' wildcard in the action and resource, it could lead to lateral movement or unauthorized access. Ensuring limiting privileges according to business requirements can help in restricting unauthorized access and misuse of resources. It is recommended to limit the Cognito service role used for guest access to not have a '*' wildcard in the action or resource. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: To remove the policy which have excessive permission from the guess access role,\n1. Log in to the AWS console.\n2. Navigate to the IAM service.\n3. Click on Roles.\n4. Click on the reported IAM role.\n5. Under 'Permissions policies' section, remove the policy having excessive permissions and assign a limited permission policy as required for a particular role.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = 'notebookInstanceStatus equals InService and kmsKeyId does not exist'```,"AWS SageMaker notebook instance not configured with data encryption at rest using KMS key This policy identifies SageMaker notebook instances that are not configured with data encryption at rest using the AWS managed KMS key. It is recommended to implement encryption at rest in order to protect data from unauthorized entities. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS SageMaker notebook instance can not be configured with data encryption at rest once it is created. 
You need to create a new notebook instance with encryption at rest using the KMS key; migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a New AWS SageMaker notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, From the 'Permissions and encryption' section, \nselect the KMS key from the 'Encryption key - optional' dropdown list. If no KMS keys already, you have to create a KMS key first.\n5. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and Choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu and, select the 'Stop' option, and when instance stops, select the 'Delete' option.\n5. Within Delete dialog box, click the Delete button to confirm the action.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and settings.databaseFlags[?(@.name=='log_min_messages')] does not exist""```","GCP PostgreSQL instance database flag log_min_messages is not set This policy identifies PostgreSQL database instances in which database flag log_min_messages is not set. The log_min_messages flag controls which message levels are written to the server log, valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each level includes all the levels that follow it. log_min_messages flag value changes should only be made in accordance with the organization's logging policy. Auditing helps in troubleshooting operational problems and also permits forensic analysis. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5.Under 'Configuration options', click on 'Add item' in 'Flags' section, choose the flag 'log_min_messages' from the drop-down menu and set the value in accordance with your organization's logging policy.\n6. Click Save." ```config from cloud.resource where api.name = 'oci-cloudguard-configuration' AND json.rule = status does not equal ignore case ENABLED```,"OCI Cloud Guard is not enabled in the root compartment of the tenancy This policy identifies the absence of OCI Cloud Guard enablement in the root compartment of the tenancy. OCI Cloud Guard is a vital service that detects misconfigured resources and insecure activities within an OCI tenancy. It offers security administrators visibility to identify and resolve these issues promptly. Cloud Guard not only detects but also suggests, assists, or takes corrective actions to mitigate security risks. By enabling Cloud Guard in the root compartment of the tenancy with default configuration, activity detectors, and responders, administrators can proactively monitor and secure their OCI resources against potential security threats. 
As best practice, it is recommended to have Cloud Guard enabled in the root compartment of your tenancy. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the OCI Cloud Guard setting, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/cloud-guard/using/part-start.htm#cg-access-enable." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = ""(policies[*].policyAttributeDescriptions[?(@.attributeValue=='true')].attributeName equals Protocol-TLSv1) or (policies[*].policyAttributeDescriptions[?(@.attributeValue=='true')].attributeName equals Protocol-SSLv3) or (policies[*].policyAttributeDescriptions[?(@.attributeValue=='true')].attributeName equals Protocol-TLSv1.1)""```","AWS Elastic Load Balancer (Classic) SSL negotiation policy configured with vulnerable SSL protocol This policy identifies Elastic Load Balancers (Classic) which are configured with SSL negotiation policy containing vulnerable SSL protocol. The SSL protocol establishes a secure connection between a client and a server and ensures that all the data passed between the client and your load balancer is private. As a security best practice, it is recommended to use the latest version SSL protocol. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Click on the reported Load Balancer\n6. On 'Listeners' tab, Click on 'Edit' button\n7. On 'Edit Listeners' popup for rule 'HTTPS/SSL',\n- If your cipher is 'Predefined Security Policy', change 'Cipher' to 'ELBSecurityPolicy-TLS-1-2-2017-01 or latest'\nOR\n- If your cipher is 'Custom Security Policy', Choose 'Protocol-TLSv1.2' only on 'SSL Protocols' section\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals CosmosDbs and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud set to Off for Cosmos DB This policy identifies Azure Microsoft Defender for Cloud which has defender setting for Cosmos DB set to Off. Enabling Azure Defender for the cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behaviour analytics. Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities, or malicious insiders. It is highly recommended to enable Azure Defender for Cosmos DB. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click 'Select types >' in the row for 'Databases'\n7. 
Set the radio button next to 'Azure Cosmos DB' to 'On'\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' and api.name= 'aws-es-describe-elasticsearch-domain' AND json.rule = serviceSoftwareOptions.updateAvailable exists and serviceSoftwareOptions.updateAvailable is true```,"AWS OpenSearch domain does not have the latest service software version This policy identifies Amazon OpenSearch Service domains that have service software updates available but not installed for the domain. Amazon OpenSearch Service is a managed solution for deploying, managing, and scaling OpenSearch clusters. Service software updates deliver the most recent platform fixes, enhancements, and features for the environment, ensuring domain security and availability. To minimize service disruption, it's advisable to schedule updates during periods of low domain traffic. It is recommended to keep OpenSearch regularly updated to maintain system security, while also accessing the latest features and improvements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To request a service software update for an Amazon OpenSearch Service, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the region from the dropdown in the top right corner where the alert is generated\n3. In the Navigation Panel on the left, under 'Analytics', select 'Amazon OpenSearch Service'\n4. Select the reported domain name\n5. Under 'Actions', under 'Service software update', click on 'Update' and select one of the following options:\n\na. Apply update now - Immediately schedules the action to happen in the current hour if there's capacity available. If capacity isn't available, we provide other available time slots to choose from\n\nb. Schedule it in off-peak window - Only available if the off-peak window is enabled for the domain. Schedules the update to take place during the domain's configured off-peak window. There's no guarantee that the update will happen during the next immediate window. Depending on capacity, it might happen in subsequent days\n\nc. Schedule for specific date and time - Schedules the update to take place at a specific date and time. If the time that you specify is unavailable for capacity reasons, you can select a different time slot\n\n6. Choose 'Confirm'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-docdb-db-cluster' AND json.rule = Status equals available and ( BackupRetentionPeriod does not exist or BackupRetentionPeriod less than 7 )```,"AWS DocumentDB clusters have backup retention period less than 7 days This policy identifies Amazon DocumentDB clusters lacking sufficient backup retention tenure. Amazon DocumentDB clusters are managed database services on AWS, compatible with MongoDB. They handle tasks like provisioning and backup. With features like automated backups and read replicas, they offer a reliable solution for MongoDB workloads in the cloud. The backup retention period denotes the duration for storing automated backups of the DocumentDB cluster. Inadequate retention periods heighten the risk of data loss, compliance issues, and hinder effective recovery in security breaches or system failures. It is recommended to ensure a backup retention period of at least 7 days or according to your business and compliance requirement. 
This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify an Amazon DocumentDB cluster's backup retention period:\n1. Sign in to the AWS Management Console.\n2. In the console, select the specific region from the region dropdown in the top right corner where the alert is generated.\n3. Navigate to the Amazon DocumentDB console by either searching for 'Amazon DocumentDB' in the AWS services search bar or directly accessing the Amazon DocumentDB service.\n4. In the navigation pane, choose 'Clusters' and select the cluster name that is reported.\n5. Click 'Actions' in the right corner, and then select 'Modify' from the drop-down menu.\n6. On the Modify cluster page, under the 'Backup' section, select the desired backup retention period in days from the 'Backup retention period' drop-down menu based on your business or compliance requirements.\n7. Click 'Continue' to review a summary of your changes.\n8. Choose either 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your scheduling preference for modifications.\n9. Click on 'Modify Cluster' to implement the changes.." "```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ""((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false)))) and websiteConfiguration does not exist""```","AWS S3 buckets are accessible to public via ACL This policy identifies S3 buckets which are publicly accessible via ACL. Amazon S3 often used to store highly sensitive enterprise data and allowing public access to such S3 bucket through ACL would result in sensitive data being compromised. It is highly recommended to disable ACL configuration for all S3 buckets and use resource based policies to allow access to S3 buckets. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. If Access Control List' is set to 'Public' follow below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save." 
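Aside (not part of the policy export): a minimal boto3 sketch for the 'AWS S3 buckets are accessible to public via ACL' entry above; the bucket name is hypothetical. It resets the ACL to private and enables the bucket-level public access block, mirroring the console steps.

```python
# Illustrative sketch only; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-reported-bucket"

# Remove 'Everyone' (AllUsers) grants by resetting the bucket ACL to private.
s3.put_bucket_acl(Bucket=bucket, ACL="private")

# Block future public ACLs and public policies at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```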
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-compute' AND json.rule = properties.provisioningState equal ignore case Succeeded AND properties.properties.connectivityEndpoints.publicIpAddress exists AND properties.properties.connectivityEndpoints.publicIpAddress does not equal ignore case ""null""```","Azure Machine learning compute instance configured with public IP This policy identifies Azure Machine Learning compute instances which are configured with public IP. Configuring an Azure Machine Learning compute instance with a public IP exposes it to significant security risks, including unauthorized access and cyber-attacks. This setup increases the likelihood of data breaches, where sensitive information and intellectual property could be accessed by unauthorized individuals, leading to potential data leakage and loss. As a best practice, it is recommended not to configure Azure Machine Learning instances with public IP. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Disabling a public IP address on an existing Azure Machine Learning compute instance is not supported without deleting and recreating the instance. To secure your instance, it’s recommended to configure it without a public IP from the start. Additionally, to update an existing Azure Machine Learning workspace to use a managed virtual network, all compute resources (including compute instances, compute clusters, and managed online endpoints) must first be deleted.\n\nTo create a new compute instance with no public IP:\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported compute instance is associated with\n4. Under 'Settings' go to 'Networking' section\n5. At the top, select the 'Workspace managed outbound access' tab\n6. Select either 'Allow Internet Outbound' or 'Allow Only Approved Outbound' based on your requirements, if one hasn't been chosen already\n7. Click on 'Save'\n8. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n9. A new tab will open for Azure ML Studio\n10. In the left panel, under 'Manage' section, click on the 'Compute'\n11. Click 'New' to create a new compute instance\n12. In the 'Security' tab, under the 'Virtual network' section, enable the 'No public IP' option to disable the public IP\n13. Select 'Review + Create' to create the compute instance." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ownershipControls.rules[*] does not contain BucketOwnerEnforced```,"AWS S3 bucket access control lists (ACLs) in use This policy identifies AWS S3 buckets which are using access control lists (ACLs). ACLs are legacy way to control access to S3 buckets. It is recommended to disable bucket ACL and instead use IAM policies or S3 bucket policies to manage access to your S3 buckets. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions' tab\n5. Under 'Object Ownership' click 'Edit'\n6. Select 'ACLs disabled (recommended)'\n7. 
Click on 'Save changes'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING and masterAuthorizedNetworksConfig.enabled does not equal ""true""```","GCP Kubernetes Engine Clusters have Master authorized networks disabled This policy identifies Kubernetes Engine Clusters which have disabled Master authorized networks. Enabling Master authorized networks will let the Kubernetes Engine block untrusted non-GCP source IPs from accessing the Kubernetes master through HTTPS. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the below link for enabling Master authorized networks feature on kubernetes clusters,\nLink: https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks#add." ```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = associatePublicIpAddress exists and associatePublicIpAddress is true```,"AWS Auto Scaling group launch configuration has public IP address assignment enabled This policy identifies the autoscaling group launch configuration that is configured to assign a public IP address. Auto Scaling groups assign a public IP address to the group's ec2 instances if its associated launch configuration is configured to assign a public IP address. Amazon EC2 instances should only be accessible from behind a load balancer instead of being directly exposed to the internet. It is recommended that the Amazon EC2 instances in an autoscaling group launch configuration do not have an associated public IP address except for limited edge cases. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: An Auto Scaling group is associated with one launch configuration at a time. You cannot modify a launch configuration after you have created it. To change the launch configuration for an Auto Scaling group, You need to use an existing launch configuration as the basis for a new launch configuration first. Then, update the Auto Scaling group to use the new launch configuration before you delete the reported Auto Scaling group configuration.\n\nTo update the Auto Scaling group to use the new launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Copy launch configuration'. This sets up a new launch configuration with the same options as the original, but with 'Copy' added to the name.\n4. On the 'Create launch configuration' page, expand 'Advanced details' under 'Additional Configuration - optional'.\n5. Under the IP address type, choose 'Do not assign a public IP address to any instances'.\n6. When you have finished, click on the 'Create launch configuration' button at the bottom of the page.\n7. On the navigation pane, under Auto Scaling, choose Auto Scaling Groups.\n8. Select the check box next to the Auto Scaling group.\n9. A split pane opens up at the bottom part of the page, showing information about the group that's selected.\n10. 
On the Details tab, click on the 'Edit' button adjacent to the 'Launch configuration' option.\n11. Under the 'Launch configuration' dropdown, select the newly created launch configuration.\n12. When you have finished changing your launch configuration, click on the 'Update' button at the bottom of the page.\n\nAfter you change the launch configuration for an Auto Scaling group, any new instances are launched with the new configuration options. Existing instances are not affected. To update existing instances, either terminate them so that they are replaced by your Auto Scaling group or allow automatic scaling to gradually replace older instances with newer instances based on your termination policies.\n\nTo delete the reported Auto Scaling group launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Delete launch configuration'.\n4. Click on the 'Delete' button to delete the autoscaling group launch configuration.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-virtual-mfa-devices' AND json.rule = 'serialNumber contains root-account-mfa-device and user.arn contains root'```,"AWS root account configured with Virtual MFA This policy identifies AWS root accounts which are configured with Virtual MFA. Root is an important role in your account and root accounts must be configured with hardware MFA. Hardware MFA adds extra security because it requires users to type a unique authentication code from an approved authentication device when they access AWS websites or services. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: To manage MFA devices for your AWS account, you must use your root user credentials to sign in to AWS. You cannot manage MFA devices for the root user while signed in with other credentials.\n\n1. Sign in to the AWS Management Console with your root user credentials\n2. Go to IAM\n3. Do one of the following:\nOption 1: Choose Dashboard, and under Security Status, expand Activate MFA on your root account.\nOption 2: On the right side of the navigation bar, select your account name, and then choose My Security Credentials. If necessary, choose Continue to Security Credentials. Then expand the Multi-Factor Authentication (MFA) section on the page.\n4. Choose Manage MFA or Activate MFA, depending on which option you chose in the preceding step.\n5. In the wizard, choose A hardware MFA device and then choose Next Step.\n6. If you have U2F security key as hardware MFA device, choose U2F security key and click on Continue. Next plug the USB U2F security key, when setup is complete click on Close.\nIf you have any other hardware MFA device, choose Other hardware MFA device option\na. In the Serial Number box, type the serial number that is found on the back of the MFA device.\nb. In the Authentication Code 1 box, type the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number.\nc. Wait 30 seconds while the device refreshes the code, and then type the next six-digit number into the Authentication Code 2 box. 
You might need to press the button on the front of the device again to display the second number.\nd. Choose Next Step. The MFA device is now associated with the AWS account.\n\nImportant:\nSubmit your request immediately after generating the authentication codes. If you generate the codes and then wait too long to submit the request, the MFA device successfully associates with the user but the MFA device becomes out of sync. This happens because time-based one-time passwords (TOTP) expire after a short period of time. If this happens, you can resync the device.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case ""Running"" AND kind contains ""functionapp"" AND kind does not contain ""workflowapp"" AND kind does not equal ""app"" AND config.minTlsVersion does not equal ""1.2""```","Azure Function App doesn't use latest TLS version This policy identifies Azure Function App which are not set with latest version of TLS encryption. Azure currently allows the Function App to set TLS versions 1.0, 1.1 and 1.2. It is highly recommended to use the latest TLS 1.2 version for Function App secure connections. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'TLS/SSL settings'\n5. In 'Protocol Settings', Set 'Minimum TLS Version' to '1.2'." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-nsg' AND json.rule = (securityRules[?any((((*.destinationPortRange.min == 22 or *.destinationPortRange.max == 22) or (*.destinationPortRange.min < 22 and *.destinationPortRange.max > 22)) or (protocol equals ""all"") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))) and (source equals 0.0.0.0/0 and direction equals INGRESS))] exists)```","OCI security group allows unrestricted ingress access to port 22 This policy identifies OCI Security groups that allow unrestricted ingress access to port 22. It is recommended that no security group allows unrestricted ingress access to port 22. As a best practice, remove unfettered connectivity to remote console services, such as Secure Shell (SSH), to reduce server's exposure to risk. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Security Rules\n5. If you want to add a rule, click Add Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```,"sailesh of liron's policy #4 This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"Azure PostgreSQL servers not configured with private endpoint This policy identifies Azure PostgreSQL database servers that are not configured with private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses which includes IP addresses within Azure. It is recommended to create private endpoint for secure communication for your Azure PostgreSQL database. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for Postgres servers'\n3. Click on the reported Postgres server instance you want to modify \n4. Select 'Networking' under 'Settings' from left panel \n5. Under 'Private endpoint', click on 'Add private endpoint' to add a private endpoint\n\nRefer to below link for step by step process:\nhttps://learn.microsoft.com/en-us/azure/postgresql/single-server/how-to-configure-privatelink-portal." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and iamPolicy.bindings[?any(members[*] is member of (""allAuthenticatedUsers"",""allUsers""))] exists```","mkurter clone of GCP Cloud Function is publicly accessible This policy identifies GCP Cloud Functions that are publicly accessible. Allowing 'allusers' / 'allAuthenticatedUsers' to cloud functions can lead to unauthorised invocations of the functions or unwanted access to sensitive information. It is recommended to follow least privileged access policy and grant access restrictively. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to GCP console\n2. Navigate to service 'Cloud Functions'\n3. Click on the function on which the alert is generated\n4. Go to tab 'PERMISSIONS'\n5. Review the roles to see if 'allusers'/'allAuthenticatedUsers' is present\n6. Click on the delete icon to revoke access from 'allusers'/'allAuthenticatedUsers'\n7. On Pop-up select the check box to confirm \n8. Click on 'REMOVE'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-emr-describe-cluster' AND json.rule = 'status.state does not contain TERMINATING and securityConfiguration does not exist'```,"AWS EMR cluster is not configured with security configuration This policy identifies EMR clusters which are not configured with security configuration. With Amazon EMR release version 4.8.0 or later, you can use security configurations to configure data encryption, Kerberos authentication, and Amazon S3 authorization for EMRFS. This is applicable to aws cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration\n7. Follow below link to configure a security configuration\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-create-security-configuration.html\n8. Click on 'Create' button\n9. On the left menu of EMR dashboard Click 'Clusters'\n10. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n11. In the Cloning popup, choose 'Yes' and Click 'Clone'\n12. On the Create Cluster page, in the Security Options section, click on 'security configuration'\n13. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n14. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it\n15. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted\n16. Click on the 'Terminate' button from the top menu\n17. On the 'Terminate clusters' pop-up, click 'Terminate'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = '(lastAccessedDate does not exist and _DateTime.ageInDays(createdDate) > 90) or (lastAccessedDate exists and _DateTime.ageInDays(lastAccessedDate) > 90)'```,"AWS Secrets Manager secret not used for more than 90 days This policy identifies the AWS Secrets Manager secret not accessed within 90 days. AWS Secrets Manager securely stores and manages sensitive information like API keys, passwords, and certificates. Leaving unused secrets in AWS Secrets Manager increases the risk of security breaches by providing unnecessary access points for attackers, potentially leading to unauthorized data access or leaks. It is recommended to routinely review and delete unused secrets to reduce the attack surface and potential for unauthorized access. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete an unused AWS Secrets Manager secret, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the region from the dropdown in the top right corner where the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Security, Identity, & Compliance', select 'Secrets Manager'\n4. Select the reported Secrets Manager secret\n5. In the Secret details section, choose 'Actions', and then choose 'Delete secret'\n6. In the Disable secret and schedule deletion dialog box, in Waiting period, enter the number of days to wait before the deletion becomes permanent.\n7. Choose 'Schedule deletion'." 
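Aside (not part of the policy export): a minimal boto3 sketch that mirrors the 'AWS Secrets Manager secret not used for more than 90 days' query above, listing secrets whose last access (or creation, if never accessed) is older than 90 days and scheduling them for deletion with a recovery window. Review each candidate before deleting.

```python
# Illustrative sketch only; review candidates before scheduling deletion.
import boto3
from datetime import datetime, timedelta, timezone

sm = boto3.client("secretsmanager")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in sm.get_paginator("list_secrets").paginate():
    for secret in page["SecretList"]:
        last_used = secret.get("LastAccessedDate") or secret["CreatedDate"]
        if last_used < cutoff:
            # A 30-day waiting period keeps the secret recoverable after deletion is scheduled.
            sm.delete_secret(SecretId=secret["ARN"], RecoveryWindowInDays=30)
            print("Scheduled deletion for", secret["Name"])
```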
"```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(1434,1434)""```","Alibaba Cloud Security group allow internet traffic to MS SQL Monitor port (1434) This policy identifies Security groups that allow inbound traffic on MS SQL Monitor port (1434) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 1434, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-resource-group' AND json.rule = locks.* size equals 0```,"Azure Resource Group does not have a resource lock Azure Resource Manager locks provide a way to lock down Azure resources from being deleted or modified. The lock level can be set to either 'CanNotDelete' or 'ReadOnly'. When you apply a lock at a parent scope, all resources within the scope inherit the same lock, and the most restrictive lock takes precedence. This policy identifies Azure Resource Groups that do not have a lock set. As a best practice, place a lock on important resources to prevent accidental or malicious modification or deletion by unauthorized users. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Resource groups' dashboard\n3. Select the resource group that you want to lock\n4. Select 'Locks' under 'Settings' from left panel, then click on 'Add'\n5. Specify the lock name and type\n6. Select on 'OK' to save your changes." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-account-password-policy' AND json.rule = 'requireNumbers contains false and requireSymbols contains false and expirePasswords contains false and allowUsersToChangePassword contains false and requireLowercaseCharacters contains false and requireUppercaseCharacters contains false and maxPasswordAge does not exist and passwordReusePrevention does not exist and minimumPasswordLength==6'```,"AWS IAM Password policy is unsecure Checks to ensure that IAM password policy is in place for the cloud accounts. As a security best practice, customers must have strong password policies in place. 
This policy ensures password policies are set with all the following options: - Minimum Password Length - At least one Uppercase letter - At least one Lowercase letter - At least one Number - At least one Symbol/non-alphanumeric character - Users have permission to change their own password - Password expiration period - Password reuse - Password expiration requires administrator reset This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to AWS Console and navigate to the 'IAM' Service\n2. Click on 'Account Settings'\n3. Under 'Password Policy', select and set all the options\n4. Click on 'Apply password policy'." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-load-balancer' AND json.rule = 'listenerPortsAndProtocal[*].listenerProtocal equals http'```,"Alibaba Cloud SLB listener that allow connection requests over HTTP This policy identifies Server Load Balancer (SLB) listeners that are configured to accept connection requests over HTTP instead of HTTPS. As a best practice, use the HTTPS protocol to encrypt the communication between the application clients and the server load balancer. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Once a load balancer listener is created, its protocol cannot be modified. So to resolve this alert, delete the existing HTTP Listener and create a new listener with HTTPS protocol.\n\nTo create a new HTTPS Listener follow:\n1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Click on the reported load balancer\n4. In the 'Listeners' tab, click on 'Add Listener'\n5. Select 'Select Listener Protocol' as 'HTTPS' and other parameters as per your requirement.\n6. Click on 'Next' \n7. Choose 'SSL Certificates', 'Backend Servers' and 'Health Check' sections parameters accordingly and Click on 'Submit'\n\nTo delete existing HTTP Listener follow:\n1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Click on the reported load balancer\n4. In the 'Listeners' tab, Choose HTTP Listener, Click on 'More' and select 'Remove'\n5. Click on 'OK'." ```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Icmp or protocol equals *) and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists```,"Azure Network Security Group allows all traffic on ICMP (Ping) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on ICMP (Ping) protocol. ICMP is used by devices to communicate error messages and status. While ICMP is useful for diagnostics and troubleshooting, it can also be used to exploit or disrupt systems. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict ICMP (Ping) solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-instances-container-group' AND json.rule = properties.provisioningState equals Succeeded and properties.ipAddress.type exists and properties.ipAddress.type equals Public```,"Azure Container Instance is not configured with virtual network This policy identifies Azure Container Instances (ACI) that are not configured with a virtual network. Making container instances public exposes them on an internet-routable network. By deploying container instances into an Azure virtual network, your containers can communicate securely with other resources in the virtual network. So it is recommended to configure all your container instances within a virtual network. For more details: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-vnet This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Virtual network can only be configured at the time of container instance creation. Hence, it is suggested to delete an existing container instance that is not configured with a virtual network and create a new container instance configured with a virtual network and secure values.\nNote: Backup or migrate data from the container instance before deleting it.\n\nTo create a Container Instance within a virtual network; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-vnet\n\nTo delete a reported Container instance; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal#clean-up-resources." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-file-storage-file-system' as X; config from cloud.resource where api.name = 'oci-file-storage-export' AND json.rule = (exportOptions[?any(source equals 0.0.0.0/0 and requirePrivilegedSourcePort is false and access equals READ_WRITE and identitySquash equals NONE)] exists) as Y; filter '($.X.id equals $.Y.fileSystemId)';show X;```,"OCI File Storage File System Export is publicly accessible This policy identifies the OCI File Storage File Systems Exports that are publicly accessible. Monitoring and alerting on publicly accessible file systems exports will help in identifying changes to the security posture and thus reduces risk for sensitive data being leaked. It is recommended that no File System exports be publicly accessible. FMI : https://docs.cloud.oracle.com/en-us/iaas/Content/File/Tasks/exportoptions.htm#scenarios This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. 
Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on the alerted Export Path from the list of Exports\n5. Click on the Edit NFS Export Options\n6. Edit the export options to make it more restrictive\n7. Click Update." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(23,23) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on Telnet port (23) This policy identifies GCP Firewall rules which allow all inbound traffic on Telnet port (23). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the Telnet port (23) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = lifecyclePolicy does not exist```,"AWS ECR Repository not configured with a lifecycle policy This policy identifies AWS ECR Repositories that are not configured with a lifecycle policy. Amazon ECR lifecycle policies enable you to specify the lifecycle management of images in a repository. This helps to automate the cleanup of unused images and the expiration of images based on age or count. As best practice, it is recommended to configure ECR repository with lifecycle policy which helps to avoid unintentionally using outdated images in your repository. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure AWS ECR Repository with a lifecycle policy follow the steps mentioned in below URL:\n\nhttps://docs.aws.amazon.com/AmazonECR/latest/userguide/lpp_creation.html." "```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = clientToken is not empty AND monitoring.state contains ""running""```","vv15_2 This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
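For the AWS ECR lifecycle policy finding above, the same check and fix can also be done programmatically. The following is a minimal sketch using boto3, assuming AWS credentials with ECR describe/get/put-lifecycle-policy permissions; the 14-day expiry rule for untagged images is illustrative only and not part of the policy itself.

```python
# Sketch: find ECR repositories without a lifecycle policy and attach an
# illustrative one (expire untagged images after 14 days).
import json
import boto3

ecr = boto3.client("ecr")

EXAMPLE_POLICY = {
    "rules": [{
        "rulePriority": 1,
        "description": "Expire untagged images older than 14 days (illustrative)",
        "selection": {
            "tagStatus": "untagged",
            "countType": "sinceImagePushed",
            "countUnit": "days",
            "countNumber": 14,
        },
        "action": {"type": "expire"},
    }]
}

for page in ecr.get_paginator("describe_repositories").paginate():
    for repo in page["repositories"]:
        name = repo["repositoryName"]
        try:
            ecr.get_lifecycle_policy(repositoryName=name)
        except ecr.exceptions.LifecyclePolicyNotFoundException:
            # Repository matches the policy above: no lifecycle policy configured.
            print(f"{name}: no lifecycle policy, attaching example rule")
            ecr.put_lifecycle_policy(
                repositoryName=name,
                lifecyclePolicyText=json.dumps(EXAMPLE_POLICY),
            )
```

Adjust the rule selection (tag status, count type, retention) to your own image retention requirements before applying it.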
```config from cloud.resource where api.name = 'oci-analytics-instance' AND json.rule = lifecycleState equal ignore case ACTIVE AND networkEndpointDetails.networkEndpointType equal ignore case PUBLIC AND (networkEndpointDetails.whitelistedServices is empty AND networkEndpointDetails.whitelistedIps is empty AND networkEndpointDetails.whitelistedVcns is empty)```,"OCI Oracle Analytics Cloud (OAC) access is not restricted to allowed sources or deployed within a Virtual Cloud Network This policy identifies Oracle Analytics Cloud (OAC) instances that are not restricted to specific sources or not deployed within a Virtual Cloud Network (VCN). OAC is a scalable service for enterprise analytics, and restricting its access to corporate IP addresses or VCNs enhances security by reducing exposure to unauthorized access. Deploying OAC instances within a VCN and implementing access control rules is essential for protecting sensitive data. This ensures that only authorized sources can connect to OAC, mitigating risks and maintaining data integrity. As best practice, it is recommended to have new OAC instances deployed within a VCN, and existing instances should have access control rules configured to allow only approved sources. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To configure the OCI Oracle Analytics Cloud (OAC) access, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/analytics-cloud/doc/manage-service-access-and-security.html#ACOCI-GUID-08739F8B-13EC-4194-8EEF-58664F2C1178." "```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-sql-instances-list' AND json.rule = state equals ""RUNNABLE"" and diskEncryptionConfiguration.kmsKeyName does not exist```","GCP SQL Instance not encrypted with CMEK This policy identifies GCP SQL Instances that are not encrypted with Customer Managed Encryption Keys (CMEK). Using CMEK for SQL Instances provides greater control over data at rest encryption by allowing key rotation and revocation, which enhances security and helps meet compliance requirements. Encrypting SQL Instances with CMEK ensures better data privacy management. It is recommended to use CMEK for SQL Instance encryption. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP SQL Instance encryption cannot be changed after creation. To make use of CMEK a new SQL Instance can be created.\n\nTo create a new SQL Instance with CMEK, please follow the steps below:\n1. Login to the GCP console\n2. Navigate to the 'SQL' service\n3. Click 'CREATE INSTANCE'\n4. Select the database engine\n5. Under 'Customize your instance', expand 'SHOW CONFIGURATION OPTIONS'\n6. Expand 'STORAGE'\n7. Expand 'ADVANCED ENCRYPTION OPTIONS'\n8. Select 'Cloud KMS key'\n9. Select the appropriate 'Key type' and then select the required CMEK\n10. Configure the rest of the SQL instance as required\n11. Click 'CREATE INSTANCE' at the bottom of the page." 
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ""acl.grantsAsList[?(@.grantee=='AllUsers')].permission contains ReadAcp or acl.grantsAsList[?(@.grantee=='AllUsers')].permission contains FullControl""```","AWS S3 bucket has global view ACL permissions enabled This policy determines if any S3 bucket(s) has Global View ACL permissions enabled for the All Users group. These permissions allow external resources to see the permission settings associated to the object. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Go to the AWS console S3 dashboard.\n2. Select your bucket by clicking on the bucket name.\n3. Select the Permissions tab and 'Access Control List.'\n4. Under Public Access, select Everyone.\n5. In the popup window, under Access to this bucket's ACL, uncheck 'Read bucket permissions' and Save.." "```config from cloud.resource where api.name = 'aws-rds-describe-db-instances' as X; config from cloud.resource where api.name = 'aws-ec2-describe-route-tables' AND json.rule = associations[*].subnetId exists and routes[?any( state equals active and gatewayId starts with igw- and (destinationCidrBlock equals ""0.0.0.0/0"" or destinationIpv6CidrBlock equals ""::/0""))] exists as Y; filter '$.X.dbsubnetGroup.subnets[*].subnetIdentifier intersects $.Y.associations[*].subnetId'; show X;```","AWS RDS instance not in private subnet This policy identifies AWS RDS instance which are not in a private subnet. RDS should not be deployed in a public subnet, production databases should be located behind a DMZ in a private subnet with limited access in most scenarios. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To resolve this alert, you should redeploy RDS into a private RDS Subnet group.\n\nNote: You can not move an existing RDS instance from one subnet to another.\n\nCreate a RDS Subnet group:\n\nA DB subnet group is a collection of subnets (typically private) that you create for a VPC and that you then designate for your DB instances.\n\n1. Open the Amazon RDS console\n2. In the navigation pane, choose 'Subnet groups'\n3. Choose 'Create DB Subnet Group'\n4. Type the 'Name' of your DB subnet group\n5. Add a 'Description' for your DB subnet group\n6. Choose your 'VPC'\n7. Choose 'Availability Zones'\n8. In the Add subnets section, add your Private subnets related to this VPC\n9. Choose Create\n\nWhen creating your RDS DB, under Configure advanced settings, choose the Subnet group created above.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = status equal ignore case ""available"" and snapshotRetentionLimit does not exist or snapshotRetentionLimit < 1```","AWS ElastiCache Redis cluster is not configured with automatic backup This policy identifies Amazon ElastiCache Redis clusters where automatic backup is disabled by checking if SnapshotRetentionLimit is less than 1. Amazon ElastiCache for Redis clusters can back up their data. Automatic backups in ElastiCache Redis clusters ensure data durability and enable point-in-time recovery, protecting against data loss or corruption. 
Without backups, data loss from breaches or corruption could be irreversible, compromising data integrity and availability. It is recommended to enable automatic backups to adhere to compliance requirements and enhance security measures, ensuring data integrity and resilience against potential threats. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on 'Redis caches' under the 'Resources' section\n5. Select reported Redis cluster\n6. Click on 'Modify' button\n7. In the 'Modify Cluster' dialog box, Under the 'Backup' section \na. Select 'Enable automatic backups'\nb. Select the 'Backup node ID' that is used as the daily backup source for the cluster\nc. Select the 'Backup retention period' number of days according to your business requirements for which automated backups are retained before they're automatically deleted\nd. Select the 'Backup start time' and 'Backup duration' according to your requirements\n\n8. Click on 'Preview Changes'\n9. Select the 'Yes' checkbox under 'Apply Immediately' to apply the configuration changes immediately. If Apply Immediately is not selected, the changes will be processed during the next maintenance window.\n10. Click on 'Modify'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = listeners[?any(sslPolicy contains ELBSecurityPolicy-TLS-1-0-2015-04)] exists```,"AWS Elastic Load Balancer v2 (ELBv2) SSL negotiation policy configured with weak ciphers This policy identifies Elastic Load Balancers v2 (ELBv2) which are configured with SSL negotiation policy containing weak ciphers. An SSL cipher is an encryption algorithm that uses encryption keys to create a coded message. SSL protocols use several SSL ciphers to encrypt data over the Internet. As many of the other ciphers are weak or insecure, it is recommended to use only the ciphers recommended in the following AWS link: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n4. Click on the reported Load Balancer\n5. On the 'Listeners' tab, Choose the 'HTTPS' or 'SSL' rule; Click on 'Edit', Change 'Security policy' to other than 'ELBSecurityPolicy-TLS-1-0-2015-04' as it contains DES-CBC3-SHA cipher, which is a weak cipher.\n6. Click on 'Update' to save your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```,"liron's policy #4 This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
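The ElastiCache automatic backup check above (SnapshotRetentionLimit missing or less than 1) can be reproduced outside the console. A minimal read-only sketch using boto3, assuming AWS credentials with ElastiCache describe permissions:

```python
# Sketch: flag ElastiCache Redis replication groups whose automatic backups are
# disabled, mirroring the SnapshotRetentionLimit < 1 condition in the RQL above.
import boto3

elasticache = boto3.client("elasticache")

for page in elasticache.get_paginator("describe_replication_groups").paginate():
    for group in page["ReplicationGroups"]:
        if group.get("Status") != "available":
            continue
        retention = group.get("SnapshotRetentionLimit", 0)
        if retention < 1:
            print(f"{group['ReplicationGroupId']}: automatic backup disabled "
                  f"(SnapshotRetentionLimit={retention})")
```

Remediation itself is the console flow described in the mitigation steps; the equivalent API call would be ModifyReplicationGroup with a non-zero retention period.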
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kinesis-list-streams' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.Y.keyMetadata.keyManager == AWS and $.Y.key.keyArn == $.X.keyId and $.X.encryptionType equals KMS'; show X;```,"AWS Kinesis streams encryption using default KMS keys instead of Customer's Managed Master Keys This policy identifies the AWS Kinesis streams which are encrypted with default KMS keys and not with Master Keys managed by Customer. It is a best practice to use customer managed Master Keys to encrypt your Amazon Kinesis streams data. It gives you full control over the encrypted data. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to Kinesis Service\n3. Select the reported Kinesis data stream for the corresponding region\n4. Under Server-side encryption, Click on Edit\n5. Choose Enabled\n6. Under KMS master key, You can choose any KMS other than the default (Default) aws/kinesis\n7. Click Save." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'loggingService does not exist or loggingService equals none'```,"GCP Kubernetes Engine Clusters have Cloud Logging disabled This policy identifies Kubernetes Engine Clusters which have disabled Cloud Logging. Enabling Cloud Logging will let the Kubernetes Engine to collect, process, and store your container and system logs in a dedicated persistent data store. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to 'Kubernetes Engine' (Left Panel)\n3. Select 'Clusters'\n4. From the list of clusters, click on the reported cluster\n5. Under 'Features', click on the edit button (pencil icon) in front of 'Cloud Logging'\n6. In the 'Edit Cloud Logging' dialog, enable the 'Enable Cloud Logging' checkbox\n7. Select components to be logged\n8. Click on 'Save Changes'." "```config from cloud.resource where api.name = 'aws-rds-db-cluster' AND json.rule = engine equals ""aurora-mysql"" and status equals ""available"" as X; config from cloud.resource where api.name = 'aws-rds-db-cluster-parameter-group' AND json.rule = DBParameterGroupFamily contains ""aurora-mysql"" as Y; filter '$.X.dBclusterParameterGroupArn equals $.Y.DBClusterParameterGroupArn and (($.Y.parameters.server_audit_logging.ParameterValue does not exist or $.Y.parameters.server_audit_logging.ParameterValue equals 0) or ($.X.enabledCloudwatchLogsExports does not contain ""audit"" and $.Y.parameters.server_audit_logs_upload.ParameterValue equals 0))' ; show X;```","AWS Aurora MySQL DB cluster does not publish audit logs to CloudWatch Logs This policy identifies AWS Aurora MySQL DB cluster where audit logging is disabled or audit logs are not published to Amazon CloudWatch Logs. Aurora MySQL DB cluster integrates with Amazon CloudWatch for performance metric gathering and analysis, supporting CloudWatch Alarms. While the Aurora MySQL DB cluster provides customizable audit logs for monitoring database operations, these logs are not automatically sent to CloudWatch Logs, limiting centralized monitoring and analysis of database activities. 
It is recommended to configure the Aurora MySQL DB cluster to enable audit logs and publish audit logs to CloudWatch This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a custom parameter group if the cluster has only the default parameter group use the following steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Parameter groups'\n5. Choose 'Create parameter group'\n6. The Create parameter group window appears\n\n 6a. In the 'Parameter group name' box, enter the name of the new DB cluster parameter group.\n 6b. In the 'Description' box, enter a description for the new DB cluster parameter group.\n 6c. In the 'Engine type' drop-down, select the engine type (Aurora MySQL)\n 6d. In the 'Parameter group family' list, select a DB parameter group family\n 6e. In the Type list, select 'DB cluster Parameter Group'.\n\n7. Choose 'Create'\n\nTo modify the custom DB cluster parameter group to enable audit logging, follow the below steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Parameter groups'\n5. In the list, choose the above-created parameter group or the reported resource custom parameter group that you want to modify.\n6. Choose 'Actions', and then choose 'Edit' to modify your Parameter group. \n7. Change the value of the 'server_audit_logging' parameter to '1' in the value drop-down and click 'Save Changes' for enabling audit logs.\n\nTo modify an AWS Aurora MySQL DB Cluster to use the custom parameter group, follow the below steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Databases'\n5. Choose the reported cluster that you want to associate your parameter group with. Choose 'Modify' to modify your cluster \n6. Under 'Additional configuration', select the above-created cluster parameter group from the 'DB cluster parameter group' dropdown\n7. Choose 'Continue' and check the summary of modifications\n8. Under the 'Schedule modifications' section, select 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your requirements for when to apply modifications\n9. Choose 'Modify cluster' to save your changes\n\nTo modify an AWS Aurora MySQL DB Cluster for enabling export logs to cloudwatch, follow the below steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Databases'\n5. Choose the reported cluster that you want to associate your parameter group with. Choose 'Modify' to modify your cluster\n6. 
In the 'Log exports' section, choose the 'Audit log' to start publishing to CloudWatch Logs\n7. Choose 'Continue' and check the summary of modifications\n8. Under the 'Schedule modifications' section, select 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your requirements for when to apply modifications\n9. Choose 'Modify cluster' to save your changes." ```config from cloud.resource where api.name = 'gcloud-bigquery-table' AND json.rule = encryptionConfiguration.kmsKeyName does not exist```,"GCP BigQuery Table not encrypted with CMEK This policy identifies GCP BigQuery tables that are not encrypted with Customer Managed Encryption Keys (CMEK). CMEK for BigQuery tables provides control over the encryption of data at rest. Encrypting BigQuery tables with CMEK enhances security by giving you full control over encryption keys. This ensures data protection, especially for sensitive models and predictions. CMEK allows key rotation and revocation, aligning with compliance requirements and offering better data privacy management. It is recommended to use CMEK for BigQuery table encryption. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a Customer-managed encryption key (CMEK) for BigQuery Table, use the following command with the ""bq"" utility\nbq cp -f --destination_kms_key \n\nPlease refer to URL mentioned below for more details on how to change table from default encryption to CMEK encryption:\nhttps://cloud.google.com/bigquery/docs/customer-managed-encryption#change_to_kms\n\nPlease refer to URL mentioned below for more details on the bq cp command:\nhttps://cloud.google.com/bigquery/docs/reference/bq-cli-reference#bq_cp." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case Ready and sqlEncryptionProtectors[*].kind does not exist```,"Azure SQL server Transparent Data Encryption (TDE) encryption disabled This policy identifies SQL servers in which Transparent Data Encryption (TDE) is disabled. TDE performs real-time encryption and decryption of the database, associated backups, and transaction log files without requiring any changes to the application. It is recommended to have TDE encryption on your SQL servers to protect the server from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'SQL servers'\n3. Select the reported SQL server instance you want to modify\n4. Select 'Transparent data encryption' under 'Security'\n5. Select 'Select a key'\n6. Click on 'Save'." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(21,21)""```","Alibaba Cloud Security group allow internet traffic to FTP port (21) This policy identifies Security groups that allow inbound traffic on FTP port (21) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. 
This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 21, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kinesis-list-streams' AND json.rule = 'encryptionType equals NONE or encryptionType does not exist'```,"AWS Kinesis streams are not encrypted using Server Side Encryption This Policy identifies the AWS Kinesis streams which are not encrypted using Server Side Encryption. Server Side Encryption is used to encrypt your sensitive data before it is written to the Kinesis stream storage layer and decrypted after it is retrieved from storage. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to Kinesis Service\n3. Select the reported Kinesis data stream for the corresponding region\n4. Under Server-side encryption, Click on Edit\n5. Choose Enabled\n6. Under KMS master key, You can choose any KMS other than the default (Default) aws/kinesis\n7. Click Save." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and (acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists or acl.grantsAsList[?any(grantee equals AuthenticatedUsers and permission is member of (ReadAcp,Read,FullControl))] exists)) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to Get/Read/List bucket operations on your S3 bucket if they can guess the namespace. 
S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Under ''Access Control List'', Click on ''Authenticated users group'' and uncheck all items\nc. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = networkInterfaces[*].association.publicIp exists```,"AWS EC2 instance is assigned with public IP This policy identifies the AWS EC2 instance having a public IP address assigned. AWS EC2 instances with public IPs are virtual servers hosted in the Amazon Web Services (AWS) cloud that can be accessed over the internet. Public IPs increase an EC2 instance's attack surface, necessitating robust security configurations to prevent unauthorized access and attacks. It is recommended to use private IPv4 addresses for communication between EC2 instances and disassociate the public IP address from an instance or disable auto-assign public IP addresses in the subnet. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: In a default VPC, instances get a public IP address. In a non-default VPC, the subnet configuration determines this.\n\nYou can't manually change an automatically-assigned public IP. To control public IP assignment:\n\nTo unassign the IP addresses associated with a network interface, follow the instructions here: \n\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#managing-network-interface-ip-addresses\n\nNote: If you specify an existing network interface for eth0 (the primary network interface), you can't change its public IP address settings using the auto-assign public IP feature; the subnet settings will take precedence.\n\nModify the subnet's public IP addressing attribute by following these actions: \n\n https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip\n\nIf you are using an Elastic IP, the instance is internet-reachable. 
To disassociate an Elastic IP, follow these actions: \n\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-eips-associating-different." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-service-accounts-keys-list' AND json.rule = 'disabled is false and name contains iam.gserviceaccount.com and (_DateTime.ageInDays($.validAfterTime) > 90) and keyType equals USER_MANAGED'```,"GCP User managed service account keys are not rotated for 90 days This policy identifies user-managed service account keys which have not been rotated in the last 90 days or more. Rotating Service Account keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Service Account keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen. It is recommended that all user-managed service account keys are regularly rotated. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: To fix this alert, delete the old key which is older than 90 days, and create a new key for that particular service account. To delete a user-managed Service Account Key older than 90 days:\n\n1. Login to GCP Portal\n2. Go to APIs & Services (Left Panel)\n3. Select 'Credentials' and Under section 'Service Accounts', select the service account for which we need to delete the key\n4. On the page 'Service account details' select the tab 'KEYS'\n5. Click on the delete icon for the listed key after confirming the creation date is older than 90 days\n\nTo Create a new user-managed Service Account Key for a Service Account:\n1. Login to GCP Portal\n2. Go to APIs & Services (Left Panel)\n3. Select 'Credentials' and Under the section 'Service Accounts', select the service account for which we need a key\n4. On the page 'Service account details' select the tab 'KEYS'\n5. Under 'ADD KEY' dropdown, select 'Create new key'\n6. Select desired key type format among JSON or P12\n7. Click on the CREATE button. It will download the private key. Keep it safe.\n8. Click on CLOSE if prompted. It will redirect to the APIs & Services Credentials page. Make a note of the New ID displayed in the section Service account keys with the new creation date.\n\nNOTE: Rotating the service account key might break communication for dependent applications. Dependent applications need to be configured manually with the new key ID.." 
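The service account key age condition in the policy above can also be checked with a short script. Below is a minimal sketch using the google-api-python-client IAM API, assuming Application Default Credentials; PROJECT_ID and MAX_AGE_DAYS are placeholders you would set yourself.

```python
# Sketch: list user-managed GCP service account keys older than 90 days,
# approximating the _DateTime.ageInDays(validAfterTime) > 90 check above.
from datetime import datetime, timezone
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder
MAX_AGE_DAYS = 90

iam = discovery.build("iam", "v1")
accounts = iam.projects().serviceAccounts().list(
    name=f"projects/{PROJECT_ID}").execute().get("accounts", [])

for account in accounts:
    keys = iam.projects().serviceAccounts().keys().list(
        name=f"projects/-/serviceAccounts/{account['email']}",
        keyTypes="USER_MANAGED").execute().get("keys", [])
    for key in keys:
        created = datetime.strptime(key["validAfterTime"], "%Y-%m-%dT%H:%M:%SZ")
        age = (datetime.now(timezone.utc)
               - created.replace(tzinfo=timezone.utc)).days
        if not key.get("disabled", False) and age > MAX_AGE_DAYS:
            print(f"{key['name']}: user-managed key is {age} days old")
```

Key deletion and creation are intentionally left to the console steps in the mitigation text, since rotating keys can break dependent applications.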
"```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""sysdig-monitor"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""resourceType"",""serviceInstance"",""sysdigTeam""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for Cloud Monitoring Service This policy identifies IBM Cloud Service ID, which has policy with administrator role permission for IBM Cloud Monitoring service. When a Service ID having a policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the Service ID which is reported and you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section, and click on the three dots on the right corner of a row for the policy which is having Administrator permission on 'IBM Cloud Monitoring' Service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ( ( publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist ) or ( publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false ) or ( publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false ) or ( publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist ) )AND policy.Statement[?any(Effect equals Allow and Action anyStartWith s3: and (Principal.AWS contains * or Principal equals *) and (Condition does not exist or Condition[*] is empty) )] exists```,"AWS S3 bucket policy overly permissive to any principal This policy identifies the S3 buckets that have a bucket policy overly permissive to any principal and do not have Block public and cross-account access to buckets and objects through any public bucket or access point policies enabled. It is recommended to follow the principle of least privileges ensuring that the only restricted entities have permission on S3 operations instead of any anonymous. 
For more details: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-bucket-user-policy-specifying-principal-intro.html This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to the S3 dashboard\n3. Choose the reported S3 bucket\n4. In the 'Permissions' tab, click on the 'Bucket Policy'\n5. Update the S3 bucket policy by changing the Principal containing a wildcard (*) to specific accounts, services, or IAM entities. Also restrict S3 actions to specific operations instead of using a wildcard (*).\n6. In the 'Permissions' tab, click on the 'Block public access' and enable 'Block public and cross-account access to buckets and objects through any public bucket or access point policies'." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-containers-artifacts-kubernetes-cluster-nodepool' AND json.rule = lifecycleState equal ignore case ACTIVE and (nodeConfigDetails.isPvEncryptionInTransitEnabled equal ignore case ""null"" or nodeConfigDetails.isPvEncryptionInTransitEnabled does not exist)```","OCI Kubernetes Engine Cluster boot volume is not configured with in-transit data encryption This policy identifies Kubernetes Engine Clusters that are not configured with in-transit data encryption. Configuring in-transit encryption on cluster boot volumes encrypts data in transit between the instance, the boot volume, and the block volumes. All the data moving between the instance and the block volume is transferred over an internal and highly secure network. It is recommended that cluster boot volumes be configured with in-transit data encryption to minimize the risk of sensitive data being leaked. For more details: https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/overview.htm#BlockVolumeEncryption This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Developer Services -> Kubernetes Clusters (OKE)\n3. Click on the Kubernetes Cluster you want to modify\n4. Click on 'Node pools'\n5. Click on the reported node pool\n6. On the 'Node pool details' page, click on the 'Edit' button.\n7. On the 'Edit node pool' dialog; under 'Boot volume' section, select 'Use in-transit encryption' option\n8. Click on the 'Save Changes' button.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-global-forwarding-rule' AND json.rule = globalForwardingRules[?any( target contains ""/targetHttpProxies/"" and loadBalancingScheme contains ""EXTERNAL"" )] exists```","GCP public-facing (external) global load balancer using HTTP protocol This policy identifies GCP public-facing (external) global load balancers that are using HTTP protocol. Using the HTTP protocol with a GCP external load balancer transmits data in plaintext, making it vulnerable to eavesdropping, interception, and modification by malicious actors. This lack of encryption exposes sensitive information, increases the risk of man-in-the-middle attacks, and compromises the overall security and privacy of the data exchanged between clients and servers. It is recommended to use HTTPS protocol with external-facing load balancers. This is applicable to gcp cloud and is considered a medium severity issue. 
Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to 'Network Service' and then 'Load Balancing'\n3. Click on the 'FRONTENDS' tab\n4. Identify the frontend that is using the reported forwarding rule.\n5. Click on the load balancer name associated with the frontend identified above\n6. Click 'Edit'\n7. Go to 'Frontend configuration'\n8. Delete the frontend rule that allows HTTP protocol.\n9. Add new frontend rule(s) as required. Make sure to use HTTPS protocol instead of HTTP for new rules.\n10. Click 'Update'\n11. Click 'UPDATE LOAD BALANCER' in the pop-up.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_ss_update_child_policy_finding_2 Description-30540d9e-e2ce-4d22-a7df-a5b42c08f155 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = arn contains ""E2PTZRGF0OBZQJ"" and tags[*].key contains ""test""```","eai_test_policy_demo EAI Demo policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = keys[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is false```,"Azure Key Vault Key has no expiration date (Non-RBAC Key vault) This policy identifies Azure Key Vault keys that do not have an expiration date for the Non-RBAC Key vaults. As a best practice, set an expiration date for each key and rotate your keys regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].name' | xargs -I {} az keyvault set-policy --name {} --certificate-permissions list listissuers --key-permissions list --secret-permissions list --spn This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Key vaults'\n3. Select the Key vault where the key is stored\n4. Select 'Keys', and select the key that you need to modify\n5. Select the current version\n6. Set the expiration date\n7. 'Save' your changes." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-database-maria-db-server' AND json.rule = properties.userVisibleState equals Ready and properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```,"Azure MariaDB database server not using latest TLS version This policy identifies Azure MariaDB database servers that are not using the latest TLS version for SSL enforcement. 
Azure Database for MariaDB uses Transport Layer Security (TLS) for communication with client applications. As a best security practice, use the latest TLS version as the minimum TLS version for the MariaDB database server. Currently, Azure MariaDB supports TLS 1.2, which resolves the security gaps in its preceding versions. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure SSL connection with the latest TLS version on an existing Azure Database for MariaDB, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/mariadb/howto-tls-configurations\n\nNOTE: Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-batch-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and identity does not exist or identity.type equal ignore case ""None""```","Azure Batch account is not configured with managed identity This policy identifies Batch accounts that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to assign a managed identity to your Batch account. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Batch accounts'\n3. Click on the reported Batch account\n4. Select 'Identity' under 'Settings' from left panel \n5. Configure either 'System assigned' or 'User assigned' identity\n6. Click on 'Save'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(5432,5432) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on PostgreSQL port (5432) This policy identifies GCP Firewall rules which allow all inbound traffic on PostgreSQL port (5432). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the PostgreSQL port (5432) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." 
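The firewall rule condition in the PostgreSQL (5432) policy above can be approximated with a small script against the Compute Engine API. Below is a sketch using google-api-python-client, assuming Application Default Credentials; PROJECT_ID is a placeholder, and the port-matching helper is illustrative rather than a full reimplementation of the RQL semantics.

```python
# Sketch: flag enabled ingress firewall rules that allow 0.0.0.0/0 (or ::/0)
# to reach PostgreSQL port 5432, approximating the RQL rule above.
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder
OPEN_RANGES = {"0.0.0.0/0", "::/0", "0.0.0.0", "::0", "::"}

def covers_port(allowed_entry, port=5432):
    ports = allowed_entry.get("ports")
    if ports is None:  # no ports listed means all ports for tcp/udp entries
        return allowed_entry.get("IPProtocol") in ("tcp", "udp")
    for p in ports:
        lo, _, hi = p.partition("-")
        if int(lo) <= port <= int(hi or lo):
            return True
    return False

compute = discovery.build("compute", "v1")
rules = compute.firewalls().list(project=PROJECT_ID).execute().get("items", [])
for rule in rules:
    if rule.get("disabled") or rule.get("direction") != "INGRESS":
        continue
    if not OPEN_RANGES.intersection(rule.get("sourceRanges", [])):
        continue
    if any(covers_port(a) for a in rule.get("allowed", [])):
        print(f"{rule['name']}: allows port 5432 from the internet")
```

Remediation remains the console edit described in the mitigation steps: narrow the source IP ranges to the specific addresses that need database access.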
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'hardExpiry is false'```,"Alibaba Cloud RAM password policy configured to allow login after the password expires This policy identifies Alibaba Cloud accounts that are configured to allow login after the password has expired. As a best practice, denying login after the password expires allows you to ensure that RAM users reset their password before they can access the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Action After Password Expires' field, select 'Deny Logon' radio button\n6. Click on 'OK'\n7. Click on 'Close'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'transitEncryptionEnabled is false or transitEncryptionEnabled does not exist'```,"AWS ElastiCache Redis cluster with in-transit encryption disabled (Replication group) This policy identifies ElastiCache Redis clusters that are replication groups and have in-transit encryption disabled. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and cache servers. Enabling data encryption in-transit helps prevent unauthorized users from reading sensitive data between your Redis clusters and their associated cache storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS ElastiCache Redis cluster in-transit encryption can be set, only at the time of creation of the cluster. So to resolve this alert, create a new cluster with in-transit encryption enabled, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with In-transit encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption in-transit' checkbox to enable encryption along with other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. 
In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = ingressSecurityRules[*] size equals 0```,"OCI VCN has no inbound security list This policy identifies the OCI Virtual Cloud Networks (VCN) that lack ingress rules configured in their security lists. It is recommended that Virtual Cloud Networks (VCN) security lists are configured with ingress rules which provide stateful and stateless firewall capability to control network access to your instances. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Ingress rules\n5. Click on Add Ingress Rules (To add ingress rules appropriately in the pop up)\n6. Click on Add Ingress Rules." ```config from cloud.resource where api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = status equals available and atRestEncryptionEnabled is true as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '($.X.kmsKeyId does not exist) or ($.X.kmsKeyId exists and $.Y.keyMetadata.keyState equals Disabled) and $.X.kmsKeyId equals $.Y.keyMetadata.arn'; show X;```,"AWS ElastiCache Redis cluster encryption not configured with CMK key This policy identifies ElastiCache Redis clusters that are encrypted using the default KMS key instead of Customer Managed CMK (Customer Master Key) or CMK key used for encryption is disabled. As a security best practice enabled CMK should be used instead of the default KMS key for encryption to gain the ability to rotate the key according to your own policies, delete the key, and control access to the key via KMS policies and IAM policies. For details: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html#using-customer-managed-keys-for-elasticache-security This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To encrypt your ElastiCache Redis cluster with CMK follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html#at-reset-encryption-enable-existing-cluster." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-vnet-list' AND json.rule = ['properties.virtualNetworkPeerings'][*].['properties.peeringState'] equals ""Disconnected""```","Azure virtual network peer is disconnected Virtual network peering enables you to connect two Azure virtual networks so that the resources in these networks are directly connected. This policy identifies Azure virtual network peers that are disconnected. 
Typically, the disconnection happens when a peering configuration is deleted on one virtual network, and the other virtual network reports the peering status as disconnected. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To reconnect the virtual network peers, you need to delete the 'Disconnected' peering connection and re-configure the peering connection.\n\nTo re-configure the peering connection:\n1. Log in to the Azure Portal.\n2. Select 'Virtual Networks', and select the virtual network on which has 'Disconnected' peering.\n3. Select 'Peerings'.\n4. Delete the peering with 'Disconnected' status.\n5. Select 'Add' to re-initiate the peering configuration.\n6. Specify the 'Name' and target 'Virtual Network'.\n7. Select 'OK'\n8. Verify that peering state is 'Initiated'.\n9. Repeat step 5-7 on the target/other vnet.\n10. Verify that the peering state is 'Connected'." "```config from cloud.resource where api.name = 'aws-docdb-db-cluster' AND json.rule = Status equals ""available"" as X; config from cloud.resource where api.name = 'aws-docdb-db-cluster-parameter-group' AND json.rule = parameters.audit_logs.ParameterValue is member of ( 'disabled','none') as Y; filter '($.X.EnabledCloudwatchLogsExports.member does not contain ""audit"") or $.X.DBClusterParameterGroup equals $.Y.DBClusterParameterGroupName' ; show X;```","AWS DocumentDB cluster does not publish audit logs to CloudWatch Logs This policy identifies the Amazon DocumentDB cluster where audit logging is disabled or audit logs are not published to Amazon CloudWatch Logs. DocumentDB integrates with Amazon CloudWatch for performance metric gathering and analysis, supporting CloudWatch Alarms. While DocumentDB provides customizable audit logs for monitoring database operations, these logs are not automatically sent to CloudWatch Logs, limiting centralized monitoring and analysis of database activities. It is recommended to configure the DocumentDB cluster to enable audit logs and publish audit logs to CloudWatch logs. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a custom parameter group if the cluster has only the default parameter group use the following steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. Choose 'Create'. The 'Create cluster parameter group' window appears. \n4. In the 'New cluster parameter group name', enter the name of the new DB cluster parameter group. \n5. In the 'Family' list, select a 'DB parameter group family'. \n6. In the Description box, enter a description for the new DB cluster parameter group. \n7. Click 'Create'. \n\nTo modify the custom DB cluster parameter group to enable query logging, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. In the list, choose the above-created parameter group or the reported resource custom parameter group that you want to modify. \n4. Click on the 'audit_logs' parameter and click 'Edit'. \n5. 
Change the value of the 'audit_logs' parameter to any value (ddl,dml_read,dml_write, all) other than 'disabled' or 'none' you want to modify according to your requirements. \n6. Choose 'Apply immediately' to apply the changes immediately or 'Apply during the next scheduled maintenance window' according to your requirements. \n7. Choose 'Modify cluster parameter' to modify the values. \n\nTo modify an AWS DocumentDB cluster to use the custom parameter group, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster. \n4. Scroll down to 'Cluster options', select the above-created cluster parameter group from the DB parameter group dropdown. \n5. Choose 'Continue' and check the summary of modifications. \n6. Choose 'Apply immediately' to apply the changes immediately or 'Apply during the next scheduled maintenance window' according to your requirements. \n7. On the confirmation page, review your changes. If they are correct, choose 'Modify cluster' to save your changes. \n\nWhen the value of the audit_logs cluster parameter is enabled, ddl, dml_read, or dml_write, you must also enable Amazon DocumentDB to export logs to Amazon CloudWatch. If you omit either of these steps, audit logs will not be sent to CloudWatch. \n\nTo modify an Amazon DocumentDB cluster for enabling export logs to cloudwatch, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster.\n4. Scroll down to the Log exports section, and choose 'Enable' for the 'Audit logs'.\n5. Choose 'Continue'.\n6. Choose 'Apply immediately' to apply the changes immediately or 'Apply during the next scheduled maintenance window' according to your requirements.\n7. Choose 'Modify cluster'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = status equals ""ISSUED"" and keyAlgorithm starts with ""RSA-"" and keyAlgorithm equals RSA-1024```","AWS Certificate Manager (ACM) RSA certificate key length less than 2048 This policy identifies the RSA certificates managed by AWS Certificate Manager with a key length of less than 2048 bits. AWS Certificate Manager (ACM) is a service for managing SSL/TLS certificates. RSA certificates are cryptographic keys used for securing communications over networks. Shorter key lengths may be susceptible to attacks such as brute force or factorization, where an attacker could potentially decrypt the encrypted data by finding the prime factors of the key. It is recommended that the RSA certificates imported on ACM utilise a minimum key length of 2048 bits or greater to ensure a sufficient level of security. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: You can't change the key length after importing a certificate. 
Instead, you must delete certificates with a key length smaller than 2,048 bits, and then the new RSA certificate should be imported with the desired key length.\n\nTo import the new certificate, Please refer to the below url\nhttps://docs.aws.amazon.com/acm/latest/userguide/import-certificate-api-cli.html#import-certificate-api\n\nTo delete the reported ACM RSA certificate, Please refer to the below url\n\nhttps://docs.aws.amazon.com/acm/latest/userguide/gs-acm-delete.html." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(22,22) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","PCSUP-22411 - policy This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 90) or (access_key_2_active is true and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 90)'```,"AWS access keys are not rotated for 90 days This policy identifies IAM users for which access keys are not rotated for 90 days. Access keys are used to sign API requests to AWS. As a security best practice, it is recommended that all access keys are regularly rotated to make sure that in the event of key compromise, unauthorized users are not able to gain access to your AWS services. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to the 'IAM' service.\n2. Click on the user that was reported in the alert.\n3. Click on 'Security Credentials' and for each 'Access Key'.\n4. Follow the instructions below to rotate the Access Keys that are older than 90 days.\nhttps://aws.amazon.com/blogs/security/how-to-rotate-access-keys-for-iam-users/." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-images' AND json.rule = image.public is true and image.shared is false and image.imageOwnerAlias does not exist```,"AWS Amazon Machine Image (AMI) is publicly accessible This policy identifies AWS AMIs which are owned by the AWS account and are accessible to the public. Amazon Machine Image (AMI) provides information to launch an instance in the cloud. The AMIs may contain proprietary customer information and should be accessible only to authorized internal users. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to 'EC2' service.\n2. In the navigation pane, choose AMIs.\n3. Select your AMI from the list, and then choose Actions, Modify Image Permissions.\n4. Choose Private and choose Save.." 
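The AMI remediation described above (making a publicly shared AMI private) can also be scripted. The snippet below is a minimal sketch using boto3 in Python, assuming the boto3 SDK and AWS credentials are already configured; the AMI ID and region are hypothetical placeholders to replace with the values from the alert.

```
import boto3

# Hypothetical AMI ID and region taken from the alert; replace before use.
AMI_ID = "ami-0123456789abcdef0"
REGION = "us-east-1"

ec2 = boto3.client("ec2", region_name=REGION)

# Remove the public 'all' launch permission so the AMI is no longer publicly accessible.
ec2.modify_image_attribute(
    ImageId=AMI_ID,
    LaunchPermission={"Remove": [{"Group": "all"}]},
)

# Verify the AMI is now private: an empty list means no public or cross-account access.
attrs = ec2.describe_image_attribute(ImageId=AMI_ID, Attribute="launchPermission")
print(attrs["LaunchPermissions"])
```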
```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-storage-account-table-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.id contains $.Y.properties.storageAccountId)'; show X;```,"Azure Storage Logging is not Enabled for Table Service for Read Write and Delete requests This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.changesecuritylistcompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createsecuritylist and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletesecuritylist and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatesecuritylist) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for security list changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for security list changes. Monitoring and alerting on changes to Security Lists will help in identifying changes to traffic flowing into and out of Subnets within a Virtual Cloud Network. It is recommended that an Event Rule and Notification be configured to catch changes made to the security list. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event rule at the root compartment level. 2. This policy will not trigger an alert if you have at least one matching Event Rule and Notification, regardless of whether the OCI tenancy has a single compartment or multiple compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting Network Security List – Change Compartment, Security List – Create, Security List - Delete and Security List – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = description.scheme contains internet-facing```,"AWS Classic Load Balancer is in use for internet-facing applications This policy identifies Classic Load Balancers that are being used for internet-facing HTTP/HTTPS applications. A Classic Load Balancer should be used when you have an existing application running in the EC2-Classic network. Application Load Balancers (ALB) are recommended for internet-facing HTTP/HTTPS web applications. 
For more details: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To create an Application Load Balancer (ALB), refer to:\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html\n\nOnce the Application Load Balancer is created, you can delete the reported Classic Load Balancer by,\n1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on 'Actions' and from the drop-down click on 'Delete'\n6. Click on 'Yes, Delete'." ```config from cloud.resource where api.name = 'ibm-vpc' as X; config from cloud.resource where api.name = 'ibm-vpc-flow-log-collector' as Y; filter 'not($.X.id equals $.Y.target.id)'; show X;```,"IBM Cloud VPC Flow Logs not enabled This policy identifies IBM Cloud VPCs which have flow logs disabled. VPC Flow logs capture information about IP traffic going to and from network interfaces in your VPC. Flow logs are used as a security tool to monitor the traffic that is reaching your instances. Without flow logs turned on, it is not possible to get any visibility into network traffic. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To configure a Flow log on a VPC, please follow the below URL. Please make sure to provide the target as 'VPC':\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-ordering-flow-log-collector&interface=ui\n." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'resourceUsageExportConfig.enableNetworkEgressMetering does not exist or resourceUsageExportConfig.enableNetworkEgressMetering is false'```,"GCP Kubernetes Engine Clusters not configured with network traffic egress metering This policy identifies Kubernetes Engine Clusters which are not configured with network traffic egress metering. When network traffic egress metering is enabled, a deployed DaemonSet pod meters network egress traffic by collecting data from the conntrack table, and exports the metered metrics to the specified destination. It is recommended to enable network egress metering so that you have data to track and monitor network traffic. NOTE: Measuring network egress requires a network metering agent (NMA) running on each node. The NMA runs as a privileged pod, consumes some resources on the node (CPU, memory, and disk space), and enables the nf_conntrack_acct sysctl flag on the kernel (for connection tracking flow accounting). If you are comfortable with these caveats, you can enable network egress tracking for use with GKE usage metering. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable GKE usage metering:\n\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/cluster-usage-metering#enabling." 
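The GKE usage metering enablement referenced above can also be driven from the command line. The following is a minimal Python sketch that shells out to the gcloud CLI, assuming gcloud is installed and authenticated; the cluster name, zone, and BigQuery dataset are hypothetical placeholders. The NMA resource overhead caveat noted above still applies.

```
import subprocess

# Placeholder cluster, zone, and BigQuery dataset names; replace with your own values.
CLUSTER = "example-cluster"
ZONE = "us-central1-a"
BQ_DATASET = "gke_usage_metering"

# Enable GKE usage metering with network egress metering on an existing cluster.
subprocess.run(
    [
        "gcloud", "container", "clusters", "update", CLUSTER,
        "--zone", ZONE,
        f"--resource-usage-bigquery-dataset={BQ_DATASET}",
        "--enable-network-egress-metering",
    ],
    check=True,
)
```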
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' as X; count(X) less than 1```,"Azure Monitoring log profile is not configured to export activity logs This policy identifies the Azure accounts in which at least one monitoring log profile is not configured. A Log Profile controls how your Activity Log is exported; using which you could export the logs and store them for a longer duration for analyzing security activities within your Azure account. So it is recommended to have at least one monitoring log profile in an account to export all activity logs. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a new log profile (Export to a storage account) use following command:\naz monitor log-profiles create --name --location --locations --categories ""Delete"" ""Write"" ""Action"" --enabled true --days --storage-account-id ""/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/""\n\nOR\n\nTo create a new log profile (Export to an event hub) use following command:\naz monitor log-profiles create --name --location --locations --categories ""Delete"" ""Write"" ""Action"" --enabled true --days --service-bus-rule-id ""/subscriptions//resourceGroups//providers/Microsoft.EventHub/namespaces//authorizationrules/RootManageSharedAccessKey""\n\nNOTE: Make sure before referring Storage Account or Eventhub in above CLI commands, you have already created Storage Account or Eventhub as per your requirements.." "```config from cloud.resource where api.name = 'gcloud-compute-project-info' AND json.rule = commonInstanceMetadata.kind equals ""compute#metadata"" and commonInstanceMetadata.items[?any(key contains ""enable-oslogin"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] does not exist and commonInstanceMetadata.items[?any(key contains ""ssh-keys"")] exists as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and ( metadata.items[?any(key exists and key contains ""block-project-ssh-keys"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] does not exist and metadata.items[?any(key exists and key contains ""enable-oslogin"" and (value contains ""Yes"" or value contains ""Y"" or value contains ""True"" or value contains ""true"" or value contains ""TRUE"" or value contains ""1""))] does not exist and name does not start with ""gke-"") as Y; filter '$.Y.zone contains $.X.name'; show Y;```","HD-GCP VM instances have block project-wide SSH keys feature disabled This policy identifies VM instances which have block project-wide SSH keys feature disabled. Project-wide SSH keys are stored in Compute/Project-metadata. Project-wide SSH keys can be used to login into all the instances within a project. Using project-wide SSH keys eases the SSH key management but if compromised, poses the security risk which can impact all the instances within a project. It is recommended to use Instance specific SSH keys which can limit the attack surface if the SSH keys are compromised. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Edit button\n6. Under the SSH Keys section, check the 'Block project-wide SSH keys' checkbox\n7. Click on Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-security-groups' AND json.rule = '((groupName == default) and (ipPermissions[*] is not empty or ipPermissionsEgress[*] is not empty))'```,"AWS Default Security Group does not restrict all traffic This policy identifies the default security groups which do not restrict inbound and outbound traffic. A VPC comes with a default security group whose initial configuration denies all inbound traffic and allows all outbound traffic. If you do not specify a security group when you launch an instance, the instance is automatically assigned to this default security group. As a result, the instance may accidentally send outbound traffic. It is recommended to remove any inbound and outbound rules from the default security group and not to attach the default security group to any resources. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services.\n\nFor Resources associated with the alerted security group:\n1. Identify AWS resources that exist within the default security group\n2. Create a set of least privilege security groups for those resources\n3. Place the resources in those security groups\n4. Remove the associated resources from the default security group\n\nFor alerted Security Groups:\n1. Log in to the AWS console\n2. In the console, select the specific region from the 'Region' drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'VPC' service\n4. For each region, Click on 'Security Groups' specific to the alert\n5. On section 'Inbound rules', Click on 'Edit Inbound Rules' and remove the existing rule, click on 'Save'\n6. On section 'Outbound rules', Click on 'Edit Outbound Rules' and remove the existing rule, click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule = publiclyAccessible is true and masterUsername is member of (""awsuser"",""administrator"",""admin"")```","AWS Redshift cluster with commonly used master username and public access setting enabled This policy identifies AWS Redshift clusters configured with commonly used master usernames like 'awsuser', 'administrator', or 'admin', and the public access setting is enabled. AWS Redshift, a managed data warehousing service, typically stores sensitive and critical data. Allowing public access increases the risk of unauthorized access, data breaches, and potential malicious activities. Using standard usernames increases the risk of password brute-force attacks by potential intruders. As a recommended security measure, it is advised not to use commonly used usernames and to disable public access for the Redshift cluster. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Changing the default master user name for your existing Amazon Redshift clusters requires relaunching those clusters with a different master user name and migrating the existing data to the new clusters.\n\nTo launch the new Redshift database clusters,\n1. Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.\n2. On the navigation menu, choose 'Clusters'. The clusters for your account in the current AWS Region are listed. A subset of the properties of each cluster is displayed in columns in the list.\n3. Choose 'Create cluster' to create a cluster.\n4. Follow the instructions on the console page to enter the properties for Cluster configuration.\n5. In the 'Database configuration' section, type a unique (non-default) user name within the 'Master user name' field.\n6. Under 'Additional configurations', in the 'Network and security' dropdown, ensure the 'Turn on Publicly accessible' checkbox in the 'Publicly accessible' section is unchecked.\n7. Fill out the rest of the fields available on this page with the information taken from the existing cluster.\n8. Choose 'Create cluster' to create the cluster. The cluster might take several minutes to be ready to use.\n9. Once the Cluster Status value changes to available and the DB Health status changes to healthy, the new cluster can be used to load the existing data from the old cluster.\n10. Once the data migration process is completed, all the data is loaded into the new Redshift cluster, and all applications are configured to use the new cluster, delete the old cluster.\n\nTo delete the existing cluster, refer to the below link.\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#delete-cluster." "```config from cloud.resource where api.name = 'azure-machine-learning-workspace' AND json.rule = properties.keyVault exists as X; config from cloud.resource where api.name = 'azure-key-vault-list' AND json.rule = ""not (diagnosticSettings.value[*].properties.logs[*].enabled any equal true and diagnosticSettings.value[*].properties.logs[*].enabled size greater than 0)"" as Y; filter '$.X.properties.keyVault contains $.Y.name'; show Y;```","Azure Key vault used for machine learning workspace secrets storage is not enabled with audit logging This policy identifies Azure Key vaults used for machine learning workspace secrets storage that are not enabled with audit logging. Azure Key vaults are used to store machine learning workspace secrets and other sensitive information that is needed by the workspace. Enabling key vaults with audit logging will help in monitoring how and when machine learning workspace secrets are accessed, and by whom. This audit log data enhances visibility by providing valuable insights into the trail of interactions involving confidential information. As a best practice, it is recommended to enable audit event logging for key vaults used for machine learning workspace secrets storage. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Select 'Key vaults'\n3. Select the key vault instance to modify\n4. Select 'Diagnostic settings' under 'Monitoring'\n5. Click on '+Add diagnostic setting'\n6. In the 'Diagnostic setting' page, Select the Logs, Metrics and Destination details as per your business requirements.\n7. 
Click on 'Save'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-external-backend-service' AND json.rule = backends exists and ( protocol equal ignore case ""HTTP"" or protocol equal ignore case ""HTTPS"" or protocol equal ignore case ""HTTP2"" ) and ( logConfig.enable does not exist or logConfig.enable is false )```","GCP External Load Balancer logging is disabled This policy identifies GCP External Load Balancers using any of the protocols like HTTP, HTTPS, and HTTP/2 having logging disabled. GCP external load balancers distribute incoming traffic across multiple instances or services hosted on Google Cloud Platform. Feature \""logging\"" for external load balancers captures and records detailed information about the traffic flowing through the load balancers. This includes data such as incoming requests, responses, errors, latency metrics, and other relevant information. By enabling logging for external load balancers, you gain visibility into the performance, health, and security of the applications. Logged data comes handy for troubleshooting an incident, monitoring, analysis, and compliance purposes. It is recommended to enable logging for all external load balancers. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console.\n2. Navigate to 'Network Services' and select 'Load Balancing' from the left panel.\n3. Click on 'BACKENDS'.\n4. Click on the load balancer link under the 'Load balancer' column for the reported backend service.\n5. On the Load Balancer details page, click on 'EDIT'.\n6. Click on 'Backend configuration', and then click the edit icon next to the reported backend service under the 'Backend services' section.\n7. Under 'Logging', select 'Enable logging' checkbox.\n8. Choose the appropriate Sample rate.\n9. To finish editing the backend service, click 'UPDATE'.\n10. To finish editing the load balancer, click 'UPDATE'.." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.managedNetwork.isolationMode equal ignore case Disabled OR properties.managedNetwork.isolationMode does not exist)```,"Azure Machine Learning workspace not enforced with Managed Virtual Network Isolation This policy identifies Azure Machine Learning workspaces that are not enforced with Managed Virtual Network Isolation. Managed Virtual Network Isolation ensures that the workspace and its resources are accessible only within a secure virtual network. Without enforcing this isolation, the environment becomes vulnerable to security risks like external threats, data leaks, and non-compliance. If not properly isolated, the workspace may be exposed to public networks, increasing the chances of unauthorized access and data breaches. As a security best practice, it is recommended to configure Azure Machine Learning workspaces with Managed Virtual Network Isolation. This will restrict network access to the workspace and ensure that it can only be accessed from authorized networks. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Note: To update an existing Azure Machine Learning workspace to use a managed virtual network, you first need to delete all its compute resources, including compute instances, compute clusters, and managed online endpoints.\n\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the reported Azure Machine Learning Workspace\n4. Under 'Settings' go to 'Networking' section\n5. At the top, select the 'Workspace managed outbound access' tab\n6. Choose either 'Allow Internet Outbound' or 'Allow Only Approved Outbound' based on your needs\n7. Configure the workspace outbound rules according to your requirements\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"Copy of build information This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals ""0.0.0.0/0"" and direction equals ""inbound"" and ( protocol equals ""all"" or ( protocol equals ""tcp"" and ( port_max greater than 3389 and port_min less than 3389 ) or ( port_max equals 3389 and port_min equals 3389 ))))] exists```","IBM Cloud Security Group allow all traffic on RDP port (3389) This policy identifies IBM Cloud Security groups that allow all traffic on RDP port 3389. Doing so may allow a bad actor to brute-force their way into the system and potentially gain access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under 'Rules' tab\n5. Click on the three dots on the right corner of the row containing the rule that has 'Source type' as 'Any' and 'Value' as 3389 (or a range containing 3389)\n6. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].['osDisk'].['vhd'].['uri'] exists ```,"Azure Virtual Machines are not utilising Managed Disks This policy identifies Azure Virtual Machines which are not utilising Managed Disks. Using Azure Managed Disks over traditional BLOB-based VHDs has several advantages: Managed Disks are encrypted by default, reduce cost compared to storage accounts, and are more resilient because Microsoft manages the disk storage and moves it around if the underlying hardware goes faulty. It is recommended to move BLOB-based VHDs to Managed Disks. 
This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'Virtual Machines' from the left pane\n3. Select the reported virtual machine\n4. Select 'Disks' under 'Settings'\n5. Click on 'Migrate to managed disks'\n6. Select 'Migrate'." "```config from cloud.resource where api.name = 'aws-elasticbeanstalk-environment' AND json.rule = status does not equal ""Terminated"" as X; config from cloud.resource where api.name = 'aws-elasticbeanstalk-configuration-settings' AND json.rule = configurationSettings[*].optionSettings[?any( optionName equals ""ManagedActionsEnabled"" and namespace equals ""aws:elasticbeanstalk:managedactions"" and value equals ""false"")] exists as Y; filter ' $.X.environmentName equals $.Y.configurationSettings[*].environmentName and $.X.applicationName equals $.Y.configurationSettings[*].applicationName'; show X;```","AWS Elastic Beanstalk environment managed platform updates are not enabled This policy identifies AWS Elastic Beanstalk environments where managed platform updates are not enabled. Elastic Beanstalk is a platform as a service (PaaS) product from Amazon Web Services (AWS) that provides automated application deployment and scaling features. Enabling managed platform updates ensures that the latest available platform fixes, updates, and features for the environment are installed. Without managed updates, users must apply updates manually, risking missed critical updates and potential security vulnerabilities. This can result in high-severity security risks, loss of data, and possible system downtime. It is recommended to enable managed platform updates, which is crucial for the overall security and performance of the applications running on the platform. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure managed platform updates for the Elastic Beanstalk environment, perform the following actions\n\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to 'Elastic Beanstalk' service\n4. In the navigation pane, choose 'Environments', then select the reported environment's name from the list\n5. In the navigation pane, choose Configuration\n6. In the 'Updates, monitoring, and logging' configuration category, choose Edit\n7. Under 'Managed platform updates' section, Enable Managed updates by selecting the 'Activated' checkbox\n8. If managed updates are enabled, select a maintenance window, and then select an 'Update level' according to your business requirements\n9. To save the changes choose 'Apply' at the bottom of the page." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = isMasterVersionSupported exists AND isMasterVersionSupported does not equal ""true""```","GCP GKE unsupported Master node version This policy identifies the GKE master node version and generates an alert if the version running is unsupported. 
Using an unsupported version of Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) can lead to several potential issues and risks, such as security vulnerabilities, compatibility issues, performance and stability problems, and compliance concerns. To mitigate these risks, it's crucial to regularly update the GKE clusters to supported versions recommended by Google Cloud. As a security best practice, it is always recommended to use the latest version of GKE. Note: This Policy is in line with the GCP GKE release version schedule https://cloud.google.com/kubernetes-engine/docs/release-schedule#schedule-for-release-channels This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Manually initiate a master upgrade:\n\n1. Visit the Google Kubernetes Engine Clusters menu in Google Cloud Platform Console.\n2. Click the desired cluster name.\n3. Under Cluster basics, click ""Upgrade Available"" next to Version.\n4. Select the desired version, then click Save Changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = notebookInstanceStatus equals InService and subnetId does not exist```,"AWS SageMaker notebook instance is not placed in VPC This policy identifies SageMaker notebook instances that are not placed inside a VPC. It is recommended to place your SageMaker notebook instance inside a VPC so that only VPC resources are able to access your SageMaker data, which then cannot be accessed from outside the VPC network. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/process-vpc.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: An AWS SageMaker notebook instance cannot be placed in a VPC after it is created. You need to create a new notebook instance placed in a VPC and migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a new AWS SageMaker notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, within the 'Network' section,\nFrom the 'VPC – optional' dropdown list, select the VPC where you want to deploy a new SageMaker notebook instance.\n5. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete the reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu, select the 'Stop' option, and when the instance stops, select the 'Delete' option.\n5. Within the Delete dialog box, click the Delete button to confirm the action.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = iamInstanceProfile.arn does not exist and state.code equals 16```,"AWS EC2 Instance IAM Role not enabled AWS provides Identity Access Management (IAM) roles to securely access AWS services and resources. The role is an identity with permission policies that define what the identity can and cannot do in AWS. 
As a best practice, create IAM roles and attach the role to manage EC2 instance permissions securely instead of distributing or sharing keys or passwords. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The most common setup is the AWS default that allows for EC2 access to AWS Services. For most, this is a great way to realize flexible, yet secure, EC2 access enabled for your instances. Select this when you launch EC2 instances to automatically inherit these permissions.\n\nIAM\n1. Go to the AWS console IAM dashboard.\n2. In the navigation pane, choose Roles, Create new role.\n3. Under 'Choose the service that will use this role' select EC2, then 'Next:Permissions.'\n4. On the Attach permissions policies page, select an AWS managed policy that grants your instance access to the resources that they need, then 'Next:Tags.'\n5. Add tags (optional), the select 'Next:Review.'\n6. On the Create role and Review page, type a name for the role and choose Create role.\n\nEC2\n1. Go to the AWS console EC2 dashboard.\n2. Select Running Instances.\n3. Check the instance you want to modify.\n4. From the Actions pull down menu, select Instance Settings and Attach/Replace IAM Role.\n5. On the Attach/Replace IAM Role page, under the IAM role pull down menu, choose the role created in the IAM steps above.." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains CreateNetworkAcl and $.X.filterPattern contains CreateNetworkAclEntry and $.X.filterPattern contains DeleteNetworkAcl and $.X.filterPattern contains DeleteNetworkAclEntry and $.X.filterPattern contains ReplaceNetworkAclEntry and $.X.filterPattern contains ReplaceNetworkAclAssociation) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for Network Access Control Lists (NACL) changes This policy identifies the AWS regions which do not have a log metric filter and alarm for Network Access Control Lists (NACL) changes. Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed. It is recommended that a metric filter and alarm be established for changes made to NACLs. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. 
Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and settings.databaseFlags[?any(name contains log_hostname and value contains on)] exists""```","GCP PostgreSQL instance database flag log_hostname is not set to off This policy identifies PostgreSQL database instances in which database flag log_hostname is not set to off. Logging hostnames can incur overhead on server performance as for each statement logged, DNS resolution will be required to convert IP address to hostname. It is recommended to set log_hostname as off. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_hostname' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_hostname' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/publicIPAddresses/delete"" as X; count(X) less than 1```","Azure Activity log alert for Delete public IP address rule does not exist This policy identifies the Azure accounts in which activity log alert for Delete public IP address rule does not exist. Creating an activity log alert for Delete public IP address rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity. By enabling this monitoring, you get alerts whenever any deletions are made to public IP addresses rules. As a best practice, it is recommended to have an activity log alert for Delete public IP address rule to enhance network security monitoring and detect suspicious activities. 
This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Public Ip Address (Public Ip Address)' and Other fields you can set based on your custom settings.\n6. Click on Create." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = policy.Statement[?any(Effect equals Deny and Action equals s3:* and (Principal.AWS equals * or Principal equals *) and Condition.Bool.aws:SecureTransport contains false )] does not exist```,"AWS S3 bucket policy does not enforce HTTPS request only This policy identifies AWS S3 bucket having a policy that does not enforce only HTTPS requests. Enforcing the S3 bucket to accept only HTTPS requests would prevent potential attackers from eavesdropping on data in-transit or manipulating network traffic using man-in-the-middle or similar attacks. It is highly recommended to explicitly deny access to HTTP requests in S3 bucket policy. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to the S3 dashboard\n3. Choose the reported S3 bucket\n4. In the 'Permissions' tab, Click on 'Edit' under 'Bucket policy'\n5. To update S3 bucket policy to enforce HTTPS request only, follow the below URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case ""/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace""```","bboiko test 04 - policy This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'webACLId is empty'```,"AWS CloudFront web distribution with AWS Web Application Firewall (AWS WAF) service disabled This policy identifies Amazon CloudFront web distributions which have the AWS Web Application Firewall (AWS WAF) service disabled. As a best practice, enable the AWS WAF service on CloudFront web distributions to protect against application layer attacks. To block malicious requests to your Cloudfront Content Delivery Network, define the block criteria in the WAF web access control list (web ACL). This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button\n5. On 'Edit Distribution' page, Choose a 'AWS WAF Web ACL' from dropdown.\n6. 
Click on 'Yes, Edit'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-lb-list' AND json.rule = diagnosticSettings.value[*] size equals 0```,"Azure Load Balancer diagnostics logs are disabled Azure Load Balancers provide different types of logs related to alert events, health probe and metrics to help you manage and troubleshoot issues. This policy identifies Azure Load Balancers that have diagnostics logs disabled. As a best practice, enable diagnostic logs to start collecting the data available through these logs. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Diagnostic logs are not supported for Azure Load Balancers in the Basic SKU.\nPlease create a new Load Balancer with the Standard SKU\nOR\nTo upgrade a Basic SKU Load Balancer to Standard SKU, follow the steps provided in the link below,\nhttps://docs.microsoft.com/en-us/azure/load-balancer/upgrade-basic-standard\n\nFor an Azure Load Balancer with the Standard SKU, follow the below steps,\n1. Log in to the Azure portal.\n2. Navigate to 'Load Balancers', and select the reported load balancer from the list\n3. Select 'Diagnostic settings' under 'Monitoring' section\n4. Click on '+Add diagnostic setting'\n5. Specify a 'Diagnostic settings name',\n6. Under 'Category details' section, select the type of 'Log' that you want to enable\n7. Under section 'Destination details',\na. If you select 'Send to Log Analytics', select the 'Subscription' and 'Log Analytics workspace'\nb. If you set 'Archive to storage account', select the 'Subscription', 'Storage account' and set the 'Retention (days)'\nc. If you set 'Stream to an event hub', select the 'Subscription', 'Event hub namespace', 'Event hub name' and set the 'Event hub policy name'\n8. Click on 'Save'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(110,110) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on POP3 port (110) This policy identifies GCP Firewall rules which allow all inbound traffic on POP3 port (110). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the POP3 port (110) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." 
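A minimal Python sketch of the POP3 firewall remediation above, shelling out to the gcloud CLI: it finds enabled ingress rules open to 0.0.0.0/0 that allow port 110 and narrows their source range. The trusted CIDR range is a hypothetical placeholder, the port check is simplified (it ignores port ranges such as '100-200'), and gcloud is assumed to be installed and authenticated.

```
import json
import subprocess

# Placeholder trusted range; replace with the specific IP addresses that need POP3 access.
TRUSTED_RANGE = "203.0.113.0/24"

# List firewall rules as JSON via the gcloud CLI.
out = subprocess.run(
    ["gcloud", "compute", "firewall-rules", "list", "--format=json"],
    check=True, capture_output=True, text=True,
).stdout

for rule in json.loads(out):
    open_to_world = "0.0.0.0/0" in rule.get("sourceRanges", [])
    # Simplified check: looks only for an exact '110' entry, not port ranges.
    allows_pop3 = any("110" in a.get("ports", []) for a in rule.get("allowed", []))
    if not rule.get("disabled", False) and open_to_world and allows_pop3:
        # Narrow the source range instead of leaving the rule open to the internet.
        subprocess.run(
            ["gcloud", "compute", "firewall-rules", "update", rule["name"],
             f"--source-ranges={TRUSTED_RANGE}"],
            check=True,
        )
```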
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = advancedSecurityOptions.enabled is false and advancedSecurityOptions.internalUserDatabaseEnabled is false```,"AWS OpenSearch Fine-grained access control is disabled This policy identifies AWS OpenSearch which has Fine-grained access control disabled. Fine-grained access control offers additional ways of controlling access to your data on AWS OpenSearch Service. It is highly recommended enabling fine-grained access control to protect the data on your domain. For more information, please follow the URL given below, https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer the following URL for configuring Fine-grained access control on your AWS OpenSearch:\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html#fgac-forget\n\nNotes: \n1. You can't enable fine-grained access control on existing domains, only new ones. After you enable fine-grained access control, you can't disable it.\n2. Fine-grained access control is supported only from ElasticSearch 6.7 or later. To upgrade older versions of AWS OpenSearch please refer to the URL given below,\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/version-migration.html." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.privateEndpointConnections[*] is empty```,"Azure Cognitive Services account not configured with private endpoint This policy identifies Azure Cognitive Services accounts that are not configured with private endpoint. Private endpoints in Azure AI service resources allow clients on a virtual network to securely access data over Azure Private Link. Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses which includes IP addresses within Azure. It is recommended to create private endpoint for secure communication for your Cognitive Services account. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Azure AI services'\n3. Click on the reported Azure AI service\n4. Configure Private endpoint connections under 'Networking' from left panel." ```config from cloud.resource where api.name = 'aws-apigateway-get-stages' AND json.rule = methodSettings.[*].loggingLevel does not exist or methodSettings.[*].loggingLevel equal ignore case off as X; config from cloud.resource where api.name = 'aws-apigateway-get-rest-apis' as Y; filter ' $.X.restApi equal ignore case $.Y.id '; show Y;```,"AWS API Gateway REST API execution logging disabled This policy identifies AWS API Gateway REST API's that have disabled execution logging in their stages. AWS API Gateway REST API is a service for creating and managing RESTful APIs integrated with backend services like Lambda and HTTP endpoints. Execution logs log all the API activity logs to CloudWatch, which helps in incident response, security and compliance, troubleshooting, and monitoring. 
It is recommended to enable logging on the API Gateway REST API to track API activity. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable execution logging on API Gateway Rest API, follow the below steps:\n\n1. Sign in to the AWS console. Navigate to the API Gateway dashboard\n2. Under the navigation page, select the 'APIs'\n3. Select the REST API reported; under the navigation page, select 'Stages'\n4. Select a stage and click on 'Edit' under the 'Logs and tracing' section\n5. Under the 'Edit logs and tracing' page, select a value other than 'Off' under the 'CloudWatch logs' dropdown.\n6. Click on 'Save'.\n7. Repeat this process for all the stages of the reported REST API.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = ""state.code contains active and ['attributes'].['access_logs.s3.enabled'] contains false""```","AWS Elastic Load Balancer v2 (ELBv2) with access log disabled This policy identifies Elastic Load Balancers v2 (ELBv2) which have access log disabled. Access logs capture detailed information about requests sent to your load balancer and each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. Click on 'Actions' drop-down\n7. Click on 'Edit attributes'\n8. In the 'Edit load balancer attributes' popup box, Choose 'Enable' for 'Access logs' and configure S3 location where you want to store ELB logs.\n9. Click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-get-credential-report' AND json.rule='user does not equal """" and password_enabled equals true and mfa_active is false'```","AWS MFA not enabled for IAM users This policy identifies AWS IAM users for whom MFA is not enabled. AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. Multiple factors provide increased security for your AWS account settings and resources. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS and navigate to the 'IAM' service.\n2. Navigate to the user that was reported in the alert.\n3. Under 'Security Credentials', check ""Assigned MFA Device"" and follow the instructions to enable MFA for the user.." 
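The MFA check above can be reproduced with a small read-only boto3 script. The sketch below assumes AWS credentials with IAM read permissions are configured; it flags console users (those with a login profile) that have no MFA device attached and makes no changes.

```
import boto3

iam = boto3.client("iam")

# Walk all IAM users and report console users without an MFA device.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)  # Raises if the user has no console password.
        except iam.exceptions.NoSuchEntityException:
            continue
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"MFA not enabled for console user: {name}")
```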
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(1521,1521) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on Oracle DB port (1521) This policy identifies GCP Firewall rules which allow all inbound traffic on DB port (1521). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the DB port (1521) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'legacyAbac.enabled equals true'```,"GCP Kubernetes Engine Clusters have Legacy Authorization enabled This policy identifies GCP Kubernetes Engine Clusters which have enabled legacy authorizer. The legacy authorizer in Kubernetes Engine grants broad and statically defined permissions to all cluster users. After legacy authorizer setting is disabled, RBAC can limit permissions for authorized users based on need. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Under 'Security', click on edit button (Pencil Icon) for Legacy authorization\n6. Uncheck 'Enable legacy authorization' checkbox\n7. Click on Save Changes." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"AWS CloudFront distributions does not have a default root object configured This policy identifies list of CloudFront distributions which does not have default root object configured. If a CloudFront distribution does not have a default root object configured, requests for the root of your distribution pass to your origin server which might return a list of the private contents of your origin. To avoid exposing the contents of your distribution or returning an error it is recommended to specify a default root object. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a default root object for your distribution follow the steps mentioned in below URL:\nhttps://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html#DefaultRootObjectHowToDefine." 
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = vulnerabilityAssessments[*].properties.storageContainerPath exists and vulnerabilityAssessments[*].properties.recurringScans.isEnabled is false```,"Azure SQL Server ADS Vulnerability Assessment Periodic recurring scans is disabled This policy identifies Azure SQL Server which has ADS Vulnerability Assessment Periodic recurring scans disabled. Advanced Data Security - Vulnerability Assessment 'Periodic recurring scans' schedules periodic vulnerability scanning for the SQL server and Databases. It is recommended to enable ADS - VA Periodic recurring scans which provides risk visibility based on updated known vulnerability signatures and best practices. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n5. Set 'Periodic recurring scans' to 'ON' under 'VULNERABILITY ASSESSMENT SETTINGS'\n6. 'Save' your changes." ```config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = webACLId is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-global-web-acl-resource' AND json.rule =(webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.webACLId'; show X;```,"cloneAWS CloudFront attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS CloudFront attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, CloudFront attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer below URL, https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. Note down the associated AWS WAF web ACL\n6. Go to the noted WAF web ACL in AWS WAF & Shield Service\n7. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n8. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n9. Click on 'Add rules'." 
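The Log4j managed-rule check above can be approximated with a short boto3 sketch. This is a simplified, illustrative detection only: it inspects ManagedRuleGroupStatement rules and does not cover Firewall Manager rule groups, which the full query also considers. CLOUDFRONT-scope WAFv2 calls must be made against us-east-1.

```python
# Illustrative check: flag CloudFront-scope WAFv2 web ACLs missing either of the two
# AWS managed rule groups recommended for the Log4j advisory.
import boto3

REQUIRED = {"AWSManagedRulesKnownBadInputsRuleSet", "AWSManagedRulesAnonymousIpList"}

wafv2 = boto3.client("wafv2", region_name="us-east-1")

for summary in wafv2.list_web_acls(Scope="CLOUDFRONT")["WebACLs"]:
    acl = wafv2.get_web_acl(Name=summary["Name"], Scope="CLOUDFRONT", Id=summary["Id"])["WebACL"]
    managed = {
        rule["Statement"]["ManagedRuleGroupStatement"]["Name"]
        for rule in acl.get("Rules", [])
        if "ManagedRuleGroupStatement" in rule.get("Statement", {})
    }
    missing = REQUIRED - managed
    if missing:
        print(f"{acl['Name']} is missing managed rule groups: {sorted(missing)}")
```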
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = 'backupRetentionPeriod equals 0 or backupRetentionPeriod does not exist'```,"AWS RDS instance without Automatic Backup setting This policy identifies RDS instances which are not set with the Automatic Backup setting. If Automatic Backup is set, RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases which provide for point-in-time recovery. The automatic backup will happen during the specified backup window time and keeps the backups for a limited period of time as defined in the retention period. It is recommended to set Automatic backups for your critical RDS servers that will help in the data restoration process. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS console\n4. Choose Instances, and then select the reported DB instance\n5. On 'Instance Actions' drop-down list, choose 'Modify'\n6. In 'Backup' section,\na. From the 'Backup Retention Period' drop-down list, select the number of days you want RDS should retain automatic backups of this DB instance\nb. Choose 'Start Time' and 'Duration' in 'Backup window' which is the daily time range (in UTC) during which automated backups created\n7. Click on 'Continue'\n8. On the confirmation page, choose 'Modify DB Instance' to save your changes." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy wvpvq This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and config.http20Enabled equals false'```,"Azure App Service Web app doesn't use HTTP 2.0 HTTP 2.0 has additional performance improvements on the head-of-line blocking problem of old HTTP version, header compression, and prioritization of requests. HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Platform settings', Set 'HTTP version' to '2.0'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = '(status.recording is true and status.lastStatus equals SUCCESS) and (recordingGroup.allSupported is false or recordingGroup.includeGlobalResourceTypes is false)'```,"AWS Config must record all possible resources This policy identifies resources for which AWS Config recording is enabled but recording for all possible resources are disabled. 
AWS Config provides an inventory of your AWS resources and a history of configuration changes to these resources. You can use AWS Config to define rules that evaluate these configurations for compliance. Hence, it is important to enable this feature. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS and navigate to the 'Config' service\n2. Change to the respective region and in the navigation pane, click on 'Settings'\n3. Review the 'All resources' and Check the 2 options (3.a and 3.b)\n3.a Record all resources supported in this region\n3.b Include global resources (e.g., AWS IAM resources)." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = agentConfig.isMonitoringDisabled is true```,"OCI Compute Instance has monitoring disabled This policy identifies the OCI Compute Instances that are configured with Monitoring disabled. It is recommended that Compute Instances should be configured with monitoring is enabled following security best practices. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Metrics.\n5. Click Enable monitoring. (If monitoring is not enabled (and the instance uses a supported image), then a button is available to enable monitoring.)\n\nFMI : https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/enablingmonitoring.htm#ExistingEnabling." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-activity-tracker-route' AND json.rule = rules[?any( (locations[*] equal ignore case ""global"") or (locations[*] equals ""*"") )] exists as X; count(X) less than 1```","IBM Cloud Activity Tracker Event Routing is not configured to collect global events This policy identifies IBM Cloud Accounts which does not have at-least one Activity tracker event route defined to collect global event's data. Activity tracker event route configured with global events collects all the global services' event data and will be sent to the target configured, which can be used for access pattern analysis from security perspective. It is recommended to define at-least one route with location set to global. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To configure an Activity tracker route to collect global events, please follow the below URL. Please make sure to provide location value either as 'global' or '*' to make the route collect global service events.:\n\nhttps://cloud.ibm.com/docs/atracker?topic=atracker-route_v2&interface=cli#route-create-cli." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule ='loggingStatus.loggingEnabled is false'```,"cloned copy - RLP-93423 - 1 Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift creates and uploads logs to Amazon S3 that capture data from the creation of the cluster to the present time. 
This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console.\n2. Goto Amazon Redshift service\n3. On left navigation panel, click on Clusters\n4. Click on the reported cluster\n5. Click on Database tab and choose 'Configure Audit Logging'\n6. On Enable Audit Logging, choose 'Yes'\n7. Create a new s3 bucket or use an existing bucket\n8. click Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-snapshots' AND json.rule = 'snapshot.status equals available and snapshot.encrypted is false'```,"AWS RDS DB snapshot is not encrypted This policy identifies AWS RDS DB (Relational Database Service Database) cluster snapshots which are not encrypted. It is highly recommended to implement encryption at rest when you are working with production data that have sensitive information, to protect from unauthorized access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: You can encrypt a copy of an unencrypted snapshot. This way, you can quickly add encryption to a previously unencrypted DB instance.\nFollow below steps to encrypt a copy of an unencrypted snapshot:\n1. Log in to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'RDS' dashboard from 'Services' dropdown.\n4. Click on 'Snapshot' from left menu.\n5. Select the alerted snapshot\n6. From 'Action' dropdown, select 'Copy Snapshot'\n7. In 'Settings' section, from 'Destination Region' select a region,\n8. Provide an identifier for the new snapshot in field 'New DB Snapshot Identifier'\n9.In 'Encryption' section, select 'Enable Encryption'\n10. Select a master key for encryption from the dropdown 'Master key'.\n11. Click on 'Copy Snapshot'.\n\nThe source snapshot needs to be removed once the copy is available.\nNote: If you delete a source snapshot before the target snapshot becomes available, the snapshot copy may fail. Verify that the target snapshot has a status of AVAILABLE before you delete a source snapshot.." "```config from cloud.resource where api.name = 'aws-vpc-transit-gateway' AND json.rule = isShared is false and options.autoAcceptSharedAttachments exists and options.autoAcceptSharedAttachments equal ignore case ""enable""```","AWS Transit Gateway auto accept vpc attachment is enabled This policy identifies if Transit Gateways are automatically accepting shared VPC attachments. When this feature is enabled, the Transit Gateway automatically accepts any VPC attachment requests from other AWS accounts without requiring explicit authorization or verification. This can be a security risk, as it may allow unauthorized VPC attachments to connect to the Transit Gateway. As per the best practices for authorization and authentication, it is recommended to turn off the AutoAcceptSharedAttachments feature. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify a transit gateway Auto accept shared attachments:\n\n 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.\n 2. On the navigation pane, choose Transit Gateways.\n 3. 
Choose the transit gateway to modify.\n 4. Under the ‘Actions' dropdown, choose the ‘Modify transit gateway’ option.\n 5. On the 'Modify transit gateway' page, uncheck the 'Auto accept shared attachments' checkbox under the 'Configure cross-account sharing options' section.\n 6. Click 'Modify transit gateway' to update the changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'domainValidationOptions[*].domainName contains *'```,"AWS ACM Certificate with wildcard domain name This policy identifies ACM Certificates which are using wildcard certificates for wildcard domain name instead of single domain name certificates. ACM allows you to use an asterisk (*) in the domain name to create an ACM Certificate containing a wildcard name that can protect several sites in the same domain. For example, a wildcard certificate issued for *.prismacloud.io can match both www.prismacloud.io and images.prismacloud.io. When you use wildcard certificates, if the private key of a certificate is compromised, then all domain and subdomains that use the compromised certificate are potentially impacted. So it is recommended to use single domain name certificates instead of wildcard certificates to reduce the associated risks with a compromised domain or subdomain. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: To resolve this alert, you have to replace the reported wildcard certificate with single domain name certificate for all the first-level subdomains resulted from the domain name of the website secured with the wildcard certificate and delete the reported wildcard domain certificate.\n\nTo create a new certificate with a single domain:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Certificate Manager\n4. In 'Request a certificate' page,\na. On Step 1: 'Add domain names' page, in the 'Domain name' box, type the fully qualified domain name. Click on 'Next'\nb. On Step 2: 'Select validation method' page, Select the validation method. Click on 'Review'\nc. On Step 3: 'Review' page, review the domain name and validation method details. click on 'Confirm'\nd. On Step 4: 'Validation' page, validate the certificate request based on the validation method selected. then click on 'Continue'\nThe certificate status should change from 'Pending validation' to 'Issued'. Now access your application's web server configuration and replace the wildcard certificate with the newly issued single domain name certificate.\n\nTo delete wildcard certificate:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Under 'Actions' drop-down click on 'Delete'\n6. On 'Delete certificate' popup windows, Click on 'Delete' button." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals ""RUNNING"" and nodePools[?any(config.bootDiskKmsKey does not exist)] exists```","GCP GKE cluster node boot disk not encrypted with CMEK This policy identifies GCP GKE clusters that do not have their node boot disk encrypted with CMEK. 
The GKE node boot disk is the persistent disk that houses the Kubernetes node file system. By default, this disk is encrypted by a GCP managed key but users can specify customer managed encryption key to get enhanced security, control over the encryption key, and also comply with any regulatory requirements. It is recommended to use CMEK to encrypt the boot disk of GKE cluster nodes as it gives you full control over the encrypted data. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: The KMS key used for node boot disk encryption for existing GKE clusters/cluster nodes cannot be changed. \n\nFor standard clusters:\nEither create a new standard cluster with node boot disk encryption using CMEK or add new node pools with disk encryption using CMEK to an existing standard cluster while removing older node pools which do not have node boot disk CMEK configured. To encrypt GKE standard cluster node boot disks using CMEK, please refer to the URLs given below:\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek#boot-disks\n\nFor autopilot clusters:\nAutopilot cluster node boot disk encryption cannot be updated for existing autopilot clusters. To create a new autopilot cluster with CMEK protected node boot disk, please refer to the URLs given below:\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek#create_a_cluster_with_a_cmek-protected_node_boot_disk." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = 'isLegacy is true and (properties.categories[*] does not contain Write or properties.categories[*] does not contain Delete or properties.categories[*] does not contain Action)'```,"Azure Monitor log profile does not capture all activities This policy identifies the Monitor log profiles which are not configured to capture all activities. A log profile controls how the activity log is exported. Configuring the log profile to collect logs for the categories 'Write', 'Delete' and 'Action' ensures that all the control/management plane activities performed on the subscription are exported. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: On the Azure Portal, there is no provision to check or set categories. However, when a log profile is created using the Azure Portal, Write, Delete and Action categories are set by default.\n\nLog profile activities can be set only through CLI using REST API and CLI is:\n1. To list the Log profile run,\naz monitor log-profiles list\n\n2. Note the name of reported log profile and replace it with in below command:\naz account get-access-token --query ""{subscription:subscription,accessToken:accessToken}"" --out tsv | xargs -L1 bash -c 'curl -X GET -H ""Authorization: Bearer $1"" -H ""Content-Type: application/json"" https://management.azure.com/subscriptions/$0/providers/microsoft.insights/logprofiles/?api-version=2016-03-01' | jq\nCopy the JSON output and save it as 'input.json' file.\n\n3. Edit the saved 'input.json' file to add all activities 'Write', 'Delete' and 'Action' in categories JSON array section.\n\n4. 
Run below command taking 'input.json' as input file,\naz account get-access-token --query ""{subscription:subscription,accessToken:accessToken}"" --out tsv | xargs -L1 bash -c 'curl -X PUT -H ""Authorization: Bearer $1"" -H ""Content-Type: application/json"" https://management.azure.com/subscriptions/$0/providers/microsoft.insights/logprofiles/?api-version=2016-03-01 -d@""input.json""'\n\nNOTE: To run all above CLIs you have to be configured with Azure subscription and accessToken locally. And these CLI commands require 'microsoft.insights/logprofiles/*' permission.." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and Action contains sts:AssumeRole and Resource anyStartWith * and Condition does not exist)] exists and policyArn does not contain iam::aws```,"AWS IAM policy allows assume role permission across all services This policy identifies AWS IAM policy which allows assume role permission across all services. Typically, AssumeRole is used if you have multiple accounts and need to access resources from each account then you can create long term credentials in one account and then use temporary security credentials to access all the other accounts by assuming roles in those accounts. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'IAM' service.\n3. Identify the reported policy\n4. Change the Service element of the policy document to be more restrictive so that it only allows AssumeRole permission on select services.." "```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-rds-describe-db-snapshots' AND json.rule = ""attributes[?(@.attributeName=='restore')].attributeValues[*] size != 0 and _AWSCloudAccount.isRedLockMonitored(attributes[?(@.attributeName=='restore')].attributeValues) is false""```","AWS RDS Snapshot with access for unmonitored cloud accounts This policy identifies RDS snapshots with access for unmonitored cloud accounts. The RDS Snapshot which have either the read / write permission opened up for Cloud Accounts which are NOT part of Cloud Accounts monitored by Prisma Cloud. These accounts with read / write privileges should be reviewed and confirmed that these are valid accounts of your organisation (or authorised by your organisation) and are not active under Prisma Cloud monitoring. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the RDS service.\n4. Select the identified 'RDS Snapshot' under the 'Snapshots' in the left hand menu.\n5. Under the tab 'Snapshot Actions', selection the option 'Share Snapshot'.\n6. Review and delete the AWS Accounts which should not have read access.." 
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = 'osType does not exist and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of (""EncryptionAtRestWithCustomerKey"", ""EncryptionAtRestWithPlatformAndCustomerKeys"")'```","Azure VM data disk is encrypted with the default encryption key instead of ADE/CMK This policy identifies the data disks which are encrypted with the default encryption key instead of ADE/CMK. Azure encrypts data disks by default Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK]. It is recommended to use either SSE with Azure Disk Encryption [SSE with PMK+ADE] or Customer Managed Key [SSE with CMK] which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance need. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable SSE with Azure Disk Encryption [SSE with PMK+ADE],\nFollow https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-prerequisites based VM the data disk is assigned.\n\nTo enable SSE with Customer Managed Key [SSE with CMK],\nFollow https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal." "```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equal ignore case ""Microsoft.Keyvault"" as X; config from cloud.resource where api.name = 'azure-key-vault-list' and json.rule = properties.enableRbacAuthorization is false and properties.accessPolicies[*].permissions exists and (properties.accessPolicies[*].permissions.keys[*] intersects ('Decrypt', 'Encrypt', 'Release', 'Purge', 'all') or properties.accessPolicies[*].permissions.secrets[*] intersects ('Purge', 'all') or properties.accessPolicies[*].permissions.certificates[*] intersects ('Purge', 'all')) as Y; filter '$.Y.properties.vaultUri contains $.X.properties.encryption.keyvaultproperties.keyvaulturi'; show X;```","Azure Storage account encryption key configured by access policy with privileged operations This policy identifies Azure Storage accounts which are encrypted by an encryption key configured access policy with privileged operations. Encryption keys should be kept confidential and only accessible to authorized entity with limited operation access. Allowing privileged access to an encryption key also allows to alter/delete the data that is encrypted by it, making the data more easily accessible. It is recommended to have restricted access policies to an encryption key so that only authorized entities can access it with limited operation access. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to your Storage account and display the Encryption settings\n3. Keep note of the Key vault and Key used\n4. Navigate to the Key Vault resource noted\n5. Select Access policies, select the key noted\n6. Click on Edit and make sure only required permissions are checked instead of Select all and only required operations are selected instead of privileged operations as per business requirement.." 
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(management.autoUpgrade does not exist or management.autoUpgrade is false)] exists```,"GCP Kubernetes cluster node auto-upgrade configuration disabled This policy identifies GCP Kubernetes cluster nodes with auto-upgrade configuration disabled. Node auto-upgrades help you keep the nodes in your cluster up to date with the cluster master version when your master is updated on your behalf. When you create a new cluster using Google Cloud Platform Console, node auto-upgrade is enabled by default. FMI: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Google cloud console\n2. Navigate to Google Kubernetes Engine, click on 'Clusters' to get the list\n3. Click on the alerted cluster and go to section 'Node pools'\n4. Click on a node pool to ensure 'Auto-upgrade' is enabled in the 'Management' section\n5. To modify click on the 'Edit' button at the top\n6. To enable the configuration click on the check box against 'Enable auto-upgrade'\n7. Click on 'Save'\n8. Repeat Step 4-7 for each node pool associated with the reported cluster." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-bigquery-dataset-list' AND json.rule = iamPolicy.bindings[?any(members[*] equals ""allUsers"" or members[*] equals ""allAuthenticatedUsers"")] exists```","GCP BigQuery dataset is publicly accessible This policy identifies BigQuery datasets that are anonymously or publicly accessible. Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. Such access might not be desirable if sensitive data is being stored in the dataset. So it is recommended to not allow anonymous and/or public access to BigQuery datasets. This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to service 'BigQuery'(Left Panel)\n3. Under the 'Explorer' section, search for the reported BigQuery dataset and select 'Open' from the kebab menu\n4. Click on dropdown 'SHARING' and select 'Permissions'\n5. In 'Filter', search for 'allUsers' or 'allAuthenticatedUsers', review each attached role and click the delete icon\n6. On the popup 'Remove role from principal?', select the checkbox and click on 'REMOVE'." 
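The BigQuery public-access condition above can be reproduced with the google-cloud-bigquery client. A minimal sketch, assuming application default credentials for the project being audited; it only reports findings and does not modify any access entries.

```python
# Illustrative check: list BigQuery datasets whose access entries grant roles to
# allUsers or allAuthenticatedUsers.
from google.cloud import bigquery

client = bigquery.Client()  # project taken from application default credentials

for item in client.list_datasets():
    dataset = client.get_dataset(item.reference)
    public = [
        entry for entry in dataset.access_entries
        if entry.entity_id in ("allUsers", "allAuthenticatedUsers")
    ]
    if public:
        roles = sorted({entry.role for entry in public if entry.role})
        print(f"{dataset.dataset_id} is publicly accessible with roles: {roles}")
```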
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains ""ConsoleLogin"" and ($.X.filterPattern contains ""MFAUsed !="" or $.X.filterPattern contains ""MFAUsed!="") and $.X.filterPattern contains ""Yes"" and ($.X.filterPattern contains ""userIdentity.type ="" or $.X.filterPattern contains ""userIdentity.type="") and $.X.filterPattern contains ""IAMUser"" and ($.X.filterPattern contains ""responseElements.ConsoleLogin ="" or $.X.filterPattern contains ""responseElements.ConsoleLogin="") and $.X.filterPattern contains ""Success"") and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for management console sign-in without MFA This policy identifies the AWS regions that do not have a log metric filter and alarm for management console sign-in without MFA. A log metric filter in AWS CloudWatch scans log data for specific patterns and generates metrics based on those patterns. Unauthorized access attempts may go undetected without a log metric filter and alarm for console sign-ins without MFA. This increases the risk of account compromise and potential data breaches due to inadequate security monitoring. It is recommended that a metric filter and alarm be established for management console sign-in without MFA to increase visibility into accounts that are not protected by MFA. NOTE: This policy will trigger an alert if you have at least one Cloudtrail with the multi-trail is enabled, Logs all management events in your account, and is not set with a specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. In the 'Define Pattern' page, add the 'Filter pattern' value as\n\n{ ($.eventName = ""ConsoleLogin"") && ($.additionalEventData.MFAUsed != ""Yes"") && ($.userIdentity.type = ""IAMUser"") && ($.responseElements.ConsoleLogin = ""Success"") }\n\nand Click on 'NEXT'.\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html." 
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals ""ACTIVE"" AND gceSetup.metadata.notebook-disable-root is false```","GCP Vertex AI Workbench Instance has root access enabled This policy identifies GCP Vertex AI Workbench Instances that have root access enabled. Enabling root access on a GCP Vertex AI Workbench instance increases the risk of unauthorized system changes, privilege escalation, and data exposure. It can also make the instance more vulnerable to attacks if not properly secured. Limiting root access and applying strict access controls are essential to reduce these risks. It is recommended to disable root access for GCP Vertex AI Workbench Instances. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Go to the 'SOFTWARE AND SECURITY' tab\n7. Under 'Modify software and security configuration', disable (uncheck) 'Root access to the instance' checkbox\n8. At the bottom of the page, click 'SUBMIT'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule = publiclyAccessible is true```,"AWS Redshift cluster instance with public access setting enabled This policy identifies AWS Redshift clusters with public access setting enabled. AWS Redshift, a managed data warehousing service typically stores sensitive and critical data. Allowing public access increases the risk of unauthorized access, data breaches, and potential malicious activities. As a recommended security measure, it is advised to disable public access for the Redshift cluster. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify the publicly accessible setting of the Redshift cluster,\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'Redshift' service.\n4. Click on the checkbox for the identified Redshift cluster name.\n5. In the top menu options, click on 'Actions' and select 'Modify publicly accessible setting' as the option.\n6. Uncheck the checkbox 'Turn on Publicly accessible' in the 'Publicly accessible' section and click on 'Save Changes' button.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = ""configurations.value[?(@.name=='log_duration')].properties.value equals OFF or configurations.value[?(@.name=='log_duration')].properties.value equals off""```","Azure PostgreSQL database server with log duration parameter disabled This policy identifies PostgreSQL database servers for which server parameter is not set for log duration. Enabling log_duration helps the PostgreSQL Database to Logs the duration of each completed SQL statement which in turn generates query and error logs. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance. 
This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. From the list of parameters find 'log_duration' and set it to on\n6. Click on 'Save' button from top menu to save the change.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].osDisk.vhd.uri exists as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equals ""Microsoft.Storage"" as Y; filter ""$.['X'].['properties.storageProfile'].['osDisk'].['vhd'].['uri'] contains $.Y.name""; show Y;```","Azure Storage account containing VHD OS disk is not encrypted with CMK This policy identifies Azure Storage account containing VHD OS disk which are not encrypted with CMK. VHD's attached to Virtual Machines are stored in Azure storage. By default Azure Storage account is encrypted using Microsoft Managed Keys. It is recommended to use Customer Managed Keys to encrypt data in Azure Storage accounts for better control on the data. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and Click on the reported storage account\n3. Under the Settings menu, click on Encryption\n4. Select Customer Managed Keys\n- Choose 'Enter key URI' and Enter 'Key URI'\nOR\n- Choose 'Select from Key Vault', Enter 'Key Vault' and 'Encryption Key'\n5. Click on 'Save'." ```config from cloud.resource where api.name = 'gcloud-logging-sinks-list' AND json.rule = 'destination.bucket exists' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' AND json.rule = (retentionPolicy does not exist ) as Y; filter '($.X.destination.bucket contains $.Y.name)'; show Y;```,"GCP Log bucket retention policy not enabled This policy identifies GCP log buckets for which retention policy is not enabled. Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted. It is recommended to configure a data retention policy for these cloud storage buckets to store the activity logs for forensics and security investigations. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to google cloud console \n2. Navigate to section 'Browser', Under 'Storage' \n3. Select the alerted log bucket\n4. In tab ''RETENTION', click on '+SET RETENTION POLICY' to set a retention policy\n5. Set a value for 'Retention period' in pop-up 'Set a retention policy'\n6. Click on 'SAVE'.." 
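The log-bucket retention remediation above can also be done with the google-cloud-storage client. A minimal sketch under stated assumptions: the bucket name is a placeholder and the 90-day period is an example, not a value mandated by the policy.

```python
# Illustrative remediation sketch: set a retention policy on the Cloud Storage bucket
# backing a logging sink.
from google.cloud import storage

def set_log_bucket_retention(bucket_name: str, days: int = 90) -> None:
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    bucket.retention_period = days * 24 * 60 * 60  # retention is expressed in seconds
    bucket.patch()

set_log_bucket_retention("my-activity-log-bucket")  # assumed bucket name
```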
"```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","Low of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to Get/Read/List bucket operations on your S3 bucket if they can guess the namespace. S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." "```config from cloud.resource where api.name = 'gcloud-access-approval-project-approval-setting' AND json.rule = enrolledServices[*].cloudProduct does not equal ""all""```","GCP Cloud ' Access Approval' is not enabled This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
"```config from cloud.resource where api.name = 'aws-glue-job' AND json.rule = Command.BucketName exists and Command.BucketName contains ""aws-glue-assets-"" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains ""aws-glue-assets-"" as Y; filter 'not ($.X.Command.BucketName equals $.Y.bucketName)' ; show X;```","aws glue shadow sdcsc This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (destinationPortRange contains _Port.inRange(3389,3389) or destinationPortRanges[*] contains _Port.inRange(3389,3389) ))] exists```","Azure Network Security Group allows all traffic on RDP Port 3389 This policy identifies any NSG rule that allow all traffic on RDP port 3389. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-object-storage-bucket' AND json.rule = activity_tracking does not exist or activity_tracking.write_data_events does not equal ignore case ""true"" or activity_tracking.read_data_events does not equal ignore case ""true""```","IBM Cloud Object Storage bucket is not enabled with IBM Activity Tracker This policy identifies IBM Cloud Object Storage buckets which have Activity Tracker disabled or not enabled properly. The IBM Cloud Activity Tracker service records user-initiated activities that change the state of a service in IBM Cloud. You can use this service to investigate abnormal activity and critical actions, and to comply with regulatory audit requirements. In addition, you can be alerted about actions as they happen. So, it is recommended to enable Activity tracker to log all read/write data and management events on a bucket. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: To configure Activity Tracker on a Cloud Object Storage bucket, please follow the below URL.\nPlease make sure to select 'Track data events' checkbox and select 'read & write' option \nfrom the Activity Tracker dropdown:\nhttps://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-at#at-console-enable." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and vpcConnector does not exist```,"GCP Cloud Function not enabled with VPC connector This policy identifies GCP Cloud Functions that are not configured with a VPC connector. VPC connector helps function to connect to a resource inside a VPC in the same project. Setting up the VPC connector allows you to set up a secure perimeter to guard against data exfiltration and prevent functions from accidentally sending any data to unwanted destinations. It is recommended to configure the GCP Cloud Function with a VPC connector. Note: For the Cloud Functions function to access the public traffic with Serverless VPC connector, you have to introduce Cloud NAT. Link: https://cloud.google.com/functions/docs/networking/network-settings#route-egress-to-vpc This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service (Left Panel)\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings’ drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. Under Section 'Egress settings', select a VPC connector from the dropdown\n8. In case VPC connector is not available, select 'Custom' and\n9. Click on 'Create a Serverless VPC Connector', follow the link to create a Serverless VPC connector: https://cloud.google.com/vpc/docs/configure-serverless-vpc-access\n10. Once the Serverless VPC connector is available, select it from the dropdown\n11. Click on 'NEXT'\n12. Click on 'DEPLOY'." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and tokens[?any( properties.status contains enabled )] exists```,"Azure Container Registry with repository scoped access token enabled This policy identifies Azure Container Registries having repository scoped access tokens enabled. Disable repository-scoped access tokens for your registry to prevent access via tokens. Enhancing security involves disabling local authentication methods, including admin user, repository-scoped access tokens, and anonymous pull. This ensures that container registries rely solely on Microsoft Entra ID identities for authentication. As a security best practice, it is recommended to disable repository scoped access token for Azure Container Registries. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to your Azure portal \n2. Navigate to 'Container registries' \n3. Select the reported Container Registry \n4. Under 'Repository permissions' select 'Tokens'\n5. Click on the active token and make it inactive by unchecking the 'Active status'\n6. Click on 'Save'\n7. Repeat step 5 & 6 for all the active tokens." 
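The container-registry token review above can be approximated with the Azure management SDK. This is a rough, illustrative sketch only: verify the operation and attribute names against your azure-mgmt-containerregistry version; the subscription ID, resource group, and registry name are placeholders.

```python
# Illustrative check: report repository-scoped tokens that are still active on a registry.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient

client = ContainerRegistryManagementClient(DefaultAzureCredential(), "<subscription-id>")

for token in client.tokens.list("<resource-group>", "<registry-name>"):
    if str(token.status).lower() == "enabled":
        print(f"Active repository-scoped token found: {token.name}")
```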
"```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-nsg' AND json.rule = (securityRules[?any((((*.destinationPortRange.min == 3389 or *.destinationPortRange.max == 3389) or (*.destinationPortRange.min < 3389 and *.destinationPortRange.max > 3389)) or (protocol equals ""all"") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))) and (source equals 0.0.0.0/0 and direction equals INGRESS))] exists)```","OCI Network Security Group allows all traffic on RDP port (3389) This policy identifies OCI Security groups that allow unrestricted ingress access to port 3389. It is recommended that no security group allows unrestricted ingress access to port 3389. As a best practice, remove unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), to reduce server's exposure to risk. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Security Rules\n5. If you want to add a rule, click Add Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit.." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any(tags does not exist and attributes[?any( value equal ignore case ""service"" and name equal ignore case ""serviceType"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name equal ignore case ""region"")] does not exist )] exists and subjects[?any( attributes[?any( name contains ""access_group_id"")] exists )] exists as X; config from cloud.resource where api.name = 'ibm-iam-access-group-member' as Y; config from cloud.resource where api.name = 'ibm-iam-access-group' as Z; filter '$.X.subjects[*].attributes[*].value contains $.Y.access_group.id and $.Y.access_group.id equal ignore case $.Z.id'; show Z;```","IBM Cloud Access group with members having administrator role permission for All Identity and Access enabled services This policy identifies IBM Cloud Access groups, which have administrator role permission across all Identity and Access enabled services policy with users, service IDs, or trusted profiles. This would allow all members of this group to have administrative privileges. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Access groups' in the left panel.\n3. Select the access group which is reported in the alert.\n4. 
Review and remove Users/Service IDs/Trusted profiles from the access group.\nRefer below link for removing the Member from the access group:\nhttps://cloud.ibm.com/docs/account?topic=account-assign-access-resources&interface=ui#removing-access-console\nOR\nTo remove the overly permissible policy from the access group:\n1. Go to 'Access' tab and click on three dots on the right corner of a row for the policy which is having administrator permission on 'All Identity and Access enabled services'.\n2. Click on Remove OR Edit to assign limited permission to the policy.\n3. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-authorization-policy' AND json.rule = not (allowInvitesFrom equal ignore case adminsAndGuestInviters OR allowInvitesFrom equal ignore case none)```,"Azure Guest User Invite not restricted to users with specific admin role This policy identifies instances in the Microsoft Entra ID configuration where guest user invitations are not restricted to specific administrative roles. Allowing anyone in the organization, including guests and non-admins, to invite guest users can lead to unauthorized access and potential data breaches. This unrestricted access poses a significant security risk. As a best practice, it is recommended to configure guest user invites to specific admin roles. This will ensure that only authorized personnel can invite guests, maintaining tighter control over access to cloud resources. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under 'Manage' select 'External Identities'\n4. Select 'External collaboration settings'\n5. Under 'Guest invite settings' section, select 'Only users assigned to specific admin roles can invite guest users'\n6. Select 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case ""PowerState/running"" and (['properties.osProfile'].['linuxConfiguration'] exists and ['properties.osProfile'].['linuxConfiguration'].['disablePasswordAuthentication'] is false)```","Azure Virtual Machine (Linux) does not authenticate using SSH keys This policy identifies Azure Virtual Machines that have basic authentication, not authenticating using SSH keys. Azure Virtual Machines with basic authentication could allow attackers to brute force and gain unauthorized access, which might lead to potential data leaks. It is recommended to use SSH keys for authentication to avoid brute force attacks on virtual machines. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure existing Azure Virtual machine with SSH key authentication, Follow below URL:\nhttps://learn.microsoft.com/en-us/azure/virtual-machines/extensions/vmaccess#update-ssh-key\n\nIf changes are not reflecting you may need to take backup, You may need to create new virtual machine with SSH key based authentication and delete the reported virtual machine.." 
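Note: for the Azure Virtual Machine (Linux) SSH key policy above, the VMAccess-based key update referenced in the remediation URL can also be driven from the Azure CLI. A hedged sketch, assuming a hypothetical resource group 'my-rg', VM 'my-vm', user 'azureuser', and an existing public key at ~/.ssh/id_rsa.pub; fully disabling password authentication may still require recreating the VM, as noted in the remediation text:
```
# Push an SSH public key to the Linux VM via the VMAccess extension
az vm user update \
  --resource-group my-rg \
  --name my-vm \
  --username azureuser \
  --ssh-key-value "$(cat ~/.ssh/id_rsa.pub)"
```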
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_add_remove_child_policy_hyperion_policy_ss_finding_2 Description-e736aef6-4ad4-4324-9b5b-75dd70620202 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'alibaba-cloud-action-trail' as X; config from cloud.resource where api.name = 'alibaba-cloud-oss-bucket-info' as Y; filter '$.X.isLogging is true and $.X.ossBucketName equals $.Y.bucket.name and $.Y.cannedACL does not contain Private'; show Y;```,"Alibaba Cloud ActionTrail log OSS bucket is publicly accessible This policy identifies Object Storage Service (OSS) buckets which are publicly accessible and store ActionTrail log data. These buckets contain sensitive audit data and only authorized users and applications should have access. As a best practice, make OSS buckets that store ActionTrail log data private and ensure only authorized users have access. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Object Storage Service\n3. In the left-side navigation pane, click on the reported bucket\n4. In the 'Basic Settings' tab, In the 'Access Control List (ACL)' Section, Click on 'Configure'\n5. For 'Bucket ACL' field, Choose 'Private' option\n6. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = loggingConfiguration.targetBucket does not exist as Y; filter '$.X.s3BucketName equals $.Y.bucketName'; show Y;```,"AWS S3 CloudTrail bucket for which access logging is disabled This policy identifies S3 CloudTrail buckets for which access logging is disabled. S3 Bucket access logging generates access records for each request made to your S3 bucket. An access log record contains information such as the request type, the resources specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select 'Enable' option and provide an S3 bucket of your choice in the 'Target bucket'\n5. Click on 'Save Changes'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' and json.rule = 'osType exists and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled == false) and encryption.type is not member of (""EncryptionAtRestWithCustomerKey"", ""EncryptionAtRestWithPlatformAndCustomerKeys"")'```","Azure VM OS disk is encrypted with the default encryption key instead of ADE/CMK This policy identifies the OS disks which are encrypted with the default encryption key instead of ADE/CMK. 
Azure encrypts OS disks by default with Server-Side Encryption (SSE) using platform-managed keys [SSE with PMK]. It is recommended to use either SSE with Azure Disk Encryption [SSE with PMK+ADE] or a Customer Managed Key [SSE with CMK], which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable SSE with Azure Disk Encryption [SSE with PMK+ADE],\nFollow https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-prerequisites based on the VM the disk is assigned to.\n\nTo enable SSE with Customer Managed Key [SSE with CMK],\nFollow https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-compute' AND json.rule = properties.provisioningState equal ignore case ""Succeeded"" and properties.properties.state equal ignore case ""Running"" and properties.properties.osImageMetadata.isLatestOsImageVersion is false```","Azure Machine Learning compute instance not running latest OS Image Version This policy identifies Azure Machine Learning compute instances not running on the latest available image version. Running compute instances on outdated image versions increases security risks. Without the latest security patches and updates, these instances are more vulnerable to attacks, which can compromise machine learning models and data. As a best practice, it is recommended to recreate or update Azure Machine Learning compute instances to the latest image version, ensuring they have the most recent security patches and updates. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To ensure your Azure Machine Learning compute instances are running the latest available image version, follow these remediation steps:\n\n1. Recreate the Compute Instance. This will ensure it is provisioned with the latest VM image, including all recent updates and security patches.\n- Steps:\n 1. Backup Important Data:\n - Store notebooks in the `User files` directory to persist them.\n - Mount data to persist files.\n 2. Re-create the Instance:\n - Delete the existing compute instance.\n - Provision a new compute instance with the latest OS image version.\n 3. Restore Data:\n - Restore notebooks and mounted data to the newly created instance.\n\nNote: This will result in the loss of data and customizations stored on the instance's OS and temporary disks.." 
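Note: for the Azure Machine Learning compute instance policy above, the delete/re-create step can be scripted once notebooks and mounted data are backed up. A hedged sketch using the Azure CLI 'ml' extension (assumed to be installed), with hypothetical names 'my-rg', 'my-ws', 'my-ci', and an example VM size:
```
# Remove the outdated compute instance (only after backing up notebooks and data)
az ml compute delete --name my-ci --resource-group my-rg --workspace-name my-ws --yes

# Re-create it; the new instance is provisioned from the latest available OS image
az ml compute create --name my-ci --type ComputeInstance --size Standard_DS3_v2 \
  --resource-group my-rg --workspace-name my-ws
```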
"```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="") and ($.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="") and $.X.filter contains ""gcs_bucket"" and ($.X.filter contains ""protoPayload.methodName="" or $.X.filter contains ""protoPayload.methodName ="") and ($.X.filter does not contain ""protoPayload.methodName!="" and $.X.filter does not contain ""protoPayload.methodName !="") and $.X.filter contains ""storage.setIamPermissions""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for Cloud Storage IAM permission changes This policy identifies the GCP account which does not have a log metric filter and alert for Cloud Storage IAM permission changes. Monitoring Cloud Storage IAM permission activities will help in reducing time to detect and correct permissions on sensitive Cloud Storage bucket and objects inside the bucket. It is recommended to create a metric filter and alarm to detect activities related to the Cloud Storage IAM permission. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""gcs_bucket"" AND protoPayload.methodName=""storage.setIamPermissions""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway-waf-policy' AND json.rule = properties.applicationGateways[*].id size greater than 0 and properties.policySettings.state equal ignore case Enabled and properties.policySettings.mode does not equal ignore case Prevention```,"Azure Application Gateway WAF policy is not enabled in prevention mode This policy identifies the Azure Application Gateway WAF policies that are not enabled in prevention mode. Azure Application Gateway WAF policies support Prevention and Detection modes. Detection mode monitors and logs all threat alerts to a log file. Detection mode is useful for testing purposes and configures WAF initially but it does not provide protection. It logs the traffic, but it doesn't take any actions such as allow or deny. Where as, in Prevention mode, WAF analyzes incoming traffic to the application gateway and blocks any requests that are determined to be malicious based on a set of rules. 
As a best security practice, it is recommended to enable Application Gateway WAF policies with Prevention mode to prevent malicious requests from reaching your application and potentially causing damage. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to 'Web Application Firewall policies (WAF)' dashboard\n3. Click on the reported WAF policy\n4. In 'Overview' section, Click on 'Switch to prevention mode'.\n\nNOTE: Define managed rules or custom rules properly as per your business requirements prior to transitioning to Prevention mode. This helps avoid unexpectedly blocked traffic.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ""$.serverSecurityAlertPolicy.properties.retentionDays does not exist or $.serverSecurityAlertPolicy.properties.state equals Disabled""```","Azure SQL server Defender setting is set to Off This policy identifies Azure SQL servers which have the Defender setting set to Off. Azure Defender for SQL provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, SQL injection attacks, as well as anomalous database access patterns. Advanced threat protection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to the reported SQL server\n3. Select 'SQL servers', Click on the SQL server instance you want to modify\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on 'Enable Microsoft Defender for SQL'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' as X; count(X) greater than 0```,"Copy of GCP API key is created for a project1 This policy identifies GCP projects where API keys are created. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. Note: There are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API. If a business requires API keys to be used, then the API keys should be secured using appropriate IAM policies. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Use of API keys is generally considered a less secure authentication mechanism and should be avoided. A secure authentication mechanism should be used. Follow the below mentioned URL to evaluate an alternate, suitable authentication mechanism:\nhttps://cloud.google.com/endpoints/docs/openapi/authentication-method\n\nTo delete an API Key:\n1. Log in to google cloud console\n2. 
Navigate to section 'Credentials', under 'APIs & Services'.\n3. To delete API Key, go to 'API Keys' section, click the Actions button (three dots) in front of key name.\n4. Click on ‘Delete API key’ button.\n5. In the 'Delete credential' dialog, click 'DELETE' button.\n\nNote: Deleting API keys might break dependent applications. It is recommended to thoroughly review and evaluate the impact of API key before deletion.." "```config from cloud.resource where api.name = 'aws-emr-studio' AND json.rule = DefaultS3Location exists and DefaultS3Location contains ""aws-emr-studio-"" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains ""aws-emr-studio-"" as Y; filter 'not ($.X.BucketName equals $.Y.bucketName)' ; show X;```","AWS EMR shadow resource sdvdsv This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals * and access equals Allow and destinationPortRange contains * and direction equals Inbound)] exists```,"Azure Network Security Group having Inbound rule overly permissive to all traffic on any protocol This policy identifies Azure Network Security Groups (NSGs) which are overly permissive to all traffic on any protocol. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic from known sources, allowing only authorized protocols and ports. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-subnet' and json.rule = public_gateway exists```,"IBM Cloud Virtual Private Cloud (VPC) Subnet has public gateways attached This policy identifies IBM Virtual Private Cloud Subnet where public gateway attached. A Public Gateway enables resources to connect to the internet. After public gateway is attached, all resources in that subnet can connect to the internet. If the use case for public gateway is not external connectivity, it is recommended not to attach any public gateways. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. 
Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Public Gateways'\n3. Select the 'Public Gateway' reported in the alert\n4. From the drop down select Detach\n5. Safely detach the public gateway and then delete the public gateway." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'authTokenEnabled is false or transitEncryptionEnabled is false or authTokenEnabled does not exist or transitEncryptionEnabled does not exist'```,"AWS ElastiCache Redis cluster with Redis AUTH feature disabled This policy identifies ElastiCache Redis clusters which have Redis AUTH feature disabled. Redis AUTH can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password protected Redis server. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: AWS ElastiCache Redis cluster Redis AUTH password can be set, only at the time of creation of the cluster. So to resolve this alert, create a new cluster with Redis AUTH feature enabled, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete the reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with Redis AUTH password set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption in-transit' checkbox to enable encryption\nNote: Redis AUTH can only be enabled when creating clusters where in-transit encryption is enabled.\nf. Select 'Redis AUTH' checkbox to enable to enable AuthToken password\ng. Type password you want enforce on 'Redis AUTH Token' textbox.\nChoosen password should meet 'Passwords must be at least 16 and a maximum of 128 printable characters. At least 16 characters, and maximum 128 characters, restricted to any printable ASCII character except ' ', '""', '/' and '@' signs' criteria. Set the new Redis cluster other parameters which are same as of reported Redis cluster configuration details.\nNote: The password set at cluster creation cannot be changed.\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. 
In the 'Delete Cluster' dialog box, if you want back for you cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case ""/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace""```","test-3 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-roles' as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' as Y; filter ""($.X.inlinePolicies[*].policyDocument.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action any equal *) or ($.X.attachedPolicies[*].policyArn contains $.Y.policyArn and $.Y.document.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action any equal *)""; show X;```","AWS IAM Roles with Administrator Access Permissions This policy identifies AWS IAM roles which has administrator access permission set. This would allow all users who assume this role to have administrative privileges. As a security best practice, it is recommended to grant least privilege access like granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Navigate to IAM service\n3. Click on Roles\n4. Click on reported IAM role\n5. Under 'Permissions policies' click on 'X' to detach or remove the policy having excessive permissions and assign a limited permission policy as required for a particular role.." "```config from cloud.resource where finding.type IN ( 'Host Vulnerability', 'Serverless Vulnerability' , 'Compliance' , 'AWS Inspector Runtime Behavior Analysis' , 'AWS Inspector Security Best Practices' , 'AWS GuardDuty Host' , 'AWS GuardDuty IAM' ) ```","Hostfindings test This is applicable to all cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-mq-broker' AND json.rule = brokerState equal ignore case RUNNING as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equal ignore case Enabled and keyMetadata.keyManager does not equal ignore case CUSTOMER as Y; filter '$.X.encryptionOptions.kmsKeyId equals $.Y.keyMetadata.arn or $.X.encryptionOptions.useAwsOwnedKey is true'; show X;```,"AWS MQ Broker is not encrypted by Customer Managed Key (CMK) This policy identifies AWS MQ Brokers that are not encrypted by Customer Managed Key (CMK). AWS MQ Broker messages might contain sensitive information. AWS MQ Broker messages are encrypted by default by an AWS managed key but users can specify CMK to get enhanced security, control over the encryption key, and also comply with any regulatory requirements. 
As a security best practice use of CMK to encrypt your MQ Broker is advisable as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: AWS MQ Broker encryption option can be done only at the creation of MQ broker. You cannot change the encryption options once it has been created. To resolve this alert create a new MQ broker configuring encryption with CMK key, migrate all data to newly created MQ broker and then delete the reported MQ broker.\n\nTo create a new AWS MQ broker encryption with CMK key,\n1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the AWS MQ broker Dashboard\n4. Click on 'Create brokers'\n5. Select the broker engine type, deployment mode as per your business requirement\n6. Under 'Configure settings', In Additional settings section choose Encryption option choose 'Customer managed CMKs are created and managed by you in AWS Key Management Service (KMS).' based on your business requirement.\n7. Review and Create the MQ broker.\n\nTo delete reported MQ broker, refer following URL:\nFor ActiveMQ Broker: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/getting-started-activemq.html#delete-broker\nFor RabbitMQ Broker: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/getting-started-rabbitmq.html#rabbitmq-delete-broker." ```config from cloud.resource where api.name = 'oci-object-storage-bucket' as X; config from cloud.resource where api.name = 'oci-logging-logs' as Y; filter 'not ($.X.name contains $.Y.configuration.source.resource and $.Y.configuration.source.service contains objectstorage and $.Y.configuration.source.category contains write and $.Y.lifecycleState equal ignore case ACTIVE )'; show X;```,"OCI Object Storage Bucket write level logging is disabled This policy identifies Object Storage buckets that have write-level logging disabled. Enabling write-level logging for Object Storage provides more visibility into changes to objects in your buckets. Without write-level logging, there is no record of changes made to the bucket. This lack of visibility can lead to undetected data breaches, unauthorized changes, and compliance violations. As a best practice, it is recommended to enable write-level logging on Object Storage buckets. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: First, if a log group for holding these logs has not already been created, create a log group by the following steps:\n\n1. Login to the OCI Console\n2. Go to the Log Groups page\n3. Click the 'Create Log Group' button in the middle of the screen\n4. Select the relevant compartment to place these logs\n5. Type a name for the log group in the 'Name' box.\n6. Add an optional description in the 'Description' box\n7. Click the 'Create' button in the lower left hand corner\n\nSecond, enable Object Storage write log logging for reported bucket by the following steps:\n1. Login to the OCI Console\n2. Go to the Logs page\n3. Click the 'Enable Service Log' button in the middle of the screen\n4. Select the relevant resource compartment\n5. Select ‘Object Storage’ from the Service drop-down menu \n6. 
Select the reported bucket from the ‘Resource’ drop-down menu \n7. Select ‘Write Access Events’ from the ‘Log Category’ drop-down menu \n8. Type a name for your Object Storage write log in the ‘Log Name’ drop-down menu \n9. Click the ‘Enable Log’ button in the lower left hand corner." ```config from cloud.resource where api.name = 'azure-active-directory-user-registration-details' AND json.rule = isMfaRegistered is false as X; config from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = accountEnabled is true as Y; filter '$.X.userDisplayName equals $.Y.displayName'; show X;```,"Azure Active Directory MFA is not enabled for user This policy identifies Azure users for whom AD MFA (Active Directory Multi-Factor Authentication) is not enabled. Azure AD MFA is a simple best practice that adds an extra layer of protection on top of your user name and password. MFA provides increased security for your Azure account settings and resources. Enabling Azure AD Multi-Factor Authentication using Conditional Access policies is the recommended approach to protect users. As a best practice, it is recommended to enable Azure AD Multi-Factor Authentication for users. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: To enable per-user Azure AD Multi-Factor Authentication, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and properties.clientCertEnabled equals false'```,"Azure App Service Web app client certificate is disabled This policy identifies Azure web apps which are not configured with a client certificate. Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Incoming client certificates', Set 'Client certificate mode' to 'Require'\n6. Click on 'Save'\n\nNote: App Services with the Free SKU plan are ideal for testing applications in a managed Azure environment. The client certificates option is not supported for the Free SKU plan. We recommend upgrading such reported App Services to a plan other than the Free SKU, as per your requirement.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = properties.status equals ""Active"" and (properties.disableLocalAuth does not exist or properties.disableLocalAuth is false)```","Bobby Copy of Azure Service bus namespace not configured with Azure Active Directory (Azure AD) authentication This policy identifies Service bus namespaces that are not configured with Azure Active Directory (Azure AD) authentication and are enabled with local authentication. Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. 
It is recommended to configure the Service bus namespaces with Azure AD authentication so that all actions are strongly authenticated. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To configured Azure Active Directory (Azure AD) authentication and disable local authentication on existing Service bus, follow below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/disable-local-authentication." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = encryption.defaultKmsKeyName does not exist```,"GCP Storage Bucket encryption not configured with Customer-Managed Encryption Key (CMEK) This policy identifies GCP Storage Buckets that are not configured with a Customer-Managed Encryption key. GCP Storage Buckets might contain sensitive information. Google Cloud Storage service encrypts all data within the buckets using Google-managed encryption keys by default but users can specify Customer-Managed Keys (CMKs) to get enhanced security, control over the encryption key, and also comply with any regulatory requirements. As a security best practice, the use of CMK to encrypt your Storage bucket is advisable as it gives you full control over the encrypted data. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the GCP storage bucket with customer-managed encryption, follow the below steps:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the Cloud Storage Buckets page.\n2. Click on the name of the bucket where you want to enable customer-managed encryption.\n3. Under the 'Configuration' tab, under the 'Protection' section, select the 'Edit encryption type' option.\n4. A 'Edit encryption' dialogue box will appear. Select the 'Customer-managed encryption key' option.\n5. Under the 'Select a customer-managed key' dropdown, select the KMS key to be used for encryption.\n6. Click on 'SAVE'.\n\nNote: Make sure the storage bucket service account has cloudkms.cryptoKeyEncrypterDecrypter permissions to encrypt or decrypt with the selected key.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(properties.pricingTier does not equal Standard and (properties.deprecated does not exist or properties.deprecated is false))] exists```,"Azure Microsoft Defender for Cloud Defender plans is set to Off This policy identifies Azure Microsoft Defender for Cloud which has a Defender setting set to Off. Enabling Azure Defender provides advanced security capabilities like providing threat intelligence, anomaly detection, and behavior analytics in the Azure Microsoft Defender for Cloud. It is highly recommended to enable Azure Defender for all Azure services. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. 
On the line in the table for 'Select Defender plan by resource type' Select 'Enable all'.\n8. Select 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals VirtualMachines and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud is set to Off for Servers This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has defender setting for Servers is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Servers. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Servers' Select 'On' under Plan.\n8. Select 'Save'." "```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = (ingressSecurityRules[?any((source equals 0.0.0.0/0) and (((*.destinationPortRange.min == 3389 or *.destinationPortRange.max == 3389) or (*.destinationPortRange.min < 3389 and *.destinationPortRange.max > 3389)) or (protocol equals ""all"") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))))] exists)```","OCI security lists allows unrestricted ingress access to port 3389 This policy identifies OCI Security lists that allow unrestricted ingress access to port 3389. It is recommended that no security list allows unrestricted ingress access to port 3389. As a best practice, remove unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), to reduce server's exposure to risk. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Ingress Rules.\n5. If you want to add a rule, click Add Ingress Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-server-certificates' AND json.rule = '(_DateTime.ageInDays($.expiration) > -1)'```,"AWS IAM has expired SSL/TLS certificates This policy identifies expired SSL/TLS certificates. To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. 
Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. This policy generates alerts if there are any expired SSL/TLS certificates stored in AWS IAM. As a best practice, it is recommended to delete expired certificates. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Removing invalid certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\nRemediation CLI:\n1. Run describe-load-balancers command to make sure that the expired server certificate is not currently used by any active load balancer.\n aws elb describe-load-balancers --region --load-balancer-names --query 'LoadBalancerDescriptions[*].ListenerDescriptions[*].Listener.SSLCertificateId'\nThis command output will return the Amazon Resource Name (ARN) for the SSL certificate currently used by the selected ELB:\n [\n [\n ""arn:aws:iam::1234567890:server-certificate/MyCertificate""\n ]\n ]\n2. If the load balancer listener using the reported expired certificate is not removed before the certificate, the ELB may continue to use the same certificate and work improperly. To delete the ELB listener that is using the expired SSL certificate, run following command:\n aws elb delete-load-balancer-listeners --region --load-balancer-name --load-balancer-ports 443\n3. Now that is safe to remove the expired SSL/TLS certificate from AWS IAM, To delete it run:\n aws iam delete-server-certificate --server-certificate-name ." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = 'notebookInstanceStatus equals InService and directInternetAccess equals Enabled'```,"AWS SageMaker notebook instance configured with direct internet access feature This policy identifies SageMaker notebook instances that are configured with direct internet access feature. If AWS SageMaker notebook instances are configured with direct internet access feature, any machine outside the VPC can establish a connection to these instances, which provides an additional avenue for unauthorized access to data and the opportunity for malicious activity. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/appendix-notebook-and-internet-access.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: AWS SageMaker notebook instance direct internet access feature can not be disabled; once it is created. You need to create a new notebook instance with disabled direct internet access feature; and migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a New AWS SageMaker notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, within the 'Network' section, \nFrom 'VPC – optional' dropdown list, select the VPC where you want to deploy a new SageMaker notebook instance.\n5. 
Select the 'Disable - Access the internet through a VPC' button under the 'Direct internet access' to disable direct internet access for the new notebook instance.\n6. Choose other parameters as per your requirement and Click on the 'Create notebook instance' button\n\nTo delete reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and Choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu, select the 'Stop' option, and when instance stops, select the 'Delete' option.\n5. Within Delete dialog box, click the Delete button to confirm the action.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = 'properties.sslEnforcement does not equal Enabled'```,"Azure MySQL Database Server SSL connection is disabled This policy identifies Azure MYSQL database server for which the SSL connection is disabled. SSL connectivity helps to provide a new layer of security, by connecting database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for MySQL server'\n3. Click on the reported database, select 'Connection security' from left panel\n4. In 'SSL settings' section,\n5. Ensure 'Enforce SSL connection' is set to 'ENABLED'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or maxPasswordAge !isType Integer or maxPasswordAge < 1 or maxPasswordAge does not exist'```,"AWS IAM password policy does not have password expiration period Checks to ensure that IAM password policy has an expiration period. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. check 'Enable password expiration' and enter a password expiration period.\n4. Click on 'Apply password policy'." ```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-cloud-spanner-database' AND json.rule = state equal ignore case ready AND enableDropProtection does not exist```,"GCP Spanner Database drop protection disabled This policy identifies GCP Spanner Databases with drop protection disabled. Google Cloud Spanner is a scalable, globally distributed, and strongly consistent database service. The Spanner database drop protection feature prevents accidental deletion of databases and configurations. Without drop protection enabled, a user error or malicious action could lead to irreversible data loss and service disruption for all applications relying on that Spanner instance. It is recommended to enable drop protection on spanner database to prevent from accidental deletion. 
This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable drop protection on a cloud spanner database, use the below CLI command:\n\ngcloud spanner databases update --instance= --enable-drop-protection\n\nPlease refer to the URL mentioned below for more details on how to enable drop protection:\nhttps://cloud.google.com/spanner/docs/prevent-database-deletion#enable\n\nPlease refer to the URL mentioned below for more details on the cloud spanner update command:\nhttps://cloud.google.com/sdk/gcloud/reference/spanner/databases/update." "```config from cloud.resource where api.name= 'gcloud-compute-instances-list' and json.rule = ['metadata'].items does not exist and (status equals RUNNING and name does not start with ""gke-"")```","GCP VM Instances without any Custom metadata information VM instance does not have any Custom metadata. Custom metadata can be used for easy identification and searches. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console and from Compute, select Compute Engine.\n 2. Select the identified VM instance to see the details.\n 3. In the details page, click on Edit and navigate to Custom metadata section.\n 4. Add the appropriate Key:Value information and save.." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""logdnaat"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""logGroup"",""resourceType"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""iam-ServiceId"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```","IBM Cloud Service ID with IAM policies provide administrative privileges for Activity Tracker Service This policy identifies IBM Cloud Service ID, which has policy with administrator role permission for Activity Tracker service. When a Service ID having a policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the Service ID which is reported and you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section, click on the three dots on the right corner of a row for the policy which is having Administrator permission on 'IBM Cloud Activity Tracker' service.\n5. 
Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where api.name = 'ibm-vpc-block-storage-volume' as X; config from cloud.resource where api.name = 'ibm-key-protect-registration' as Y;filter 'not($.Y.resourceCrn equals $.X.crn)' ; show X;```,"IBM Cloud Block Storage volume for VPC is not encrypted with BYOK This policy identifies IBM Cloud Block storage volumes that are not encrypted with Bring Your Own keys(BYOK). As a best practice, it is recommended to use BYOK so that no one outside the organization has access to the root key and only authorized identities have access to maintain the lifecycle of the keys. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A storage volume can be encrypted with BYOK only at the time of creation. Please\nCreate a snapshot using the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nPlease create a storage volume from the above-created snapshot with BYOK, refer to below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-restore&interface=ui#snapshots-vpc-restore-snaphot-list-ui\n\n1. Under the 'Encryption at rest' section, select 'Key Protect'.\n2. Under 'Encryption service instance' and 'Key name', select the instance and key to be used for encryption.\n3. Click 'Create block storage volume' button. The side panel closes, and a message indicate the restored volume.\n\nPlease delete the reported block storage volume using the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-managing-block-storage&interface=ui#delete." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-db-list' AND json.rule = transparentDataEncryption is false```,"Azure SQL database Transparent Data Encryption (TDE) encryption disabled This policy identifies SQL databases in which Transparent Data Encryption (TDE) is disabled. TDE encryption performs real-time encryption and decryption of the database, related reinforcements, and exchange log records without requiring any changes to the application. It encrypts the storage of an entire database by using a symmetric key called the database encryption key. It is recommended to have TDE encryption on your SQL databases to protect the database from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to Azure Portal\n2. Click on SQL databases (Left Panel)\n3. Choose the reported database\n4. Under Security, Click on Transparent data encryption\n5. Set Data encryption to ON\n6. Click on Save." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals ""0.0.0.0/0"" and direction equals ""inbound"" and ( protocol equals ""all"" or ( protocol equals ""tcp"" and ( port_max greater than 22 and port_min less than 22 ) or ( port_max equals 22 and port_min equals 22 ))))] exists```","IBM Cloud Security Group allow all traffic on SSH port (22) This policy identifies IBM Cloud Security groups that allow all traffic on SSH port 22. 
Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Source type' as 'Any' and 'Value' as 22 (or range containing 22)\n6. Click on 'Delete'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equals Running and ((config.javaVersion exists and config.javaVersion does not equal 1.8 and config.javaVersion does not equal 11 and config.javaVersion does not equal 17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JAVA and (config.linuxFxVersion contains 8 or config.linuxFxVersion contains 11 or config.linuxFxVersion contains 17) and config.linuxFxVersion does not contain 8-jre8 and config.linuxFxVersion does not contain 11-java11 and config.linuxFxVersion does not contain 17-java17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JBOSSEAP and config.linuxFxVersion does not contain 7-java8 and config.linuxFxVersion does not contain 7-java11 and config.linuxFxVersion does not contain 7-java17) or (config.linuxFxVersion contains TOMCAT and config.linuxFxVersion does not end with 10.0-jre8 and config.linuxFxVersion does not end with 9.0-jre8 and config.linuxFxVersion does not end with 8.5-jre8 and config.linuxFxVersion does not end with 10.0-java11 and config.linuxFxVersion does not end with 9.0-java11 and config.linuxFxVersion does not end with 8.5-java11 and config.linuxFxVersion does not end with 10.0-java17 and config.linuxFxVersion does not end with 9.0-java17 and config.linuxFxVersion does not end with 8.5-java17))'```,"bbaotest2 tested This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
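Note: for the IBM Cloud Security Group SSH (port 22) policy above, the offending inbound rule can also be removed with the IBM Cloud CLI. A hedged sketch, assuming the VPC infrastructure plugin ('ibmcloud is') is installed; SECURITY_GROUP_ID and RULE_ID are placeholders taken from the reported security group:
```
# List the rules on the reported security group to locate the 0.0.0.0/0 inbound rule on port 22
ibmcloud is security-group-rules SECURITY_GROUP_ID

# Delete the overly permissive inbound rule by its ID
ibmcloud is security-group-rule-delete SECURITY_GROUP_ID RULE_ID
```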
"```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains ""eventName="" or $.X.filterPattern contains ""eventName ="") and ($.X.filterPattern does not contain ""eventName!="" and $.X.filterPattern does not contain ""eventName !="") and $.X.filterPattern contains CreateTrail and $.X.filterPattern contains UpdateTrail and $.X.filterPattern contains DeleteTrail and $.X.filterPattern contains StartLogging and $.X.filterPattern contains StopLogging) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for CloudTrail configuration changes This policy identifies the AWS regions which do not have a log metric filter and alarm for CloudTrail configuration changes. Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations. NOTE: This policy will trigger alert if you have at least one Cloudtrail with the multi trial is enabled, Logs all management events in your account and is not set with specific log metric filter and alarm. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-virtual-server-instance' AND json.rule = status equal ignore case ""running"" AND network_interfaces[?any( floating_ips is not empty)] exists```","IBM Cloud Virtual Servers for VPC instance have floating IP address This policy identifies IBM Cloud Virtual Servers for VPC instances which have floating IP assigned. If any virtual server instance has floating IP address attached, it can be reachable from public internet independent of whether its subnet is attached to a public gateway. 
It is recommended to not attach any floating IP to virtual server instances. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Virtual server instances'\n3. Select the 'Virtual server instances' reported in the alert\n4. Under 'Network Interfaces' tab, click on edit icon \n5. Under 'Floating IP' dropdown, select 'Unbind current floating IP'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_ss_finding_2 Description-abe3365a-9395-4eb7-8d0f-9b3ea0735c7b This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule ='loggingStatus.loggingEnabled is false'```,"AWS Redshift database does not have audit logging enabled Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift creates and uploads logs to Amazon S3 that capture data from the creation of the cluster to the present time. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console.\n2. Goto Amazon Redshift service\n3. On left navigation panel, click on Clusters\n4. Click on the reported cluster\n5. Click on Database tab and choose 'Configure Audit Logging'\n6. On Enable Audit Logging, choose 'Yes'\n7. Create a new s3 bucket or use an existing bucket\n8. click Save." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway' AND json.rule = ['properties.httpListeners'][*].['properties.protocol'] equals Http```,"Azure Application gateways listener that allow connection requests over HTTP This policy identifies Azure Application gateways that are configured to accept connection requests over HTTP. As a best practice, use the HTTPS protocol to encrypt the communication between the application clients and the application gateways. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'All services'\n3. Select 'Application gateways', under NETWORKING\n4. Select the Application gateways needed to be modified\n5. Select 'Listeners' under Settings\n6. To add HTTPS listener follow below step, if already HTTPS listener present jump to point 10\n7. Click on 'Add listener', enter 'Listener name', 'Frontend IP'\n8. Select 'Protocol' as HTTPS and fill in 'Https Settings' and 'Additional settings' and click on 'Add'\n9. Click on 'Rules' in the left pane and click on 'Request routing rule' and associate HTTPS listener to a rule \n10. Click on three dots on the right corner of a row containing 'Protocol' as HTTP\n11. Click on 'Delete'." 
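As a programmatic alternative to the console steps in the Redshift audit-logging record above, the following boto3 sketch enables logging to an existing S3 bucket. The region, cluster identifier, bucket name, and key prefix are hypothetical, and the target bucket must already grant the Redshift service permission to write log files.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Turn on audit logging for the reported cluster; the bucket must already
# carry a policy that allows Redshift to deliver log files to it.
redshift.enable_logging(
    ClusterIdentifier="my-redshift-cluster",
    BucketName="my-redshift-audit-logs",
    S3KeyPrefix="audit/",
)

# Verify the change took effect.
status = redshift.describe_logging_status(ClusterIdentifier="my-redshift-cluster")
print(status["LoggingEnabled"])
```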
"```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(1521,1521)""```","Alibaba Cloud Security group allow internet traffic to Oracle DB port (1521) This policy identifies Security groups that allow inbound traffic on Oracle DB port (1521) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 1521, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-route-tables' AND json.rule = ""routes[?(@.state == 'active' && @.instanceId)].destinationCidrBlock contains 0.0.0.0/0""```","AWS NAT Gateways are not being utilized for the default route This policy identifies Route Tables which have NAT instances for the default route instead of NAT gateways. It is recommended to use NAT gateways as the AWS managed NAT Gateway provides a scalable and resilient method for allowing outbound internet traffic from your private VPC subnets. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNUSED_PRIVILEGES']. Mitigation of this issue can be done as follows: To create a NAT gateway:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. In the navigation pane, choose 'NAT Gateways'\n5. Click on 'Create NAT Gateway', Specify the subnet in which to create the NAT gateway, and select the allocation ID of an Elastic IP address to associate with the NAT gateway. When you're done, Click on 'Create a NAT Gateway'. The NAT gateway displays in the console. After a few moments, its status changes to Available, after which it's ready for you to use.\n\nTo update Route Table:\nAfter you've created your NAT gateway, you must update your route tables for your private subnets to point internet traffic to the NAT gateway. We use the most specific route that matches the traffic to determine how to route the traffic.\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. In the navigation pane, choose 'Route Tables'\n5. Select the reported route table associated with your private subnet \n6. Choose 'Routes' and Click on 'Edit routes'\n7. Replace the current route that points to the NAT instance with a route to the NAT gateway\n8. Click on 'Save routes'." 
"```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(23,23)""```","Alibaba Cloud Security group allow internet traffic to Telnet port (23) This policy identifies Security groups that allow inbound traffic on Telnet port (23) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 23, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-aiplatform-training-pipeline' as Y; filter ' $.Y.trainingTaskOutputDirectory contains $.X.id '; show X;```,"GCP Storage Bucket storing GCP Vertex AI training pipeline output model This policy identifies publicly exposed GCS buckets that are used to store the GCP Vertex AI training pipeline output model. GCP Vertex AI training pipeline output models are stored in the Storage bucket. Vertex AI training pipeline output model is considered sensitive and confidential intellectual property and its storage location should be checked regularly. The storage location should be as per your organization's security and compliance requirements. It is recommended to monitor, identify, and evaluate storage location for the GCP Vertex AI training pipeline output model regularly to prevent unauthorized access and AI model thefts. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Review and validate the Vertex AI training pipeline output models are stored in the right Storage buckets. Move and/or delete the model and other related artifacts if they are found in an unexpected location. Review how the Vertex AI training pipeline was configured to output to an unauthorised/unapproved storage bucket.." ```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```,"Edited_ayiumvbvgu_ui_auto_policies_tests_name lvcskhftle_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
"```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = (ingressSecurityRules[?any((source equals 0.0.0.0/0) and (((*.destinationPortRange.min == 22 or *.destinationPortRange.max == 22) or (*.destinationPortRange.min < 22 and *.destinationPortRange.max > 22)) or (protocol equals ""all"") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))))] exists)```","OCI Security List allows all traffic on SSH port (22) This policy identifies OCI Security lists that allow unrestricted ingress access to port 22. It is recommended that no security list allows unrestricted ingress access to port 22. As a best practice, remove unfettered connectivity to remote console services, such as Secure Shell (SSH), to reduce server's exposure to risk. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Ingress Rules.\n5. If you want to add a rule, click Add Ingress Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit.." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.adminUserEnabled is true```,"Azure Container Registry with local admin account enabled This policy identifies Azure Container Registries having local admin account enabled. Enabling the admin account allows access to the registry through username and password, bypassing Microsoft Entra ID authentication. Disabling the local admin account improves security by enforcing exclusive use of Microsoft Entra ID identities, which provide centralized management, enhanced auditing, and better control over permissions. By relying solely on Microsoft Entra ID for authentication, the risk of unauthorized access through local credentials is mitigated, ensuring stronger protection for your container registry. As a security best practice, it is recommended to disable local admin account for Azure Container Registries. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to your Azure portal\n2. Navigate to 'Container registries'\n3. Select the reported Container Registry\n4. Under 'Settings' select 'Access Keys'\n5. Ensure that the 'Admin user' box is unchecked." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = viewerCertificate.certificateSource does not contain cloudfront and viewerCertificate.minimumProtocolVersion does not equal TLSv1.2_2021```,"AWS CloudFront web distribution using insecure TLS version This policy identifies AWS CloudFront web distributions which are configured with TLS versions for HTTPS communication between viewers and CloudFront. As a best practice, use recommended TLSv1.2_2021 as the minimum protocol version in your CloudFront distribution security policies. 
This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Navigate to CloudFront Distributions Dashboard\n3. Click on the reported distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. On 'Edit Distribution' page, Set 'Security Policy' to TLSv1.2_2021\n6. Click on 'Save changes'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = processing is false and domainEndpointOptions.enforceHTTPS is false```,"AWS Elasticsearch domain is not configured with HTTPS This policy identifies Elasticsearch domains that are not configured with HTTPS. Amazon Elasticsearch domains allow all traffic to be submitted over HTTPS, ensuring all communications between application and domain are encrypted. It is recommended to enable HTTPS so that all communication between the application and all data access goes across an encrypted communication channel to eliminate man-in-the-middle attacks. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the Elasticsearch dashboard\n4. Click on reported Elasticsearch domain\n5. Click on 'Actions', from drop-down choose 'Modify encryptions'\n6. In 'Modify encryptions' page, Select 'Require HTTPS for all traffic to the domain'\n7. Click on Submit." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = diagnosticSettings.value[*] size equals 0```,"Azure Key vaults diagnostics logs are disabled This policy identifies Azure Key vaults which has diagnostics logs disabled. Enabling Diagnostic Logs gives visibility into the data plane thus gives organisation ability to detect reconnaissance, authorization attempts or other malicious activity. It is recommended to enable diagnostics logs settings for Azure Key vaults. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Navigate to 'Key vaults', and select the reported key vault from the list\n3. Select 'Diagnostic settings' under 'Monitoring' section\n4. Click on '+Add diagnostic setting'\n5. Specify a 'Diagnostic settings name',\n6. Under 'Category details' section, select the type of 'Log' that you want to enable\n7. Under section 'Destination details',\na. If you select 'Send to Log Analytics', select the 'Subscription' and 'Log Analytics workspace'\nb. If you set 'Archive to storage account', select the 'Subscription', 'Storage account' and set the 'Retention (days)'\nc. If you set 'Stream to an event hub', select the 'Subscription', 'Event hub namespace', 'Event hub name' and set the 'Event hub policy name'\n8. Click on 'Save'." 
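The Elasticsearch record above can also be remediated through the API; the boto3 sketch below requires HTTPS for all traffic to the domain and pins a TLS 1.2 security policy on the endpoint. The region and domain name are hypothetical.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Require HTTPS for all traffic to the domain and enforce a TLS 1.2 policy.
es.update_elasticsearch_domain_config(
    DomainName="my-search-domain",
    DomainEndpointOptions={
        "EnforceHTTPS": True,
        "TLSSecurityPolicy": "Policy-Min-TLS-1-2-2019-07",
    },
)
```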
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_lock_waits')] does not exist or settings.databaseFlags[?(@.name=='log_lock_waits')].value equals off)""```","GCP PostgreSQL instance database flag log_lock_waits is disabled This policy identifies PostgreSQL database instances in which database flag log_lock_waits is not set. Enabling the log_lock_waits flag can be used to identify poor performance due to locking delays or if a specially-crafted SQL is attempting to starve resources through holding locks for excessive amounts of time. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Configuration options', click on 'Add item' in 'Flags' section, choose the flag 'log_lock_waits' from the drop-down menu and set the value as 'On'\nOR\nIf the flag has been set to off, Under 'Configuration options', In 'Flags' section choose the flag 'log_lock_waits' and set the value as 'On'\n6. Click Save." ```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-sagemaker-training-job' as Y; filter '$.Y.OutputDataConfig.bucketName equals $.X.bucketName'; show X;```,"AWS S3 bucket used for storing AWS Sagemaker training job output This policy identifies the AWS S3 bucket used for storing AWS Sagemaker training job output. S3 buckets hold the results and artifacts generated from training machine learning models in Sagemaker. Ensuring proper configuration and access control is crucial to maintaining the security and integrity of the training output. Improperly secured S3 buckets used for storing AWS Sagemaker training output can lead to unauthorized access, data breaches, and potential exposure of sensitive model information. It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Sagemaker training job output and ensure compliance. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the Sagemaker training job, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = 'properties.sslEnforcement contains Disabled'```,"Azure PostgreSQL database server with SSL connection disabled This policy identifies PostgreSQL database servers for which SSL enforce status is disabled. SSL connectivity helps to provide a new layer of security, by connecting database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between database server and client applications helps protect against ""man in the middle"" attacks by encrypting the data stream between the server and application. 
This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Connection security' under 'Settings' block.\n5. In 'SSL settings' block, for 'Enforce SSL connection' field, click on 'Enabled’ on the toggle button\n6. Click on 'Save' button from top menu to save the change.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(config.shieldedInstanceConfig.enableIntegrityMonitoring does not exist or config.shieldedInstanceConfig.enableIntegrityMonitoring is false)] exists```,"GCP Kubernetes cluster shielded GKE node with integrity monitoring disabled This policy identifies GCP Kubernetes cluster shielded GKE nodes that are not enabled with Integrity Monitoring. Integrity Monitoring provides active alerting for Shielded GKE nodes which allows administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Once a Node pool is provisioned, it cannot be updated to enable Integrity monitoring. You must create new Node pools within the cluster with Integrity monitoring enabled. You will also need to migrate workloads from existing non-conforming Node pools to the newly created Node pool, then delete the non-conforming pools.\n\nTo create a node pool with Integrity monitoring enabled follow the below steps,\n\n1. Log in to gcloud console\n2. Navigate to service 'Kubernetes Engine'\n3. Select the alerted cluster and click 'ADD NODE POOL'\n4. Ensure that the 'Enable integrity monitoring' checkbox is checked under the 'Shielded options' in section 'Security'\n5. Click on 'CREATE'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'type does not equal IMPORTED and (options.certificateTransparencyLoggingPreference equals DISABLED or options.certificateTransparencyLoggingPreference does not exist) and status equals ISSUED and _DateTime.ageInDays($.notAfter) < 1'```,"AWS Certificate Manager (ACM) has certificates with Certificate Transparency Logging disabled This policy identifies AWS Certificate Manager certificates in which Certificate Transparency Logging is disabled. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. Certificate Transparency Logging is used to guard against SSL/TLS certificates that are issued by mistake or by a compromised CA, some browsers require that public certificates issued for your domain can also be recorded. This policy generates alerts for certificates which have transparency logging disabled. As a best practice, it is recommended to enable Transparency logging for all certificates. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You cannot currently use the console to opt out of or into transparency logging. 
It's recommended to use the command line utility to enable transparency logging.\n\nRemediation CLI:\n1. Use the below command to list ACM certificate\n aws acm list-certificates\n2. Note the 'CertificateArn' of the reported ACM certificate\n3. Use the below command to ENABLE Certificate Transparency Logging\n aws acm update-certificate-options --certificate-arn --options CertificateTransparencyLoggingPreference=ENABLED\nwhere 'CertificateArn' is captured in the step2." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.disableLocalAuth does not exist or properties.disableLocalAuth is false)```,"Azure Cognitive Services account configured with local authentication This policy identifies Azure Cognitive Services accounts that are configured with local authentication methods instead of AD identities. Local authentication allows users to access the service using a local account and password, rather than an Azure Active Directory (Azure AD) account. Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Active Directory identities exclusively for authentication. It is recommended to disable local authentication methods on your Cognitive Services account, instead use Azure Active Directory identities. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable local authentication in Azure AI Services, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/ai-services/disable-local-auth." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dns-managed-zone' AND json.rule = 'dnssecConfig.defaultKeySpecs[*].keyType contains keySigning and dnssecConfig.defaultKeySpecs[*].algorithm contains rsasha1'```,"GCP Cloud DNS zones using RSASHA1 algorithm for DNSSEC key-signing This policy identifies the GCP Cloud DNS zones which are using the RSASHA1 algorithm for DNSSEC key-signing. DNSSEC is a feature of the Domain Name System that authenticates responses to domain name lookups and also prevents attackers from manipulating or poisoning the responses to DNS requests. So the algorithm used for key signing should be recommended one and it should not be weak. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Currently, DNSSEC key-signing can be updated using command line interface only.\n1. If you need to change the settings for a managed zone where it has been enabled, you have to turn DNSSEC off and then re-enable it with different settings. To turn off DNSSEC, run following command:\ngcloud dns managed-zones update --dnssec-state off\n2. To update key-signing for a reported managed DNS Zone, run following command:\ngcloud dns managed-zones update --dnssec-state on --ksk-algorithm --ksk-key-length --zsk-algorithm --zsk-key-length --denial-of-existence ." 
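Following on from the ACM transparency-logging record above, the CLI steps can be expressed as a short boto3 loop that finds issued, non-imported certificates with logging disabled and enables it. The region is a hypothetical choice; imported certificates are skipped because their options cannot be updated.

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Enable Certificate Transparency logging on issued, non-imported certificates.
certs = acm.list_certificates(CertificateStatuses=["ISSUED"])["CertificateSummaryList"]
for cert in certs:
    arn = cert["CertificateArn"]
    details = acm.describe_certificate(CertificateArn=arn)["Certificate"]
    if details.get("Type") == "IMPORTED":
        continue  # options cannot be changed on imported certificates
    options = details.get("Options", {})
    if options.get("CertificateTransparencyLoggingPreference") != "ENABLED":
        acm.update_certificate_options(
            CertificateArn=arn,
            Options={"CertificateTransparencyLoggingPreference": "ENABLED"},
        )
```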
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case ""Running"" AND kind contains ""functionapp"" AND kind does not contain ""workflowapp"" AND kind does not equal ""app"" AND properties.httpsOnly is false```","Azure Function App doesn't redirect HTTP to HTTPS This policy identifies Azure Function App which doesn't redirect HTTP to HTTPS. Azure Function App can be accessed by anyone using non-secure HTTP links by default. Non-secure HTTP requests can be restricted and all HTTP requests redirected to the secure HTTPS port. It is recommended to enforce HTTPS-only traffic. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'TLS/SSL settings'\n5. In 'Protocol Settings', Set 'HTTPS Only' to 'On'." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""databases-for-postgresql"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resourceGroupId"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Databases for PostgreSQL service This policy identifies IBM Cloud users with administrator role permission for Databases for PostgreSQL service. A user has full platform control as an administrator, including the ability to assign other users access policies and modify deployment passwords. If a user with administrator privilege becomes compromised, it may result in a compromised database. As a security best practice, it is advised to provide the least privilege access, such as allowing only the rights necessary to complete a task, instead of excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and under the 'Access policies' section> Click on three dots on the right corner of a row for the policy which is having Administrator permission on 'Databases for PostgreSQL' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." 
"```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ""(publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.ignorePublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicPolicy is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicPolicy is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.restrictPublicBuckets is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))""```","AWS S3 Buckets Block public access setting disabled This policy identifies AWS S3 buckets which have 'Block public access' setting disabled. Amazon S3 provides 'Block public access' setting to manage public access of AWS S3 buckets. Enabling 'Block public access' setting prevents S3 resource data being accidentally or maliciously becoming publicly accessible. It is highly recommended to enable 'Block public access' setting for all AWS s3 buckets appropriately. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. Under 'Block public access' click on 'Edit'\n6. Select 'Block all public access' checkbox\n7. Click on Save\n8. 'Confirm' the changes\n\nNote: Make sure updating 'Block public access' setting does not affect S3 bucket data access.." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy kuzde This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'aws-account-management-alternate-contact' group by account as X; filter ' AlternateContactType is not member of (""SECURITY"") ' ;```","AWS account security contact information is not set This policy identifies the AWS account which has not set security contact information. Providing dedicated contact information for security specific, AWS can directly communicate security advisories to the team responsible for handling security-related issues. Failure to specify security contact info in AWS risks missing critical advisories, leading to delayed incident response and increased vulnerability exposure. It is recommended to set security contact information to receive notifications. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Refer to the following link to add or edit the alternate contacts for any AWS account in your organization\n\nhttps://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-ec2-describe-snapshots' AND json.rule='createVolumePermissions[*].group contains all'```,"AWS EBS snapshots are accessible to public This policy identifies EC2 EBS snapshots which are accessible to public. Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. If EBS snapshots are inadvertently shared to public, any unauthorized user with AWS console access can gain access to the snapshots and gain access to sensitive data. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'EC2' service.\n4. Under the 'Elastic Block Storage', click on the 'Snapshots'.\n5. For the specific Snapshots, change the value of field 'Property' to 'Private'.\n6. Under the section 'Encryption Details', set the value of 'Encryption Enabled' to 'Yes'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 3389 or fromPort == 3389) or (toPort > 3389 and fromPort < 3389)))] exists)```,"AWS Security Group allows all traffic on RDP port (3389) This policy identifies Security groups that allow all traffic on RDP port 3389. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Group reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 3389 (or range containing 3389)." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = isLegacy is true and (properties.retentionPolicy does not exist or properties.retentionPolicy.enabled is false or (properties.retentionPolicy.enabled is true and (properties.retentionPolicy.days does not equal 0 and properties.retentionPolicy.days < 365)))```,"Azure Activity Log retention should not be set to less than 365 days This policy identifies Log profiles which have log retention set to less than 365 days. 
Log profile controls how your Activity Log is exported and retained. Since the average time to detect a breach is over 200 days, it is recommended to retain your activity log for 365 days or more in order to have time to respond to any incidents. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If a log profile already exists, you first must remove the existing log profile, and then create a log profile.\nFollow URL to create new log profile:\nhttps://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=cli#managing-legacy-log-profiles\nMake sure you set retention days to 365 or more days.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ssm-resource-compliance-summary' AND json.rule = Status equals ""NON_COMPLIANT"" and ComplianceType contains ""Patch"" and ResourceType contains ""ManagedInstance"" and (NonCompliantSummary.SeveritySummary.CriticalCount greater than 0 or NonCompliantSummary.SeveritySummary.HighCount greater than 0)```","AWS Systems Manager EC2 instance having NON_COMPLIANT patch compliance status This policy identifies if the AWS Systems Manager patch compliance status is ""NON_COMPLIANT"" with critical or high severity for managed instances. Instances labeled non-compliant might lack essential patches for security, stability, or meeting standards. Non-compliant instances pose security risks because attackers often target unpatched systems to exploit known weaknesses. As a security best practice, it's recommended to apply any missing patches to the affected instances. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remediate the non-compliant managed instances please refer to the below URL:\n\nhttps://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-remediation.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any(Effect equals Allow and Resource equals * and (Action contains kms:* or Action contains kms:Decrypt or Action contains kms:ReEncryptFrom) and Condition does not exist)] exists```,"AWS IAM policy allows decryption actions on all KMS keys This policy identifies IAM policies that allow decryption actions on all KMS keys. Instead of granting permissions for all keys, determine the minimum set of keys that users need to access encrypted data. You should grant to identities only the kms:Decrypt or kms:ReEncryptFrom permissions and only for the keys that are required to perform a task. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. 
Mitigation of this issue can be done as follows: To allow a user to encrypt and decrypt with any CMK in a specific AWS account; refer following example:\nhttps://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-one-account\n\nTo allow a user to encrypt and decrypt with any CMK in a specific AWS account and Region; refer following example:\nhttps://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-one-account-one-region\n\nTo allow a user to encrypt and decrypt with specific CMKs; refer following example:\nhttps://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-specific-cmks." "```config from cloud.resource where api.name = 'azure-dns-recordsets' AND json.rule = type contains CNAME and properties.CNAMERecord.cname contains ""web.core.windows.net"" as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.primaryEndpoints.web exists as Y; filter 'not ($.Y.properties.primaryEndpoints.web contains $.X.properties.CNAMERecord.cname) '; show X;```","Azure DNS Zone having dangling DNS Record vulnerable to subdomain takeover associated with Azure Storage account blob This policy identifies DNS records within an Azure DNS zone that point to Azure Storage Account blobs that no longer exist. A dangling DNS attack happens when a DNS record points to a cloud resource that has been deleted or is inactive, making the subdomain vulnerable to takeover. An attacker can exploit this by creating a new resource with the same name and taking control of the subdomain to serve malicious content. This allows attackers to host harmful content under your subdomain, which could lead to phishing attacks, data breaches, and damage to your reputation. The risk arises because the DNS record still references a non-existent resource, which unauthorized individuals can re-associate with their own resources. As a security best practice, it is recommended to routinely audit DNS zones and remove or update DNS records pointing to non-existing Azure Storage Account blobs. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal and search for 'DNS zones'\n2. Select 'DNS zones' from the search results\n3. Select the DNS zone associated with the reported DNS record\n4. On the left-hand menu, under 'DNS Management,' select 'Recordsets'\n5. Locate and select the reported DNS record\n6. Update or remove the DNS Record if no longer necessary." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'acl.grants[*].grantee contains AuthenticatedUsers'```,"AWS S3 buckets are accessible to any authenticated user This policy identifies S3 buckets accessible to any authenticated AWS users. Amazon S3 allows customer to store and retrieve any type of content from anywhere in the web. Often, customers have legitimate reasons to expose the S3 bucket to public, for example to host website content. However, these buckets often contain highly sensitive enterprise data which if left accessible to anyone with valid AWS credentials, may result in sensitive data leaks. 
This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. Under 'Public access', Click on 'Any AWS user' and uncheck all items\n6. Click on Save." "```config from cloud.resource where api.name = 'azure-devices-iot-hub-resource' AND json.rule = properties.provisioningState equal ignore case ""Succeeded"" as X; config from cloud.resource where api.name = 'azure-iot-security-solutions' AND json.rule = properties.status equal ignore case ""Enabled"" as Y; filter 'not $.Y.properties.iotHubs contains $.X.id'; show X;```","Azure Microsoft Defender for IoT Hub not enabled This policy identifies Azure IoT Hubs without Microsoft Defender for IoT enabled. Azure IoT Hub is a managed service that acts as a central message hub for communication between IoT applications and IoT devices. Without Microsoft Defender for IoT enabled, IoT devices and hubs are more vulnerable to security threats. This increases the risk of unauthorized access, data breaches, and compromised IoT devices, which can lead to operational and security challenges. As best practice, it is recommended to enable Microsoft Defender for IoT on your Azure IoT Hub. This enhances the security posture of your IoT solutions by providing continuous monitoring, threat detection, and automated response capabilities to protect against cyber threats. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Microsoft Defender for IoT on Azure IoT Hub follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/defender-for-iot/device-builders/quickstart-onboard-iot-hub#enable-defender-for-iot-on-an-existing-iot-hub." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and settings.databaseFlags[?(@.name=='log_min_error_statement')] does not exist""```","GCP PostgreSQL instance database flag log_min_error_statement is not set This policy identifies PostgreSQL database instances in which database flag log_min_error_statement is not set. The log_min_error_statement flag defines the minimum message severity level that are considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each severity level includes the subsequent levels. log_min_error_statement flag value changes should only be made in accordance with the organization's logging policy. Proper auditing can help in troubleshooting operational problems and also permits forensic analysis. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is recommended to set the 'log_min_error_statement' flag for PostgreSQL database as per your organization's logging policy.\n\nTo update the databse flag of GCP PostgreSQL instance, please refer to the URL given below and set log_min_error_statement flag as needed:\nhttps://cloud.google.com/sql/docs/postgres/flags#set_a_database_flag." 
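For the PostgreSQL flag record above, the same change can be applied through the Cloud SQL Admin API. The sketch below uses the discovery-based Python client and should be read as an assumption-laden example: the project and instance names are placeholders, 'ERROR' is only one value your logging policy might choose, and because a patch replaces the whole databaseFlags list, any flags already set on the instance must be merged into the body before sending it.

```python
from googleapiclient import discovery

# Uses Application Default Credentials; project and instance names are hypothetical.
sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "settings": {
        "databaseFlags": [
            # NOTE: patching replaces the full flag list -- include any
            # flags that are already configured on the instance.
            {"name": "log_min_error_statement", "value": "ERROR"},
        ]
    }
}

operation = sqladmin.instances().patch(
    project="my-gcp-project",
    instance="my-postgres-instance",
    body=body,
).execute()
print(operation["status"])
```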
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-fabric-cluster' AND json.rule = properties.provisioningState equal ignore case Succeeded and ((properties.fabricSettings[*].name does not equal ignore case ""Security"" or properties.fabricSettings[*].parameters[*].name does not equal ignore case ""ClusterProtectionLevel"") or (properties.fabricSettings[?any(name equal ignore case ""Security"" and parameters[?any(name equal ignore case ""ClusterProtectionLevel"" and value equal ignore case ""None"")] exists )] exists))```","Azure Service Fabric cluster not configured with cluster protection level security This policy identifies Service Fabric clusters that are not configured with cluster protection level security. Service Fabric provides levels of protection for node-to-node communication using a primary cluster certificate. It is recommended to set the protection level to ensure that all node-to-node messages are encrypted and digitally signed. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Service Fabric cluster'\n3. Click on the reported Service Fabric cluster\n4. Select 'Custom fabric settings' under 'Settings' from left panel \n5. Make sure a fabric settings in 'Security' section exist with 'ClusterProtectionLevel' property is set to 'EncryptAndSign'.\n\nNote: Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.powerState.code equal ignore case Running and properties.apiServerAccessProfile.enablePrivateCluster is false and (properties.apiServerAccessProfile.authorizedIPRanges does not exist or properties.apiServerAccessProfile.authorizedIPRanges is empty)```,"Azure AKS cluster configured with overly permissive API server access This policy identifies AKS clusters configured with overly permissive API server access. In Kubernetes, the API server receives requests to perform actions in the cluster such as to create resources or scale the number of nodes. To enhance cluster security and minimize attacks, the API server should only be accessible from a limited set of IP address ranges. These IP ranges allow defined IP address ranges to communicate with the API server. A request made to the API server from an IP address that is not part of these authorized IP ranges is blocked. It is recommended to configure AKS cluster with defined IP address ranges to communicate with the API server. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure AKS cluster with defined IP address ranges to communicate with the API server; refer below URL:\nhttps://docs.microsoft.com/en-us/azure/aks/api-server-authorized-ip-ranges#update-disable-and-find-authorized-ip-ranges-using-azure-portal." 
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals ""RUNNABLE"" and ipAddresses[?any( type equal ignore case ""PRIMARY"" )] exists and settings.ipConfiguration.authorizedNetworks is empty```","GCP SQL Instance with public IP address does not have authorized network configured This policy identifies GCP Cloud SQL instances with public IP addresses that do not have an authorized network configured. SQL instance can be connected securely by making use of the Cloud SQL Proxy or by adding the client's public addresses as an authorized network to the SQL instance. If the client application is connecting directly to a Cloud SQL instance on its public IP address, the client's external IP address needs to be added as an Authorized network to allow the secure connection. It is recommended to add authorized networks for your SQL instance to minimize the access vector. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If public IP is not needed for the SQL instance, it is recommeded to remove the Public IP from the instance. Any changes to the public IP should be made according to the organization needs and policies.\n\nTo remove the public IP for from a SQL instance, please refer to the URLs given below:\nFor MySQL: https://cloud.google.com/sql/docs/mysql/configure-ip#disable-public\nFor PostgreSQL: https://cloud.google.com/sql/docs/postgres/configure-ip#disable-public\nFor SQL Server: https://cloud.google.com/sql/docs/sqlserver/configure-ip#disable-public\n\nIf it is deemed that instance needs public IP, it is recommended to add restrictive Authorized Networks to limit allowed public connections to the instance.\n\nTo configure authorized networks for a SQL instance, please refer to the URLs given below:\nFor MySQL: https://cloud.google.com/sql/docs/mysql/authorize-networks\nFor PostgreSQL: https://cloud.google.com/sql/docs/postgres/authorize-networks\nFor SQL Server: https://cloud.google.com/sql/docs/sqlserver/authorize-networks." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-image' AND json.rule = iamPolicy.bindings[?any( members contains ""allAuthenticatedUsers"" )] exists```","GCP OS Image is publicly accessible This policy identifies GCP OS Images that are publicly accessible. Custom GCP OS images are user-created operating system images tailored to specific needs and configurations. Making these images public can expose sensitive data, proprietary software, and security vulnerabilities. This can lead to unauthorized access, data breaches, and system exploitation, compromising your infrastructure's security and integrity. It is recommended to keep OS images private unless required for organizational needs. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to the GCP console\n2. Navigate to 'Compute Engine' and then 'Images'\n4. Select the reported image using the check box\n5. Click on the 'PERMISSIONS' tab in the right bar\n6. Filter for 'allAuthenticatedUsers'\n7. 
Click on the 'Remove principal' button (bin icon)\n8. Select 'Remove allAuthenticatedUsers from all roles on this resource. They may still have access via inherited roles.'\n9. Click 'Remove'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.publicNetworkAccess equal ignore case Enabled and firewallRules.value[?any(properties.startIpAddress equals 0.0.0.0 and properties.endIpAddress equals 255.255.255.255)] exists```,"Azure PostgreSQL Database Server Firewall rule allow access to all IPV4 address This policy identifies Azure PostgreSQL Database Server which has Firewall rule that allow access to all IPV4 address. Having a firewall rule with start IP being 0.0.0.0 and end IP being 255.255.255.255 would allow access to SQL server from any host on the internet. It is highly recommended not to use this type of firewall rule in any PostgreSQL Database Server. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1.Login to Azure Portal\n2.Click on 'All services' on left Navigation\n3.Click on 'Azure Database for PostgreSQL servers' under Databases\n4.Click on reported server instance\n5.Click on 'Connection security' under Settings\n6.Delete the rule which has 'Start IP' as 0.0.0.0 and 'End IP' as 255.255.255.255 under 'Firewall rule name' section\n7.Click on 'Save'." "```config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' as X; config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' as Y; filter "" $.X.sslPolicy does not exist or ($.Y.profile equals COMPATIBLE and $.Y.selfLink contains $.X.sslPolicy) or ( ($.Y.profile equals MODERN or $.Y.profile equals CUSTOM) and $.Y.minTlsVersion does not equal TLS_1_2 and $.Y.selfLink contains $.X.sslPolicy ) or ( $.Y.profile equals CUSTOM and ( $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_GCM_SHA256 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_GCM_SHA384 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_3DES_EDE_CBC_SHA ) and $.Y.selfLink contains $.X.sslPolicy ) ""; show X;```","GCP Load Balancer HTTPS proxy permits SSL policies with weak cipher suites This policy identifies GCP HTTPS Load Balancers that permit SSL policies with weak cipher suites. GCP default SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the widest range of insecure cipher suites. To prevent usage of insecure features, SSL policies should use at least TLS 1.2 with the MODERN profile; or the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or a CUSTOM profile that does not support any of the following features: TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: The 'GCP default' SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the broadest range of insecure cipher suites and is not modifiable. 
If this SSL policy is attached to the target HTTPS Proxy Load Balancer, updating the proxy with a more secured SSL policy is recommended.\n\nTo create a new SSL policy, refer to the following URL:\nhttps://cloud.google.com/load-balancing/docs/use-ssl-policies#creating_ssl_policies\n\nTo modify the existing insecure SSL policy attached to the Target HTTPS Proxy:\n1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'load balancing components view' hyperlink at bottom of page to view target proxies\n5. Go to 'TARGET PROXIES' tab and Click on the reported HTTPS target proxy\n6. Note the 'Load balancer' name.\n7. Click on the hyperlink under 'In use by'\n8. Note the 'External IP address'\n9. Select Load Balancing (Left Panel) and click on the HTTPS load balancer with same name as previously noted 'Load balancer' name.\n10. In frontend section, consider the rule where 'IP:Port' matches the previously noted 'External IP address'.\n11. Click on the 'SSL Policy' of the rule. This will take you to the alert causing SSL policy.\n12. Click on 'EDIT'\n13. Set 'Minimum TLS Version' to TLS 1.2 and set 'Profile' to Modern or Restricted.\n14. Alternatively, if you use the profile 'Custom', make sure that the following features are disabled:\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n15. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING and $.X.status.state does not contain TERMINATED and $.X.status.state does not contain TERMINATED_WITH_ERRORS) and ($.X.securityConfiguration equals $.Y.name) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.LocalDiskEncryptionConfiguration exists and $.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.LocalDiskEncryptionConfiguration.EncryptionKeyProviderType does not equal Custom)'; show X;```,"AWS EMR cluster is not enabled with local disk encryption using Custom key provider This policy identifies AWS EMR clusters that are not enabled with local disk encryption using Custom key provider. Applications using the local file system on each cluster instance for intermediate data throughout workloads, where data could be spilled to disk when it overflows memory. With Local disk encryption at place, data at rest can be protected. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown.\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. Under 'Local disk encryption', check the box 'Enable at-rest encryption for local disks'.\n8. Select 'Custom' Key provider type from the 'Key provider type' dropdown list.\n9. Follow the below link for creating the custom key,\n\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-data-encryption-options.html\n10. Click on 'Create' button.\n11. On the left menu of EMR dashboard Click 'Clusters'.\n12. 
Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n13. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n14. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n15. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n16. Once the new cluster is set up verify its working and terminate the source cluster.\n17. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted.\n18. Click on the 'Terminate' button from the top menu.\n19. On the 'Terminate clusters' pop-up, click 'Terminate'.." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = profile.family equal ignore case ""application"" and operating_status equal ignore case ""online"" and pools[?any( protocol does not equal ignore case ""https"" )] exists```","IBM Cloud Application Load Balancer for VPC uses HTTP backend pool instead of HTTPS (SSL & TLS) This policy identifies IBM Cloud Application Load Balancer for VPC, which has been using http backend pools instead of HTTPS. HTTPS pool uses TLS(SSL) to encrypt normal HTTP requests and responses. It is highly recommended to use application load balancers with HTTPS backend pools for additional security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Load balancers'\n3. Select the 'Load balancer' reported in the alert\n4. Under 'Back-end pools' tab, click on three dots on the right corner of a row containing back-end pool with protocol besides HTTPS.\n5. In the 'Edit back-end pool' screen, under 'Protocol' dropdown, select 'HTTPS'.\n6. Click on 'Save'." "```config from cloud.resource where api.name= 'aws-cloudtrail-describe-trails' AND json.rule = 'isMultiRegionTrail is true and includeGlobalServiceEvents is true' as X; config from cloud.resource where api.name= 'aws-cloudtrail-get-trail-status' AND json.rule = 'status.isLogging equals true' as Y; config from cloud.resource where api.name= 'aws-cloudtrail-get-event-selectors' AND json.rule = eventSelectors[?any( dataResources[?any( type contains ""AWS::S3::Object"" and values contains ""arn:aws:s3"")] exists and readWriteType is member of (""All"",""ReadOnly"") and includeManagementEvents is true)] exists as Z; filter '($.X.trailARN equals $.Z.trailARN) and ($.X.name equals $.Y.trail)'; show X; count(X) less than 1```","AWS S3 Buckets with Object-level logging for read events not enabled This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-get-stages' AND json.rule = 'clientCertificateId does not exist or clientCertificateId equals None'```,"AWS API Gateway endpoints without client certificate authentication API Gateway can generate an SSL certificate and use its public key in the backend to verify that HTTP requests to your backend system are from API Gateway. 
This allows your HTTP backend to control and accept only requests originating from Amazon API Gateway, even if the backend is publicly accessible. Note: Some backend servers may not support SSL client authentication as API Gateway does and could return an SSL certificate error. For a list of incompatible backend servers, see Known Issues. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: These instructions assume you already completed Generate a Client Certificate Using the API Gateway Console. If not please generate a client certificate by following below steps and then Configure an API to Use SSL Certificates.\nSteps to Generate a Client Certificate Using the API Gateway Console:\n1. Login to AWS Console\n2. Go to API Gateway console\n3. In the main navigation pane (Left Panel), choose Client Certificates.\n4. From the Client Certificates pane, choose Generate Client Certificate.\n5. Optionally, for Edit, choose to add a descriptive title for the generated certificate and choose Save to save the description. API Gateway generates a new certificate and returns the new certificate GUID, along with the PEM-encoded public key.\n\nSteps to Configure an API to Use SSL Certificates:\n1. Login to AWS Console\n2. Go to API Gateway console\n3. In the API Gateway console, create or open an API for which you want to use the client certificate. Make sure the API has been deployed to a stage (Left Panel).\n4. Choose Stages under the selected API and then choose a stage (Left Panel).\n5. In the Stage Editor panel, select a certificate under the Client Certificate section.\n6. Click Save Changes." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any(tags does not exist and attributes[?any( value equal ignore case ""service"" and name equal ignore case ""serviceType"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name equal ignore case ""region"")] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for all Identity and Access enabled services This policy identifies IBM Cloud Users, where policy with administrator role permission across all Identity and Access enabled services. Users with administrator role on All Identity and Access enabled services can access all services or resources in the account. If a user with administrator privilege becomes compromised, it may result in compromised resources in the account. As a security best practice, granting the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions is recommended. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. 
In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and under the 'Access policies' section> Click on three dots on the right corner of a row for the policy which is having Administrator permission on 'All Identity and Access enabled services' \n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireSymbols does not exist or requireSymbols is false'```,"Alibaba Cloud RAM password policy does not have a symbol This policy identifies Alibaba Cloud accounts that do not have a symbol in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Symbols'\n6. Click on 'OK'\n7. Click on 'Close'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' as X; count(X) greater than 0```,"GCP API key is created for a project This policy identifies GCP projects where API keys are created. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. Note: There are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API. If a business requires API keys to be used, then the API keys should be secured using appropriate IAM policies. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Use of API keys is generally considered as less secure authentication mechanism and should be avoided. A secure authentication mechanism should be used. Follow the below mentioned URL to evaluate an alternate, suitable authentication mechanism:\nhttps://cloud.google.com/endpoints/docs/openapi/authentication-method\n\nTo delete an API Key:\n1. Log in to google cloud console\n2. Navigate to section 'Credentials', under 'APIs & Services'.\n3. To delete API Key, go to 'API Keys' section, click the Actions button (three dots) in front of key name.\n4. Click on ‘Delete API key’ button.\n5. In the 'Delete credential' dialog, click 'DELETE' button.\n\nNote: Deleting API keys might break dependent applications. It is recommended to thoroughly review and evaluate the impact of API key before deletion.." 
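A hedged sketch of reviewing a project's API keys programmatically, as an alternative to the console steps in the GCP API key policy above. Assumptions: the google-api-python-client package is installed, the API Keys API (apikeys.googleapis.com) is enabled, application-default credentials are available, and 'my-project' is a placeholder project ID; the delete call is left commented out because removing a key can break dependent applications.

```python
# Hedged sketch: list (and optionally delete) API keys in a project via the API Keys API.
from googleapiclient.discovery import build

apikeys = build("apikeys", "v2")  # assumes application-default credentials
parent = "projects/my-project/locations/global"  # placeholder project ID

response = apikeys.projects().locations().keys().list(parent=parent).execute()
for key in response.get("keys", []):
    print(key["name"], key.get("displayName", ""))
    # After confirming no application depends on the key, it could be removed with:
    # apikeys.projects().locations().keys().delete(name=key["name"]).execute()
```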
"```config from cloud.resource where api.name = 'aws-glue-job' AND json.rule = Command.BucketName exists and Command.BucketName contains ""aws-glue-assets-"" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains ""aws-glue-assets-"" as Y; filter 'not ($.X.Command.BucketName equals $.Y.bucketName)' ; show X;```","AWS Glue Job using the shadow resource bucket for script location This policy identifies that the AWS Glue Job using the bucket for script location is not managed from the current location. This could potentially be using the shadow resource bucket for script location. A shadow resource bucket is an unauthorized S3 bucket posing security risks. AWS Glue is a service utilized to automate the extraction, transformation, and loading (ETL) processes, streamlining data preparation for analytics and machine learning. When a job is created using the Visual ETL tool, Glue automatically creates an S3 bucket with a predictable name pattern 'aws-glue-assets-accountid-region'. An attacker could create the S3 bucket in any region before the victim uses Glue ETL, causing the victims Glue service to write files to the attacker-controlled bucket. This vulnerability allows an attacker to inject any code into the Glue job of the victim, resulting in remote code execution (RCE). It is recommended to verify the expected bucket owner and update the AWS Glue jobs script location and enforce the aws:ResourceAccount condition in the policy of the AWS Glue Job to check that the AWS account ID of the S3 bucket used by AWS Glue Job according to your business requirements. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the script location for an AWS Glue Job:\n\n1. Sign in to the AWS Management Console and open the AWS Glue Studio console at https://console.aws.amazon.com/gluestudio/.\n2. In the navigation pane, choose 'ETL jobs'.\n3. Select the desired AWS Glue Job and choose 'Edit Job' from the 'Actions' drop-down.\n4. In the 'Job Details' window, under 'Advanced properties', verify that the 'Script path' and 'Script filename' are authorized and managed according to your business requirements.\n5. Move the required script to a new S3 bucket as per your requirements.\n6. In the AWS Glue Studio console, go to the 'Job details' tab and update the 'Script filename' and 'Script path' parameters to reflect the new S3 location.\n7. Choose 'Save'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-db-cluster' AND json.rule = status contains available and deletionProtection is false```,"AWS RDS cluster delete protection is disabled This policy identifies RDS clusters for which delete protection is disabled. Enabling delete protection for these RDS clusters prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the Amazon RDS dashboard \n4. Click on the DB clusters\n5. Select the reported DB cluster\n6. Click on the 'Modify' button\n7. 
In Modify DB cluster page, In the 'Additional configuration' section, Check the box 'Enable deletion protection' for Deletion protection.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sqs-get-queue-attributes' AND json.rule = attributes.KmsMasterKeyId does not exist and attributes.SqsManagedSseEnabled is false```,"AWS SQS Queue not configured with server side encryption This policy identifies AWS SQS queues which are not configured with server side encryption. Enabling server side encryption would encrypt all messages that are sent to the queue and the messages are stored in encrypted form. Amazon SQS decrypts a message only when it is sent to an authorised consumer. It is recommended to enable server side encryption for AWS SQS queues in order to protect sensitive data in the event of a data breach or malicious users gaining access to the data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To configure server side encryption for AWS SQS queue follow below URL as required:\n\nTo configure Amazon SQS key (SSE-SQS) for a queue:\nhttps://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sqs-sse-queue.html\n\nTo configure AWS Key Management Service key (SSE-KMS) for a queue:\nhttps://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sse-existing-queue.html." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case ""PowerState/running"" and ['properties.storageProfile'].['osDisk'].['osType'] contains ""Windows"" and ['properties.securityProfile'].['securityType'] equal ignore case ""TrustedLaunch"" and ['properties.securityProfile'].['uefiSettings'].['secureBootEnabled'] is false```","Azure Virtual Machine (Windows) secure boot feature is disabled This policy identifies Virtual Machines (Windows) that have secure boot feature disabled. Enabling Secure Boot on supported Windows virtual machines provides mitigation against malicious and unauthorised changes to the boot chain. Secure boot helps protect your VMs against boot kits, rootkits, and kernel-level malware. So it is recommended to enable Secure boot for Azure Windows virtual machines. NOTE: This assessment only applies to trusted launch enabled Windows virtual machines. You can't enable trusted launch on existing virtual machines that were initially created without it. To know more, refer https://docs.microsoft.com/azure/virtual-machines/trusted-launch?WT.mc_id=Portal-Microsoft_Azure_Security This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Virtual machines dashboard\n3. Click on the reported Virtual machine\n4. Select 'Configuration' under 'Settings' from left panel \nNOTE: Enabling Secure Boot will trigger an immediate SYSTEM REBOOT.\n5. On the 'Configuration' page, check 'Secure boot' under 'Security type' section\n6. Click 'Save'." 
"```config from cloud.resource where api.name = 'aws-neptune-db-cluster' AND json.rule = Status equals ""available"" as X; config from cloud.resource where api.name = 'aws-neptune-db-cluster-parameter-group' AND json.rule = parameters.neptune_enable_audit_log.ParameterValue exists and parameters.neptune_enable_audit_log.ParameterValue equals 0 as Y; filter '($.X.EnabledCloudwatchLogsExports.member does not contain ""audit"") or $.X.DBClusterParameterGroup equals $.Y.DBClusterParameterGroupName' ; show X;```","AWS Neptune DB cluster does not publish audit logs to CloudWatch Logs This policy identifies Amazon Neptune DB clusters where audit logging is disabled or audit logs are not published to Amazon CloudWatch Logs. Neptune DB integrates with Amazon CloudWatch for performance metric gathering and analysis, supporting CloudWatch Alarms. While Neptune DB provides customizable audit logs for monitoring database operations, these logs are not automatically sent to CloudWatch Logs, limiting centralized monitoring and analysis of database activities. It is recommended to configure the Neptune DB cluster to enable audit logs and publish audit logs to CloudWatch logs. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a custom parameter group if the cluster has only the default parameter group use the following steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. Choose 'Create'. The 'Create cluster parameter group' window appears. \n4. In the 'Parameter group family' list, select a 'DB parameter group family'.\n5. In the 'Parameter group type', Select 'DB cluster parameter group'.\n6. In the 'New cluster parameter group name', enter the name of the new DB cluster parameter group. \n7. In the Description box, enter a description for the new DB cluster parameter group. \n8. Click 'Create'. \n\nTo modify the custom DB cluster parameter group to enable query logging, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. In the list, choose the above-created parameter group or the reported resource custom parameter group that you want to modify. \n4. Change the value of the 'neptune_enable_audit_log' parameter to '1' in the value drop-down and click on tick mark for enabling audit logs.\n\nTo modify an Amazon Neptune DB Cluster to use the custom parameter group, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster. \n4. Under 'Additional settings', select the above-created cluster parameter group from the DB parameter group dropdown. \n5. Choose 'Continue' and check the summary of modifications. \n6. On the confirmation page, review your changes. If they are correct, choose 'Modify cluster' to save your changes. \n\nTo modify an Amazon Neptune DB cluster for enabling export logs to cloudwatch, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. 
In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster.\n4. Scroll down to the Log exports section, and choose 'Enable' for the 'Audit logs'.\n5. Choose 'Continue'.\n6. Choose 'Modify cluster'.." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains ""resource.type ="" or $.X.filter contains ""resource.type="") and ($.X.filter does not contain ""resource.type !="" and $.X.filter does not contain ""resource.type!="") and $.X.filter contains ""iam_role"" and ($.X.filter contains ""protoPayload.methodName="" or $.X.filter contains ""protoPayload.methodName ="") and ($.X.filter does not contain ""protoPayload.methodName!="" and $.X.filter does not contain ""protoPayload.methodName !="") and $.X.filter contains ""google.iam.admin.v1.CreateRole"" and $.X.filter contains ""google.iam.admin.v1.DeleteRole"" and $.X.filter contains ""google.iam.admin.v1.UpdateRole""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for IAM custom role changes This policy identifies the GCP account which does not have a log metric filter and alert for IAM custom role changes. Monitoring role creation, deletion and updating activities will help in identifying over-privileged roles at early stages. It is recommended to create a metric filter and alarm to detect activities related to the creation, deletion and updating of custom IAM Roles. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type=""iam_role"" AND protoPayload.methodName = ""google.iam.admin.v1.CreateRole"" OR protoPayload.methodName=""google.iam.admin.v1.DeleteRole"" OR protoPayload.methodName=""google.iam.admin.v1.UpdateRole""\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and (config.minTlsVersion equals ""1.0"" or config.minTlsVersion equals ""1.1"")```","Azure Logic app using insecure TLS version This policy identifies Azure Logic apps that are using insecure TLS version. 
Azure Logic apps configured to use insecure TLS versions are at risk as they may be vulnerable to security threats due to known vulnerabilities, weaker encryption methods, and support for compromised hash functions. Logic apps using TLS 1.2 or higher will secure communication and protect against potential cyber attacks. As a security best practice, it is recommended to configure Logic apps with TLS 1.2 or higher to ensure secure communication. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under the 'Settings' section, click on 'Configuration'\n5. Under 'General settings' tab, Set 'Minimum Inbound TLS Version' to '1.2' or higher.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.supportsHttpsTrafficOnly is true and properties.minimumTlsVersion does not equal TLS1_2```,"Azure Storage Account using insecure TLS version This policy identifies Azure Storage Accounts which are using an insecure TLS version. Azure Storage Account uses Transport Layer Security (TLS) for communication with client applications. As a best security practice, use a newer TLS version as the minimum TLS version for Azure Storage Account. Currently, Azure Storage Account supports TLS 1.2 version which resolves the security gap from its preceding versions. https://docs.microsoft.com/en-us/azure/storage/common/transport-layer-security-configure-minimum-version?tabs=portal This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and Click on the reported storage account\n3. Under the 'Settings' menu, click on 'Configuration'\n4. Under 'Minimum TLS version' select 'Version 1.2' from the drop down\n5. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = processing is false and (nodeToNodeEncryptionOptions.enabled is false or nodeToNodeEncryptionOptions.enabled does not exist)```,"AWS OpenSearch node-to-node encryption is disabled This policy identifies AWS OpenSearch domains for which node-to-node encryption is disabled. Each OpenSearch domain resides within a dedicated VPC and, by default, traffic within the VPC is unencrypted. Enabling node-to-node encryption provides an additional security layer by making use of TLS encryption for all communications between Amazon OpenSearch Service instances in a cluster. For more information, please follow the URL given below, https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ntn.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Go to https://aws.amazon.com, and then choose Sign In to the Console\n1. Under Analytics, choose Amazon OpenSearch Service\n2. Choose your domain\n3. Choose Actions, Edit security configuration\n4. Under Encryption section, check Node-to-node encryption\n5. 
Click Save changes button\n\nFor more details on node-to-node encryption for an Amazon OpenSearch Service Domain, follow below mentioned URL:\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/ntn.html\n\nNote: Node-to-node encryption is supported only from OpenSearch 6.0 or later. To upgrade older versions of AWS OpenSearch please refer to the URL given below,\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/version-migration.html." "```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains ""protoPayload.methodName ="" or $.X.filter contains ""protoPayload.methodName="") and ($.X.filter does not contain ""protoPayload.methodName !="" and $.X.filter does not contain ""protoPayload.methodName!="") and $.X.filter contains ""SetIamPolicy"" and $.X.filter contains ""protoPayload.serviceData.policyDelta.auditConfigDeltas:*""'; show X; count(X) less than 1```","GCP Log metric filter and alert does not exist for Audit Configuration Changes This policy identifies the GCP accounts which do not have a log metric filter and alert for Audit Configuration Changes. Configuring metric filter and alerts for Audit Configuration Changes ensures recommended state of audit configuration and hence, all the activities in project are audit-able at any point in time. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nprotoPayload.methodName=""SetIamPolicy"" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:*\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-automation-account' AND json.rule = identity does not exist or identity.type equal ignore case ""None""```","Azure Automation account is not configured with managed identity This policy identifies Automation accounts that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in a code increases the threat surface in case of exploitation and also managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to have the managed identity to your Automation account. 
This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable managed identity on an existing Azure Automation account, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/automation/quickstarts/enable-managed-identity." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule ='loggingStatus.loggingEnabled is false'```,"Bobby Copy of AWS Redshift database does not have audit logging enabled Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift creates and uploads logs to Amazon S3 that capture data from the creation of the cluster to the present time. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to AWS Console.\n2. Goto Amazon Redshift service\n3. On left navigation panel, click on Clusters\n4. Click on the reported cluster\n5. Click on Database tab and choose 'Configure Audit Logging'\n6. On Enable Audit Logging, choose 'Yes'\n7. Create a new s3 bucket or use an existing bucket\n8. click Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = authType equal ignore case NONE```,"AWS Lambda function URL AuthType set to NONE This policy identifies AWS Lambda functions which have function URL AuthType set to NONE. AuthType determines how Lambda authenticates or authorises requests to your function URL. When AuthType is set to NONE, Lambda doesn't perform any authentication before invoking your function. It is highly recommended to set AuthType to AWS_IAM for Lambda function URL to authenticate via AWS IAM. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to 'Configuration' tab\n7. Select 'Function URL'\n8. Click on 'Edit'\n9. Set 'Auth type' to 'AWS_IAM'\n10. Click on 'Save'\n ." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy xzypd This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
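For the AWS Lambda function URL policy above ('AWS Lambda function URL AuthType set to NONE'), a minimal boto3 sketch of the same remediation. The function name is a placeholder; confirm that callers can sign requests with SigV4 before switching the auth type.

```python
# Minimal sketch: switch a Lambda function URL from unauthenticated access to AWS_IAM.
import boto3

lambda_client = boto3.client("lambda")
function_name = "my-function"  # placeholder

config = lambda_client.get_function_url_config(FunctionName=function_name)
if config["AuthType"] == "NONE":
    lambda_client.update_function_url_config(FunctionName=function_name, AuthType="AWS_IAM")
    print(f"{function_name}: function URL auth type set to AWS_IAM")
```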
"```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""logdnaat"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""logGroup"",""resourceType"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Activity Tracker Service This policy identifies IBM Cloud users with overly permissive Activity Tracker Administrative role. When a user having policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and under the 'Access policies' section, click on three dots on the right corner of a row for the policy which is having Administrator permission on 'IBM Cloud Activity Tracker ' Service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." "```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ""(publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.ignorePublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicPolicy is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicPolicy is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.restrictPublicBuckets is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))""```","Copy of AWS S3 Buckets Block public access setting disabled This policy identifies AWS S3 buckets which have 'Block public access' setting disabled. 
Amazon S3 provides 'Block public access' setting to manage public access of AWS S3 buckets. Enabling 'Block public access' setting prevents S3 resource data being accidentally or maliciously becoming publicly accessible. It is highly recommended to enable 'Block public access' setting for all AWS s3 buckets appropriately. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. Under 'Block public access' click on 'Edit'\n6. Select 'Block all public access' checkbox\n7. Click on Save\n8. 'Confirm' the changes\n\nNote: Make sure updating 'Block public access' setting does not affect S3 bucket data access.." "```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains ""compute@developer.gserviceaccount.com"" and roles[*] contains ""roles/editor"" as X; config from cloud.resource where api.name = 'gcloud-cloud-function-v2' AND json.rule = status equals ACTIVE and serviceConfig.serviceAccountEmail contains ""compute@developer.gserviceaccount.com"" as Y; filter ' $.X.user equals $.Y.serviceConfig.serviceAccountEmail '; show Y;```","GCP Cloud Run function is using default service account with editor role This policy identifies GCP Cloud Run functions that are using the default service account with the editor role. GCP Compute Engine Default service account is automatically created upon enabling the Compute Engine API. This service account is granted the IAM basic Editor role by default, unless explicitly disabled. Assigning default service account with the editor role to cloud run functions could lead to privilege escalation. Granting minimal access rights helps in promoting a better security posture. Following the principle of least privileges, it is recommended to avoid assigning default service account with the editor role to cloud run functions. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Cloud Run functions' service\n3. Click on the name of the cloud run function on which alert is generated\n4. Click 'EDIT' at top\n5. Expand 'Runtime, build, connections and security settings' and select 'RUNTIME' tab\n6. Under 'Runtime service account', select an appropriate 'Service account' using the dropdown\n7. Click 'NEXT' at bottom\n8. Click 'DEPLOY' at bottom." "```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-virtual-server-instance' AND json.rule = status equal ignore case ""running"" AND network_interfaces[?any( allow_ip_spoofing is true )] exists```","IBM Cloud Virtual Servers for VPC instance has interface with IP-spoofing enabled This policy identifies IBM Cloud Virtual Servers for VPC instances which has any interfaces with IP-spoofing enabled. If any interface has IP-spoofing enabled, there is a chance of bad actors trying to modify the source address in IP packets to invoke a DDoS attack. It is recommended that IP-spoofing is disabled for all interfaces of a virtual server for VPC This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console \n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Virtual server instances'\n3. Select the 'Virtual server instance' reported in the alert \n4. Under 'Network interfaces' tab, click on edit icon and set 'Allow IP spoofing' to disabled for each network interface. \n5. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"Critical - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." ```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and ((access_key_1_last_used_date != N/A and _DateTime.ageInDays(access_key_1_last_used_date) > 90) or (access_key_1_last_used_date == N/A and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 90))) or (access_key_2_active is true and ((access_key_2_last_used_date != N/A and _DateTime.ageInDays(access_key_2_last_used_date) > 90) or (access_key_2_last_used_date == N/A and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 90)))'```,"Critical - AWS access keys not used for more than 90 days This policy identifies IAM users for which access keys are not used for more than 90 days. Access keys allow users programmatic access to resources. However, if any access key has not been used in the past 90 days, then that access key needs to be deleted (even though the access key is inactive). This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To delete the reported AWS User access key follow below mentioned URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/delete-access-key/." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and not(customerManagedKey contains cryptoKeys)```,"rgade-config-policy-01-28-2025 rgade-config-policy-01-28-2025 This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
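For the 'AWS access keys not used for more than 90 days' policy above, a hedged boto3 sketch of the same check for a single IAM user. 'some-user' is a placeholder, and the delete call is left commented out so keys are only removed after review.

```python
# Hedged sketch: flag active access keys for a user with no activity in the last 90 days.
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
user = "some-user"  # placeholder
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for meta in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    if meta["Status"] != "Active":
        continue
    last_used = iam.get_access_key_last_used(AccessKeyId=meta["AccessKeyId"])
    # Fall back to the key's creation date when the key has never been used.
    last_activity = last_used["AccessKeyLastUsed"].get("LastUsedDate", meta["CreateDate"])
    if last_activity < cutoff:
        print(f"Stale active key {meta['AccessKeyId']} (last activity {last_activity:%Y-%m-%d})")
        # iam.delete_access_key(UserName=user, AccessKeyId=meta["AccessKeyId"])
```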
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any((Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith sagemaker:)] exists```,"AWS SageMaker notebook instance IAM policy overly permissive to all traffic This policy identifies SageMaker notebook instances IAM policies that are overly permissive to all traffic. It is recommended that the SageMaker notebook instances should be granted access restrictions so that only authorized users and applications have access to the service. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_id-based-policy-examples.html This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to AWS console\n2. Goto IAM Services\n3. Click on 'Policies' in left hand panel\n4. Search for the Policy for which the Alert is generated and click on it\n5. Under Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'SageMaker' Service, click to expand and perform following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes.." "```config from cloud.resource where api.name = 'ibm-event-streams-instance' AND json.rule = resource_plan_id is not member of ('ibm.eventstreams.lite', 'ibm.eventstreams.standard' ) as X; config from cloud.resource where api.name = 'ibm-key-protect-registration' as Y;filter 'not($.Y.resourceCrn equals $.X.crn)' ; show X;```","IBM Cloud Event Stream is not encrypted with customer-managed key This policy identifies IBM Cloud Event streams that are not encrypted with a customer-managed key. The customer-managed key allows customers to ensure no one outside their organization has access to the key. And customers will have control over the lifecycle of their customer root keys where they can create, rotate, and delete those keys. As a security best practice, it is recommended to use a customer-managed key, which provides a significant level of control over the keys when used for encryption. Note: This policy applies to Enterprise plan Event streams only. This is applicable to ibm cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: An Event stream can be encrypted with customer-managed keys only at the time of creation. Please follow the below instructions to encrypt an event stream with customer-managed keys while creating a new event stream.\n\n1. Log in to the IBM Cloud Console.\n2. Click on 'Catalog' on the title bar.\n3. Select 'Event Streams' from the list of products, and in the create page select the pricing plan as 'Enterprise'.\n4. Under the 'Encryption' section, select a key protect instance under the 'Select a Key Management Service instance' dropdown.\n5. Under the 'Select a disk encryption key' dropdown, select a key other than the Automatic disk encryption key.\n6. Select other configurations as per the requirements.\n7. Click on 'Create'.\n\nMake sure to transfer all the configurations/connections to the newly created Event stream before deleting the non-encrypted Event stream. 
Delete the vulnerable Event stream using the below instructions:\n\n1. Log in to the IBM Cloud Console.\n2. Go to Menu > 'Resource List', From the 'Integration' section, select the reported event stream.\n3. Click on 'Actions' button, then click on 'Delete service'.\n4. Click on 'OK' to confirm.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ACTIVE and serviceConfig.ingressSettings equals ALLOW_ALL```,"GCP Cloud Function with overly permissive network ingress settings This policy identifies GCP Cloud Functions that have overly permissive network ingress settings. This includes both Cloud Functions v1 and Cloud Functions v2. Ingress settings control whether resources outside of your Google Cloud project or VPC Service Controls perimeter can invoke a function. With overly permissive ingress setting, all inbound requests to invoke function are allowed, both from the public and from resources within the same project. Restrictive network ingress settings for cloud functions in GCP minimize the risk of unauthorized access and attacks by limiting inbound traffic to trusted sources. This approach enhances security, prevents malicious activities, and ensures only legitimate traffic reaches your applications. It is recommended to restrict the public traffic and allow traffic from VPC networks in the same project or traffic through the Cloud Load Balancer. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings' drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. In 'Ingress settings', select either 'Allow internal traffic only' or 'Allow internal traffic and traffic from Cloud Load Balancing'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = iamConfiguration.uniformBucketLevelAccess.enabled contains false```,"GCP cloud storage bucket with uniform bucket-level access disabled This policy identifies GCP storage buckets for which the uniform bucket-level access is disabled. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either. It is recommended that uniform bucket-level access is enabled on Cloud Storage buckets. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. log in to GCP Console\n2. Navigate to 'Storage'\n3. Click on 'Browser' to get the list of storage buckets\n4. Search for the alerted bucket and click on the bucket name\n5. From the top menu go to 'PERMISSION' tab\n6. Under the section 'Access control' click on 'SWITCH TO UNIFORM'\n7. On the pop-up window select 'uniform'\n8. Click on 'Save'." 
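For the uniform bucket-level access policy above, a minimal sketch using the google-cloud-storage client library, assuming suitable credentials are available; the bucket name is a placeholder, and existing object ACLs should be reviewed first since they stop being evaluated once uniform access is enabled.

```python
# Minimal sketch: enable uniform bucket-level access on a Cloud Storage bucket.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")  # placeholder bucket name

if not bucket.iam_configuration.uniform_bucket_level_access_enabled:
    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.patch()
    print(f"Uniform bucket-level access enabled on {bucket.name}")
```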
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-msk-cluster' AND json.rule = state equal ignore case active and encryptionInfo.encryptionInTransit.clientBroker contains PLAINTEXT or encryptionInfo.encryptionInTransit.inCluster is false```,"AWS MSK cluster encryption in transit is not enabled This policy identifies AWS Managed Streaming for Apache Kafka clusters having in-transit encryption in a disabled state. In-transit encryption secures data while it's being transferred between brokers. Without it, there's a risk of data interception during transit. It is recommended to enable in-transit encryption among brokers within the cluster. This ensures that all data exchanged within the cluster is encrypted, effectively protecting it from potential eavesdropping and unauthorized access. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable in-transit encryption both within the cluster and client broker communication has to be configured with TLS.\n\nTo enable TLS encryption for client-broker communication, follow the below steps:\n1. Sign in to the AWS Management Console and open the Amazon MSK console at https://console.aws.amazon.com/msk/.\n2. On the navigation menu, choose 'Clusters', and select the MSK cluster for which you want to enable or edit in-transit encryption.\n3. Under the 'Actions' dropdown, select 'Edit security settings'. \n4. Under 'Encryption', please uncheck the 'Plaintext' option and make sure the 'TLS encryption' option is selected for  'Between clients and brokers' encryption configuration.\n5. Click on 'Update' to save changes.\n\nEnabling TLS encryption for within-cluster communication involves creating a new cluster. To create a new cluster, please follow the below steps:\n1. Sign in to the AWS Management Console and open the Amazon MSK console at https://console.aws.amazon.com/msk/.\n2. On the navigation menu, choose 'Clusters', then select 'Create cluster'.\n3. Under the 'Create Cluster' page, please configure the cluster as per the requirements.\n4. At Step 3, under 'Encryption', select 'TLS encryption' for the 'Between clients and brokers' checkbox.\n5. Select 'TLS encryption' for the 'Within the cluster' checkbox.\n6. After providing the required configuration in the remaining steps, Under step 5, click on 'Create cluster'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus contains available and dbclusterIdentifier does not exist and deletionProtection is false```,"AWS RDS instance delete protection is disabled This policy identifies RDS instances for which delete protection is disabled. Enabling delete protection for these RDS instances prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the Amazon RDS dashboard \n4. Click on the DB instances\n5. Select the reported DB instance\n6. Click on the 'Modify' button\n7. 
In Modify DB instance page, In the 'Additional configuration' section, Check the box 'Enable deletion protection' for Deletion protection.." ```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = 'password_enabled is true and (access_key_1_active is true or access_key_2_active is true)'```,"AWS IAM user has both Console access and Access Keys This policy identifies IAM users who have both Console access and Access Keys. When an IAM user is created, the Administrator can assign either Console access or Access Keys or both. Ideally the Console access should be assigned to Users and Access Keys for system / API applications, but not both to the same IAM user. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to the 'IAM' service.\n2. Identify the reported IAM user.\n3. In 'Security credentials' tab check for presence of Access Keys.\n4. Based on the requirement and company policy, either delete the Access Keys or remove the Console access for this IAM user.." ```config from cloud.resource where api.name = 'aws-cloudwatch-log-group' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '($.X.kmsKeyId does not exist ) or ($.X.kmsKeyId exists and $.X.kmsKeyId equals $.Y.keyMetadata.arn)'; show X;```,"AWS CloudWatch Log groups not encrypted by Customer Managed Key (CMK) This policy identifies AWS CloudWatch Log groups that are encrypted using the default KMS key instead of CMK (Customer Managed Key) or using a CMK that is disabled. A CloudWatch Log Group is a collection of log streams that share the same retention, monitoring, and access control settings. Encrypting with a Customer Managed Key (CMK) provides additional control over key rotation, management, and access policies compared to the default encryption. As a security best practice, using CMK to encrypt your CloudWatch Log Groups is advisable as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the encryption key for a AWS CloudWatch Log groups:\n\nUse the associate-kms-key command as follows:\n\naws logs associate-kms-key --log-group-name --kms-key-id \n\nNote: When using customer-managed CMKs to encrypt AWS CloudWatch Log groups, Ensure authorized entities have access to the key and its associated operations.." "```config from cloud.resource where cloud.type = 'azure' and api.name= 'azure-vm-list' AND json.rule = powerState contains ""PowerState/running"" and ['properties.networkProfile'].['networkInterfaces'][*].['publicIpAddressId'] exists and ['properties.diagnosticsProfile'].['bootDiagnostics'].['enabled'] is true```","Azure Virtual machine configured with public IP and serial console access This policy identifies Azure Virtual machines with public IP configured with serial console access (via Boot diagnostic setting). The Microsoft Azure serial console feature provides access to a text-based console for virtual machines (VMs) running either Linux or Windows. 
Serial Console connects to the ttyS0 or COM1 serial port of the VM instance, providing access independent of the network or operating system state. An attacker can leverage a virtual machine that has a public IP assigned and the serial console enabled for remote code execution and privilege escalation. It is recommended to restrict public access to the reported virtual machine and disable/restrict serial console access. NOTE: Serial Console can be disabled for an individual Virtual machine instance by boot diagnostics only. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable/restrict serial console access on the reported VM instance, follow the below URL:\nhttps://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/serial-console-enable-disable." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains iam.gserviceaccount.com AND (roles[*] contains admin or roles[*] contains Admin or roles[*] contains roles/editor or roles[*] contains roles/owner)```,"GCP IAM Service account has admin privileges This policy identifies service accounts which have admin privileges. Applications use the service account to make requests to the Google API of a service so that users aren't directly involved. It is recommended not to use admin access for service accounts. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to GCP Portal\n2. Go to IAM & admin (Left panel)\n3. Choose the reported member and click on the edit icon\n4. Delete the Admin role and provide an appropriate role according to the requirement.\n5. Click Save." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals RUNNABLE and databaseVersion contains POSTGRES and ( settings.databaseFlags[?any( name equals ""log_statement"" )] does not exist or settings.databaseFlags[?any( name equals ""log_statement"" and value equals ""all"" or value equals ""none"" )] exists)```","GCP PostgreSQL instance database flag log_statement is not set appropriately This policy identifies PostgreSQL database instances in which database flag log_statement is not set appropriately. If log_statement is not set to a correct value, it may lead to logging too many or too few statements. Setting log_statement to align with your organization's security and logging policies facilitates later auditing and review of database activities. It is recommended to choose an appropriate value (ddl or mod) for the flag log_statement. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_statement' from the drop-down menu and set the value as ddl or mod\nOR\nIf the flag has been set to other than ddl or mod, Under 'Customize your instance', In 'Flags' section choose the flag 'log_statement' and set the value as ddl or mod\n6. Click on 'DONE' and then 'SAVE'." 
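The log_statement remediation above can also be applied through the Cloud SQL Admin API. A minimal sketch, assuming the google-api-python-client library, Application Default Credentials, and placeholder project/instance names; note that patching databaseFlags replaces the whole flag list, so any other flags in use would need to be included:

```python
# Minimal sketch (not part of the policy text): set the log_statement flag on a
# Cloud SQL for PostgreSQL instance via the Cloud SQL Admin API.
# Project and instance names below are placeholders.
from googleapiclient import discovery


def set_log_statement(project: str, instance: str, value: str = "ddl") -> None:
    service = discovery.build("sqladmin", "v1beta4")
    body = {"settings": {"databaseFlags": [{"name": "log_statement", "value": value}]}}
    # Patch merges settings, but the databaseFlags list is replaced as a whole,
    # so include any other flags that should remain set on the instance.
    service.instances().patch(project=project, instance=instance, body=body).execute()


if __name__ == "__main__":
    set_log_statement("my-project", "my-postgres-instance", "ddl")
```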
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/iam.serviceAccountActor) or (roles[*] contains roles/iam.serviceAccountUser) or (roles[*] contains roles/iam.serviceAccountTokenCreator)'```,"GCP IAM user with service account privileges Checks to ensure that IAM users don't have service account privileges. Adding any user as service account actor will enable these users to have service account privileges. Adding only authorized corporate IAM users as service account actors will make sure that your information is secure. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & Admin (Left Panel)\n3. Select IAM \n4. From the list of users, identify the users with Service Account Actor, Service Account User or Service Account Token Creator roles\n5. Remove these user roles by clicking on Delete icon for any unauthorized user." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-msk-cluster' AND json.rule = brokerNodeGroupInfo.connectivityInfo.publicAccess.type does not equal ""DISABLED""```","AWS MSK cluster public access is enabled This policy identifies the Amazon Managed Streaming for Apache Kafka (Amazon MSK) Cluster is configured with public access enabled. Amazon MSK provides the capability to enable public access to the brokers of MSK clusters. When the AWS MSK Cluster is configured for public access, there is a potential risk of data being exposed to the public. To mitigate the risk of unauthorized access and to adhere to compliance requirements, it is advisable to disable public access on the AWS MSK cluster. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Management Console, and open the Amazon MSK console at https://console.aws.amazon.com/msk/home.\n2. In the Navigation panel, select 'Clusters' under the 'MSK Clusters' section.\n3. Click on the cluster that is reported.\n4. Choose the 'Properties' tab.\n5. In the 'Network settings' section, click on the 'Edit' dropdown.\n6. Choose 'Edit public access'.\n7. In the 'Edit public access' dialog, uncheck the 'Public access' checkbox to disable public access.\n8. Click 'Save changes' to apply the changes.." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy pkgmu This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals ""ACTIVE"" AND shieldedInstanceConfig.enableSecureBoot is false```","GCP Vertex AI Workbench Instance has Secure Boot disabled This policy identifies GCP Vertex AI Workbench instances with Secure Boot disabled. Secure Boot is a security feature that ensures only trusted, digitally signed software runs during the boot process, protecting against advanced threats such as rootkits and bootkits. 
By verifying the integrity of the bootloader and operating system, Secure Boot prevents unauthorized software from compromising the system at startup. Without Secure Boot, instances are vulnerable to persistent malware and unauthorized code that could compromise the system deeply. It is recommended to enable Secure Boot for Vertex AI Workbench instances. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on Secure Boot'\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu." "```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-rds-describe-db-instances' AND json.rule= 'engine is not member of (""sqlserver-ex"") and dbinstanceStatus equals available and dbiResourceId does not equal null' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.storageEncrypted does not exist or $.X.storageEncrypted is false or ($.X.storageEncrypted is true and $.X.kmsKeyId exists and $.Y.keyMetadata.keyState equals Disabled and $.X.kmsKeyId equals $.Y.keyMetadata.arn)'; show X;```","AWS RDS instance is not encrypted This policy identifies AWS RDS instances which are not encrypted. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up and manage databases. Amazon allows customers to turn on encryption for RDS which is recommended for compliance and security reasons. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Amazon RDS instance can only be encrypted at the time of DB instance creation. So to resolve this alert, create a new DB instance with encryption and then migrate all required DB instance data from the reported DB instance to this newly created DB instance.\nTo create RDS DB instance with encryption, follow the instructions mentioned in below reference link based on your Database vendor:\nhttp://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = 'policyAvailable is true and denyUnencryptedUploadsPolicies[*] is empty and sseAlgorithm equals None'```,"AWS S3 buckets do not have server side encryption Customers can protect the data in S3 buckets using the AWS server-side encryption. If the server-side encryption is not turned on for S3 buckets with sensitive data, in the event of a data breach, malicious users can gain access to the data. NOTE: Do NOT enable this policy if you are using 'Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C).' This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. 
Login to the AWS Console and navigate to the 'S3' service\n2. Click on the reported S3 bucket\n3. Click on the 'Properties' tab\n4. Under the 'Default encryption' section, choose encryption option either AES-256 or AWS-KMS based on your requirement.\nFor more information about Server-side encryption,\nDefault encryption:\nhttps://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html\nPolicy based encryption:\nhttps://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html." "```config from cloud.resource where api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = primary.state does not equal ""ENABLED"" as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' $.X.name equals $.Y.encryption.defaultKmsKeyName'; show Y;```","GCP Storage bucket using a disabled CMEK This policy identifies GCP Storage buckets that are using a disabled CMEK. CMEK (Customer-Managed Encryption Keys) for GCP buckets allows you to use your own encryption keys to secure data stored in Google Cloud Storage. If a CMEK defined for a GCP bucket is disabled, the data in that bucket becomes inaccessible, as the encryption keys are no longer available to decrypt the data. This can lead to data loss and operational disruption. If not properly managed, CMEK can also introduce risks such as accidental key deletion or mismanagement, which could compromise data availability and security. It is recommended to review the state of CMEK and enable it to keep the data in the bucket accessible. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate Cloud Storage Buckets page\n3. Click on the reported bucket\n4. Go to 'Configuration' tab\n5. Under 'Default encryption key', click on the key name\n6. Select the appropriate key version\n7. Click 'ENABLE'and then click 'ENABLE' in the pop up." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-step-functions-statemachine' AND json.rule = loggingConfiguration.level does not equal ignore case ""ALL""```","AWS Step Function state machines logging disabled This policy identifies AWS Step Function state machines with logging disabled. AWS Step Functions uses state machines to define and execute workflows that coordinate the components of distributed applications and microservices. Step Functions logs state machine executions to Amazon CloudWatch Logs for debugging and monitoring purposes. It is recommended to enable logging on the Step Function state machine to maintain reliability, availability, and performance. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging on the Step Function state machine, follow the below steps:\n\n1. Log into the AWS console and navigate to the Step Function dashboard\n2. On the state machine page, select the reported state machine\n3. Click on 'Edit'\n4. Select 'Config' to edit the configuration\n5. Under the 'Config' tab, under the 'Logging' section, set 'Log level' to 'ALL'\n6. Click on 'Save'.." 
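The Step Functions logging remediation above can also be done programmatically. A minimal boto3 sketch with placeholder ARNs (not taken from the policy text):

```python
# Minimal sketch (not part of the policy text): set a state machine's log level to ALL
# so executions are logged to CloudWatch Logs. The ARNs below are placeholders.
import boto3

sfn = boto3.client("stepfunctions")

sfn.update_state_machine(
    stateMachineArn="arn:aws:states:us-east-1:111122223333:stateMachine:example",
    loggingConfiguration={
        "level": "ALL",
        "includeExecutionData": False,
        "destinations": [
            {
                "cloudWatchLogsLogGroup": {
                    "logGroupArn": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/vendedlogs/states/example:*"
                }
            }
        ],
    },
)
```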
"```config from cloud.resource where api.name = 'aws-connect-instance' AND json.rule = InstanceStatus equals ""ACTIVE"" and attributes[?any( AttributeType equals ""CONTACTFLOW_LOGS"" and Value equals ""false"" )] exists```","AWS Connect instance not configured with contact flow logs This policy identifies the Amazon Connect instance configured with CONTACTFLOW_LOGS set to false. In Amazon Connect, Enabling CONTACTFLOW_LOGS in Amazon Connect is crucial as it allows real-time logging of contact flow executions to CloudWatch. This helps debug, monitor, and optimize customer interactions by tracking steps, conditions, and errors. It is recommended to enable CONTACTFLOW_LOGS to enhance monitoring and ensure adherence to security policies and regulations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging for AWS Connect instance, perform the following actions:\n1. Sign into AWS console and Open the Amazon Connect console at https://console.aws.amazon.com/connect/.\n2. On the instances page, choose the instance alias that is reported.\n3. In the navigation pane, choose 'Flows'.\n4. Navigate to the Flow logs section and select 'Enable Flow logs' and choose 'Save'.\nNote: Logs are generated only for flows that include a 'Set logging behavior block' with logging set to enabled.." ```config from cloud.resource where api.name = 'gcloud-logging-sinks-list' AND json.rule = 'destination.bucket exists' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' AND json.rule = (retentionPolicy.isLocked does not exist or retentionPolicy.isLocked is false) as Y; filter '($.X.destination.bucket contains $.Y.name)'; show Y;```,"GCP Log bucket retention policy is not configured using bucket lock This policy identifies GCP log buckets for which retention policy is not configured using bucket lock. It is recommended to configure the data retention policy for cloud storage buckets using bucket lock to permanently prevent the policy from being reduced or removed in case the system is compromised by an attacker or a malicious insider. Note: Locking a bucket is an irreversible action. Once you lock a bucket, you cannot remove the retention policy from the bucket or decrease the retention period for the policy. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set a retention policy on a bucket, please refer to the URL given below:\nhttps://cloud.google.com/storage/docs/using-bucket-lock#set-policy\n\nTo lock a bucket, please refer to the URL given below:\nhttps://cloud.google.com/storage/docs/using-bucket-lock#lock-bucket." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = 'user does not equal and _DateTime.ageInDays(user_creation_time) > 30 and (password_last_used equals N/A or password_last_used equals no_information or _DateTime.ageInDays(password_last_used) > 30) and ((access_key_1_last_used_date equals N/A or _DateTime.ageInDays(access_key_1_last_used_date) > 30) and (access_key_2_last_used_date equals N/A or _DateTime.ageInDays(access_key_2_last_used_date) > 30))'```,"AWS Inactive users for more than 30 days This policy identifies users who are inactive for more than 30 days. 
Inactive user accounts are an easy target for attackers because any activity on the account will largely go unnoticed. NOTE: As an exception, this policy does not apply to SSO login users and Root users. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNUSED_PRIVILEGES']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to IAM.\n2. Identify the reported user and make sure that the user has a legitimate reason to be inactive for such an extended period.\n3. Delete the user account if the user no longer needs access to the console or no longer exists.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= 'sourceRanges[*] contains 0.0.0.0/0 and allowed[?any(ports contains _Port.inRange(25,25) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)) )] exists'```","harish-GCP Firewall rule allows all traffic on SMTP port (25) This policy identifies GCP Firewall rules which allow all inbound traffic on SMTP port (25). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access on SMTP port (25) be allowed only from specific IP addresses. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: If the reported Firewall rule indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cdn-endpoint' AND json.rule = properties.customDomains[?any( properties.customHttpsProvisioningState does not equal Enabled )] exists```,"Azure CDN Endpoint Custom domains is not configured with HTTPS This policy identifies Azure CDN Endpoint Custom domains which are not configured with HTTPS. Enabling HTTPS would allow sensitive data to be delivered securely via TLS/SSL encryption when it is sent across the internet. It is recommended to enable HTTPS in Azure CDN Endpoint Custom domains, which provides additional security and protects your web applications from attacks. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to 'CDN profiles'\n3. Choose each reported 'CDN Endpoint' under each 'CDN profile'\n4. Under 'Settings' section, Click on 'Custom domains'\n5. Select the 'Custom domain' for which you need to enable HTTPS\n6. Under 'Configure' select 'On' for 'Custom domain HTTPS'\n7. Select 'Certificate management type' and 'Minimum TLS version'\n8. Click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-route53-domain' AND json.rule = statusList[*] does not contain ""clientTransferProhibited""```","AWS Route53 Domain transfer lock is not enabled This policy identifies AWS Route53 domains for which the transfer lock is not enabled. Route 53 Domain Transfer Lock is a security feature that prevents unauthorized domain transfers by locking the domain at the registrar level. 
The feature sets the ""clientTransferProhibited"" flag, which is a registry setting enabled by the registrar to force all transfer requests to be rejected automatically. If Route 53 Domain Transfer Lock is disabled, your domain is vulnerable to unauthorized transfers, which can lead to service disruptions, data breaches, reputational damage, and financial loss. It is recommended to enable Route 53 Domain Transfer Lock to prevent unauthorized domain transfers and protect your domain from potential security threats and disruptions. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To lock a domain to prevent unauthorized transfer to another registrar, perform the following actions:\n\n1. Sign in to the AWS Management Console and open the Route 53 console at https://console.aws.amazon.com/route53/.\n2. In the navigation pane, choose 'Registered Domains'.\n3. Choose the name of the domain that is reported.\n4. On the 'Details' section, in the 'Actions' dropdown, choose 'Turn on transfer lock' to turn the transfer lock on.\n5. You can navigate to the 'Requests' page to see the progress of your request.." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(53,53)""```","Alibaba Cloud Security group allow internet traffic to DNS port (53) This policy identifies Security groups that allow inbound traffic on DNS port (53) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 53, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'." ```config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' as X; config from cloud.resource where api.name = 'aws-ec2-describe-subnets' as Y; filter 'not $.X.vpcId equals $.Y.vpcId'; show X;```,"AWS VPC not in use This policy identifies VPCs which are not in use. These VPC resources might be unintentionally launched and AWS also imposes a limit to the number of VPCs allowed per region. So it is recommended to either delete or use effectively such VPCs that do not have resources attached to them. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. Click on 'Your VPCs' and Choose the reported VPC\n5. 
If you want to use the reported VPC, associate subnets to the VPC; or if you want to delete the VPC, click on 'Actions' and choose 'Delete VPC' from the dropdown." ```config from cloud.resource where api.name = 'alibaba-cloud-action-trail' AND json.rule = ossBucketName equals 42```,"Tamir policy This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-lightsail-instance' AND json.rule = state.name contains ""running"" and networking.ports[?any( accessDirection equals inbound and (cidrs contains ""0.0.0.0/0"" or ipv6Cidrs contains ""::/0"") and (((toPort == 22 or fromPort == 22) or (toPort > 22 and fromPort < 22)) or ((toPort == 3389 or fromPort == 3389) or (toPort > 3389 and fromPort < 3389))))] exists```","AWS Lightsail Instance does not restrict traffic on admin ports This policy identifies AWS Lightsail instances having network rules with unrestricted access (""0.0.0.0/0"" or ""::/0"") on port 22 or 3389. The firewall in Amazon Lightsail manages inbound traffic permitted to connect to your instance via its public IP address, controlling access to specific IPs and ports. Leaving administrative ports open to unrestricted access increases the risk of unauthorized access, such as brute-force attacks, which can compromise the instance and expose sensitive data. It is recommended to limit access to specific IP addresses in the firewall rules to reduce unauthorized access attempts. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict the traffic on the AWS Lightsail instance firewall rule to a known IP/CIDR range, perform the following actions:\n\n1. Sign in to the Lightsail console https://lightsail.aws.amazon.com/.\n2. In the left navigation pane, choose Instances.\n3. Choose the reported instance.\n4. Choose the Networking tab on your instance's management page.\n5. Click on the Edit icon on the rule that contains the unrestricted access (""0.0.0.0/0"" or ""::/0"") on port 22 or 3389 under the 'IPv4 Firewall' or 'IPv6 Firewall' section\n6a. Click on 'Restrict to IP address' to update the Source IP address to the trusted CIDR range\nor \n6b. Remove the rule which has the 'Source' value as 0.0.0.0/0 or ::/0 and the 'Port Range' value as 22 or 3389 or (a range containing 3389 or 22) by clicking the delete icon.\n\nNote: Before making any changes, please check the impact on your applications/services.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (elasticsearchClusterConfig.dedicatedMasterEnabled is false or elasticsearchClusterConfig.dedicatedMasterEnabled does not exist)'```,"AWS Elasticsearch domain has Dedicated master set to disabled This policy identifies Elasticsearch domains for which Dedicated master is disabled in your AWS account. If dedicated master nodes are provided, they handle the management tasks, and cluster nodes can easily manage index and search requests from different types of workloads, making them more resilient in production. Dedicated master nodes improve environmental stability by freeing all the management tasks from the cluster data nodes. This is applicable to aws cloud and is considered a low severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Elasticsearch Service Dashboard\n4. Choose reported Elasticsearch domain\n5. Click on 'Edit Domain'\n6. On the 'Edit domain' page,\n a. Check 'Enable dedicated master' checkbox to enable dedicated master nodes for the current cluster.\n b. Select the 'Instance type' based on your ES cluster requirements from the dropdown list.\n Note: As dedicated master nodes do not hold any data nor process any search and query requests, the instance node for this role typically does not require a large amount of CPU/RAM memory.\n c. Select the 'Number of master nodes' from dropdown list to allocate dedicated master nodes.\n7. Click on 'Submit'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-account-password-policy' AND json.rule = 'requireNumbers contains false and requireSymbols contains false and expirePasswords contains false and allowUsersToChangePassword contains false and requireLowercaseCharacters contains false and requireUppercaseCharacters contains false and maxPasswordAge does not exist and passwordReusePrevention does not exist and minimumPasswordLength==6'```,"Copy of AWS IAM Password policy is unsecure Checks to ensure that IAM password policy is in place for the cloud accounts. As a security best practice, customers must have strong password policies in place. This policy ensures password policies are set with all following options: - Minimum Password Length - At least one Uppercase letter - At least one Lowercase letter - At least one Number - At least one Symbol/non-alphanumeric character - Users have permission to change their own password - Password expiration period - Password reuse - Password expiration requires administrator reset This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to AWS Console and navigate to the 'IAM' Service\n2. Click on 'Account Settings'\n3. Under 'Password Policy', select and set all the options\n4. Click on 'Apply password policy'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'network equals default'```,"GCP Kubernetes Engine Clusters using the default network This policy identifies Google Kubernetes Engine (GKE) clusters that are configured to use the default network. Because GKE uses this network when creating routes and firewalls for the cluster, as a best practice define a network configuration that meets your security and networking requirements for ingress and egress traffic, instead of using the default network. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You cannot change the network attached to an existing GKE cluster. To resolve this alert, create a new cluster with a custom network that meets your requirements, then migrate the cluster data from the reported cluster to this newly created GKE cluster and delete the reported GKE cluster.\n\nTo create new Kubernetes engine cluster with the custom network, perform the following:\n1. Login to GCP Portal\n2. 
Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on CREATE CLUSTER button\n5. Set new cluster parameters as per your requirement and make sure 'Network' is set to other than 'default' under Networking section.\n6. Click on Save\n\nTo delete reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on reported Kubernetes cluster\n5. Click on the DELETE button\n6. On 'Delete a cluster' popup dialog, Click on DELETE to confirm the deletion of the cluster.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case Ready and ['sqlServer'].['properties.privateEndpointConnections'] is empty```,"Azure SQL Database server not configured with private endpoint This policy identifies Azure SQL database servers that are not configured with private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for SQL. Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses which includes IP addresses within Azure. It is recommended to create private endpoint for secure communication for your Azure SQL database. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure Portal \n2. Navigate to 'SQL Servers' and select the reported server\n3. Open the Private endpoint settings\n4. Click on Add Private endpoint to create and add a private endpoint\n\nRefer to below link for step by step process:\nhttps://learn.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-sql-portal." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/iam.serviceAccountAdmin) and (roles[*] contains roles/iam.serviceAccountUser)'```,"GCP IAM Users have overly permissive service account privileges This policy identifies IAM users which have overly permissive service account privileges. Any user should not have Service Account Admin and Service Account User, both roles assigned at a time. Built-in/Predefined IAM role Service Account admin allows the user to create, delete, manage service accounts. Built-in/Predefined IAM role Service Account User allows the user to assign service accounts to Apps/Compute Instances. It is recommended to follow the principle of 'Separation of Duties' ensuring that one individual does not have all the necessary permissions to be able to complete a malicious action or meant to help avoid security or privacy incidents and errors. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & Admin (Left Panel)\n3. Select IAM\n4. From the list of users, choose the reported IAM user\n5. Click on Edit permissions pencil icon\n6. For member having 'Service Account admin' and 'Service Account User' roles granted/assigned, Click on the Delete Bin icon to remove the role from a member." 
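The role-removal step above can also be scripted against the Cloud Resource Manager API. A minimal sketch, assuming google-api-python-client, Application Default Credentials, and placeholder project and member values that are not taken from the policy text:

```python
# Minimal sketch (not part of the policy text): remove the Service Account Admin role
# from a member that also holds Service Account User. Project ID and member are placeholders.
from googleapiclient import discovery

PROJECT_ID = "my-project"
MEMBER = "user:alice@example.com"
ROLE_TO_REMOVE = "roles/iam.serviceAccountAdmin"

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Strip the member from the offending role binding.
for binding in policy.get("bindings", []):
    if binding["role"] == ROLE_TO_REMOVE and MEMBER in binding.get("members", []):
        binding["members"].remove(MEMBER)

# Drop bindings that are now empty and write the updated policy back.
policy["bindings"] = [b for b in policy.get("bindings", []) if b.get("members")]
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```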
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and properties.privateEndpointConnections[*] does not exist```,"Azure Key vault Private endpoint connection is not configured This policy identifies Key vaults that are not configured with a private endpoint connection. Azure Key vault private endpoints can be configured using Azure Private Link. Private Link allows users to access an Azure Key vault from within the virtual network or from any peered virtual network. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. It is recommended to configure a Private endpoint connection to Key vaults. For more details: https://docs.microsoft.com/en-us/azure/key-vault/general/private-link-service This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer following URL for configuring Private endpoints on your Key vaults:\nhttps://docs.microsoft.com/en-us/azure/key-vault/general/private-link-service\n\nNOTE: The Key vault associated with private endpoints should not be allowing access from all networks in Firewalls and virtual networks section, make sure the Selected networks are configured with restrictive Virtual networks access. Otherwise the security provided by private endpoints will not be satisfied.." ```config from cloud.resource where api.name = 'azure-sql-server-list' AND json.rule = '(serverBlobAuditingPolicy does not exist or serverBlobAuditingPolicy is empty or serverBlobAuditingPolicy.properties.state equals Disabled or serverBlobAuditingPolicy.properties.retentionDays does not exist or (serverBlobAuditingPolicy.properties.storageEndpoint is not empty and serverBlobAuditingPolicy.properties.state equals Enabled and serverBlobAuditingPolicy.properties.retentionDays does not equal 0 and serverBlobAuditingPolicy.properties.retentionDays less than 90))' as X; config from cloud.resource where api.name = 'azure-sql-db-list' AND json.rule = '(blobAuditPolicy does not exist or blobAuditPolicy is empty or blobAuditPolicy.properties.retentionDays does not exist or (blobAuditPolicy.properties.storageEndpoint is not empty and blobAuditPolicy.properties.state equals Enabled and blobAuditPolicy.properties.retentionDays does not equal 0 and blobAuditPolicy.properties.retentionDays less than 90))' as Y; filter '$.Y.blobAuditPolicy.id contains $.X.sqlServer.name'; show Y;```,"Azure SQL Database with Auditing Retention less than 90 days This policy identifies SQL Databases that have Auditing Retention less than 90 days. Audit Logs can be used to check for anomalies and gives insight into suspected breaches or misuse of information and access. If server auditing is enabled, it always applies to the database. The database will be audited, regardless of the database auditing settings. It is recommended to configure SQL database Audit Retention to be greater than or equal to 90 days and leave the database-level auditing disabled for all databases. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If server auditing is enabled, it always applies to the database. The database will be audited, regardless of the database auditing settings. 
It is recommended that you enable only the server-level auditing setting and leave the database-level auditing disabled for all databases.\n\nTo configure the Server level audit setting:\n1. Log in to the Azure Portal\n2. Go to SQL servers\n3. Choose each reported DB server\n4. Select 'Auditing', and verify that 'Enable Azure SQL Auditing' is set\n5. If Storage is selected, expand 'Advanced properties'\n6. Set the Retention (days) setting to greater than 90 days or 0 for unlimited retention.\nNote: The default value for the retention period is 0 (unlimited retention).\n7. Click on 'Save'\n\nIt is recommended to avoid enabling both server auditing and database blob auditing together, unless you want to use a different storage account, retention period or Log Analytics Workspace for a specific database, or want to use audit event types or categories for a specific database that differ from the rest of the databases on the server.\nTo configure the Database level audit setting:\n1. Log in to the Azure Portal\n2. Go to SQL databases\n3. Choose each reported DB\n4. Select 'Auditing', and verify that 'Enable Azure SQL Auditing' is set\n5. If Storage is selected, expand 'Advanced properties'\n6. Set the Retention (days) setting to greater than 90 days or 0 for unlimited retention.\nNote: The default value for the retention period is 0 (unlimited retention).\n7. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-eks-describe-cluster' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[*].ipv4Ranges[*] contains 0.0.0.0/0 or ipPermissions[*].ipv6Ranges[*] contains ::/0) as Y; filter '$.X.resourcesVpcConfig.securityGroupIds contains $.Y.groupId or $.X.resourcesVpcConfig.clusterSecurityGroupId contains $.Y.groupId'; show Y;```,"AWS EKS cluster security group overly permissive to all traffic This policy identifies EKS cluster Security groups that are overly permissive to all traffic. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict traffic solely from known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. If the reported Security Group indeed needs to restrict all traffic, follow the instructions below:\n1. Log in to the AWS console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on 'Inbound Rules'\n5. Remove the rule which has the 'Source' value as 0.0.0.0/0 or ::/0." 
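The inbound-rule cleanup above can also be automated. A minimal boto3 sketch that revokes 0.0.0.0/0 and ::/0 ingress rules from a placeholder security group; as with the console steps, check the impact on your applications before running anything like this:

```python
# Minimal sketch (not part of the policy text): revoke inbound security group rules
# whose source is 0.0.0.0/0 or ::/0. The security group ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2")
GROUP_ID = "sg-0123456789abcdef0"

sg = ec2.describe_security_groups(GroupIds=[GROUP_ID])["SecurityGroups"][0]

for perm in sg["IpPermissions"]:
    open_v4 = [r for r in perm.get("IpRanges", []) if r.get("CidrIp") == "0.0.0.0/0"]
    open_v6 = [r for r in perm.get("Ipv6Ranges", []) if r.get("CidrIpv6") == "::/0"]
    if not (open_v4 or open_v6):
        continue
    # Rebuild only the matching portion of the rule so other sources are left intact.
    revoke = {k: perm[k] for k in ("IpProtocol", "FromPort", "ToPort") if k in perm}
    if open_v4:
        revoke["IpRanges"] = open_v4
    if open_v6:
        revoke["Ipv6Ranges"] = open_v6
    ec2.revoke_security_group_ingress(GroupId=GROUP_ID, IpPermissions=[revoke])
```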
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Outbound and (sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (destinationAddressPrefix equals * or destinationAddressPrefix equals Internet))] exists```,"Azure Network Security Group with overly permissive outbound rule This policy identifies NSGs with overly permissive outbound rules allowing outgoing traffic from source type any or source with public IP range. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic to known sources on authorized protocols and ports. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Azure Portal\n2. On left Navigation, Click on All services\n3. Under NETWORKING, click on Network security groups\n4. Choose the reported resource\n5. Under SETTINGS, Click on Outbound security rules\n6. Identify the row which matches conditions mentioned below:\na) Source: Any, public IPs\nb) Destination: Any\nc) Action: Allow\n7. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-recovery-service-vault' AND json.rule = properties.provisioningState equals Succeeded and (identity does not exist or identity.type equal ignore case ""None"")```","Azure Recovery Services vault is not configured with managed identity This policy identifies Recovery Services vaults that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in a code increases the threat surface in case of exploitation and also managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to have the managed identity to your Recovery Services vault. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Recovery Services vaults dashboard\n3. Click on the reported Recovery Services vault\n4. Under Setting section, Click on 'Identity'\n5. Configure either 'System assigned' or 'User assigned' managed identity based on your requirement.\n6. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-ecs-container-instance' AND json.rule = status equals ACTIVE as X; config from cloud.resource where api.name = 'aws-ec2-describe-volumes' AND json.rule = state contains in-use and encrypted is false as Y; filter '$.Y.attachments[*].instanceId contains $.X.ec2InstanceId'; show Y;```,"AWS ECS Cluster instance volume encryption for data at rest is disabled This policy identifies the ECS Cluster instance volumes for which encryption for data at rest is disabled. 
Encrypting data at rest reduces unintentional exposure of data and prevents unauthorized users from accessing sensitive data on your AWS ECS clusters. It is recommended to configure encryption for your ECS cluster instance volumes using an encryption key. NOTE: ECS can be launched using ECS Fargate launch type or EC2 Instance. ECS Fargate launch type pulls images from the Elastic Container Registry, which are transmitted over HTTPS and are automatically encrypted at rest using S3 server-side encryption. So this policy is only applicable to ECS launched using EC2 Instances. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption for your ECS Cluster instance volumes, follow the below URL:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html\n\nNOTE: The existing EBS volumes or snapshots cannot be encrypted, but when you copy unencrypted snapshots, or restore unencrypted volumes, the resulting snapshots or volumes are encrypted.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_executor_stats or settings.databaseFlags[?any(name contains log_executor_stats and value contains on)] exists)""```","GCP PostgreSQL instance database flag log_executor_stats is not set to off This policy identifies PostgreSQL database instances in which database flag log_executor_stats is not set to off. The log_executor_stats flag enables a crude profiling method for logging PostgreSQL executor performance statistics. Even though it can be useful for troubleshooting, it may increase the number of logs significantly and have performance overhead. It is recommended to set log_executor_stats to off. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in the 'Flags' section, choose the flag 'log_executor_stats' from the drop-down menu, and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_executor_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'." ```config from cloud.resource where api.name = 'aws-ec2-describe-images' AND json.rule = image.blockDeviceMappings[*].deviceName exists```,"haridemo This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are [None]. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = config.remoteDebuggingEnabled is true```,"Azure App Services Remote debugging is enabled This policy identifies Azure App Services which have Remote debugging enabled. Enabling the Remote debugging feature opens up inbound ports on App Services. It is recommended to disable Azure App Services Remote debugging. This is applicable to azure cloud and is considered a medium severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'App Services' from the left pane\n3. Select the reported App Services\n4. Go to 'Configurations' under 'Settings'\n5. Click on 'General settings'\n6. Select 'Off' for 'Remote debugging' under 'Debugging section\n7. Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(135,135) or destinationPortRanges[*] contains _Port.inRange(135,135) ))] exists```","Azure Network Security Group allows all traffic on Windows RPC (TCP Port 135) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Windows RPC (TCP Port 135). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict Windows RPC solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where api.name = 'aws-dms-replication-task' AND json.rule = ReplicationTaskSettings.Logging.EnableLogging is false or ReplicationTaskSettings.Logging.LogComponents[?any( Id is member of (""SOURCE_CAPTURE"",""SOURCE_UNLOAD"") and Severity is not member of (""LOGGER_SEVERITY_DEFAULT"",""LOGGER_SEVERITY_DEBUG"",""LOGGER_SEVERITY_DETAILED_DEBUG"") )] exists```","AWS DMS replication task for the source database have logging not set to the minimum severity level This policy identifies AWS DMS replication task where logging is either not enabled or set below the minimum severity level, such as LOGGER_SEVERITY_DEFAULT, for SOURCE_CAPTURE and SOURCE_UNLOAD. Logging is indispensable in DMS replication for various purposes, including monitoring, troubleshooting, auditing, performance analysis, error detection, recovery, and historical reporting. SOURCE_CAPTURE captures ongoing replication or CDC data from the source database, while SOURCE_UNLOAD unloads data during full load. Logging these tasks is crucial for ensuring data integrity, compliance, and accountability during migration. 
It is recommended to enable logging for AWS DMS replication tasks and set a minimal logging level of DEFAULT for SOURCE_CAPTURE and SOURCE_UNLOAD to ensure that essential messages are logged, facilitating effective monitoring, troubleshooting, and compliance efforts. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging for the Source capture and Source Unload log components of DMS replication tasks during migration:\n\n1. Log in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to 'Migration & Transfer' from the 'Services' dropdown and select 'Database Migration Service'\n4. In the navigation panel, under 'Migrate data', click on 'Database migration tasks'\n5. Select the reported replication task and choose 'Modify' from the 'Actions' dropdown on the right\n6. Under the 'Task settings' section, enable 'Turn on CloudWatch logs' under 'Task logs'\n7. Set the log component severity for both 'Source capture' and 'Source Unload' components to 'Default' or greater according to your business requirements\n8. Click 'Save' to save the changes." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Security/securitySolutions/write"" as X; count(X) less than 1```","Azure Activity log alert for Create or update security solution does not exist This policy identifies the Azure accounts in which activity log alert for Create or update security solution does not exist. Creating an activity log alert for Create or update security solution gives insight into changes to the active security solutions and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Security Solutions (Microsoft.Security/securitySolutions)'; other fields can be set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where api.name = 'aws-code-build-project' AND json.rule = environment.environmentVariables[*].name exists and environment.environmentVariables[?any( (name contains ""AWS_ACCESS_KEY_ID"" or name contains ""AWS_SECRET_ACCESS_KEY"" or name contains ""PASSWORD"" ) and type equals ""PLAINTEXT"")] exists```","AWS CodeBuild project environment variables contain plaintext AWS credentials This policy identifies AWS CodeBuild projects that contain the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and PASSWORD in plaintext. AWS CodeBuild environment variables configure build settings, pass contextual information, and manage sensitive data during the build process. 
Authentication credentials like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should never be stored in clear text, as this could lead to unintended data exposure and unauthorized access. It is recommended that AWS CodeBuild environment variables be securely managed using AWS Secrets Manager or AWS Systems Manager Parameter Store to store sensitive data and remove plaintext credentials. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remove environment variables from an AWS CodeBuild project,\n\n1. Log in to the AWS Management Console.\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Navigate to 'Developer Tools' from the 'Services' dropdown and select 'CodeBuild'.\n4. In the navigation pane, choose 'Build projects'.\n5. Select the reported Build project and choose Edit, then click 'Environment' and expand 'Additional configuration'.\n6. Choose 'Remove' next to the environment variables that contain plaintext credentials.\n7. When you have finished changing your CodeBuild environment configuration, click ‘Update environment’.\n\nYou can store environment variables with sensitive values in the AWS Systems Manager Parameter Store or AWS Secrets Manager and then retrieve them from your build spec according to your business requirements.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```,"Informational - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save." ```config from cloud.resource where api.name='aws-cloudtrail-describe-trails' AND cloud.type = 'aws' AND json.rule = 'kmsKeyId does not exist'```,"AWS CloudTrail logs are not encrypted using Customer Master Keys (CMKs) Checks to ensure that CloudTrail logs are encrypted. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. It is a compliance and security best practice to encrypt the CloudTrail data since it may contain sensitive information. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to AWS Console and navigate to the 'CloudTrail' service.\n2. For each trail, under Configuration > Storage Location, select 'Yes' for the 'Encrypt log files' setting\n3. Choose an existing KMS key or create a new one to encrypt the logs with.." 
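The CloudTrail encryption remediation above can also be scripted. Below is a minimal sketch using boto3's CloudTrail `update_trail` call; the trail name and KMS key ARN are placeholders, and it assumes the key policy already allows CloudTrail to use the key.

```python
# Minimal sketch: attach a customer managed KMS key to an existing CloudTrail trail.
# "my-trail" and the key ARN are placeholders; replace them with your own values.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
response = cloudtrail.update_trail(
    Name="my-trail",                                            # trail reported by the policy
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # CMK used to encrypt log files
)
print(response.get("KmsKeyId"))
```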
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = binaryAuthorization.evaluationMode does not exist or binaryAuthorization.evaluationMode equal ignore case EVALUATION_MODE_UNSPECIFIED or binaryAuthorization.evaluationMode equal ignore case DISABLED```,"GCP Kubernetes Engine Clusters have binary authorization disabled This policy identifies Google Kubernetes Engine (GKE) clusters that have disabled binary authorization. Binary authorization is a security control that ensures only trusted container images are deployed on GKE clusters. As a best practice, verify images prior to deployment to reduce the risk of running unintended or malicious code in your environment. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Binary authorization for a GKE cluster, please refer to the URL given below:\nhttps://cloud.google.com/binary-authorization/docs/enable-cluster#console." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-virtual-machine-scale-set' AND json.rule = properties.virtualMachineProfile.storageProfile.osDisk.vhdContainers exists```,"Azure Virtual machine scale sets are not utilising Managed Disks This policy identifies Azure Virtual machine scale sets which are not utilising Managed Disks. Using Azure Managed disk over traditional BLOB storage based VHD's has more advantage features like Managed disks are by default encrypted, reduces cost over storage accounts and more resilient as Microsoft will manage the disk storage and move around if underlying hardware goes faulty. It is recommended to move BLOB based VHD's to Managed Disks. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Currently migrating Virtual machine scale sets VHD disks to Azure Managed Disks is not available.\nIt is recommended that all new future scale sets be deployed with managed disks.\n\nFollow steps given in the URL to create new Virtual machine Scale sets,\n\nhttps://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-event-hub-namespace' AND json.rule = authorizationRules[*].name exists and authorizationRules[?any(name does not equal RootManageSharedAccessKey)] exists```,"Azure Event Hub Namespace having authorization rules except RootManageSharedAccessKey This policy identifies Azure Event Hub Namespaces which have authorization rules except RootManageSharedAccessKey. Having Azure Event Hub namespace authorization rules other than 'RootManageSharedAccessKey' could provide access to all queues and topics under the namespace which pose a risk if these additional rules are not properly managed or secured. As best practice, it is recommended to remove Event Hub namespace authorization rules other than RootManageSharedAccessKey and create access policies at the entity level, which provide access to only that specific entity for queues and topics. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. 
Navigate to 'Event Hubs' dashboard\n3. Select the reported Event Hubs Namespace\n4. Select 'Shared access policies' under 'Settings' section\n5. Delete all other Shared access policy rules except 'RootManageSharedAccessKey'.." "```config from cloud.resource where api.name = 'azure-sql-db-list' AND json.rule = sqlDatabase.properties.status equals Online and (securityAlertPolicy.properties.state equals Disabled or securityAlertPolicy does not exist or securityAlertPolicy.[*] isEmpty) as X; config from cloud.resource where api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equals Ready and (serverSecurityAlertPolicy.properties.state equals Disabled or serverSecurityAlertPolicy does not exist or serverSecurityAlertPolicy isEmpty) as Y; filter ""$.X.blobAuditPolicy.id contains $.Y.sqlServer.name""; show X;```","Azure SQL databases Defender setting is set to Off This policy identifies Azure SQL databases which have Defender setting set to Off. Azure Defender for SQL provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, SQL injection attacks, as well as anomalous database access patterns. Advanced threat protection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If Azure Defender is enabled at server level it will also be applied to all the database, regardless of the database Azure Defender settings. It is recommended that you enable only server-level Azure Defender settings.\nTo enable auditing at server level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database and SQL server\n3. Select 'SQL servers', Click on the SQL server instance you wanted to modify\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on 'Enable Microsoft Defender for SQL'\n\nIt is recommended to avoid enabling Azure Defender in both server and database.\nIf you want to enable different storage account, email addresses for scan and alert notifications or 'Advanced Threat Protection types' for a specific database that differ from the rest of the databases on the server. Then to enable auditing at database level by:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database\n3. Select 'SQL databases', Click on the SQL database instance you wanted to modify\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on 'Enable Microsoft Defender for SQL'." ```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-iam-list-server-certificates' as Y; filter '($.X.description.listenerDescriptions[*].listener.sslcertificateId equals $.Y.arn and ((_DateTime.ageInDays($.Y.expiration) > -90 and (_DateTime.ageInDays($.Y.expiration) < 0 or _DateTime.ageInDays($.Y.expiration) == 0)) or (_DateTime.ageInDays($.Y.expiration) > 0)))'; show X;```,"AWS Elastic Load Balancer (ELB) with IAM certificate expiring in 90 days This policy identifies Elastic Load Balancers (ELB) which are using IAM certificates expiring in 90 days or using expired certificates. 
Removing expired IAM certificates eliminates the risk and prevents the damage of credibility of the application/website behind the ELB. As a best practice, it is recommended to reimport expiring certificates while preserving the ELB associations of the original certificate. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Removing invalid certificates via AWS Management Console is not currently supported. To delete/upload SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\nRemediation CLI:\n1. Run describe-load-balancers command to make sure that the expiring server certificate is not currently used by any active load balancer.\naws elb describe-load-balancers --region --load-balancer-names --query 'LoadBalancerDescriptions[*].ListenerDescriptions[*].Listener.SSLCertificateId'\nThis command output will return the Amazon Resource Name (ARN) for the SSL certificate currently used by the selected ELB:\n[\n [\n \""arn:aws:iam::1234567890:server-certificate/MyCertificate\""\n ]\n]\n2. Create new AWS IAM certificate with your desired parameters value\n3. To upload new IAM Certificate:\naws iam upload-server-certificate --server-certificate-name --certificate-body file://Certificate.pem --certificate-chain file://CertificateChain.pem --private-key file://PrivateKey.pem\n4. To replaces the existing SSL certificate for the specified HTTPS load balancer:\naws elb set-load-balancer-listener-ssl-certificate --load-balancer-name --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::1234567890:server-certificate/\n5. Now that is safe to remove the expiring SSL/TLS certificate from AWS IAM, To delete it run:\naws iam delete-server-certificate --server-certificate-name ." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-policy' AND json.rule = ""(policy.policyType does not contain System) and (defaultPolicyVersion.policyDocument.Statement[?(@.Resource == '*' && @.Effect== 'Allow')].Action equals *)""```","Alibaba Cloud RAM policy allows full administrative privileges This policy identifies RAM policies with full administrative privileges. RAM policies are the means by which privileges are granted to users, groups or roles. It is recommended to grant the least privilege access like granting only the permissions required to perform a task, instead of allowing full administrative privileges. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Policies'\n4. Click on the reported RAM policy\n5. Under the 'References' tab, 'Revoke Permission' for all users/roles/groups attached to the policy.\n6. Delete the reported policy\n\nDetermine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges.." 
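To make the RAM rule above concrete, here is an illustrative Python check (not part of the dataset) that mirrors its logic: a custom policy is flagged when any statement allows every action on every resource. The sample document is hypothetical.

```python
# Illustrative only: flag a RAM policy document that grants full administrative privileges,
# i.e. a statement with Effect "Allow", Action "*" and Resource "*".
def is_full_admin(policy_document: dict) -> bool:
    for stmt in policy_document.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            return True
    return False

sample = {"Version": "1", "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}
print(is_full_admin(sample))  # True -> replace with least-privilege grants
```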
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = 'attributes.crossZoneLoadBalancing.enabled is false'```,"AWS Elastic Load Balancer (Classic) with cross-zone load balancing disabled This policy identifies Classic Elastic Load Balancers which have cross-zone load balancing disabled. When Cross-zone load balancing enabled, classic load balancer distributes requests evenly across the registered instances in all enabled Availability Zones. Cross-zone load balancing reduces the need to maintain equivalent numbers of instances in each enabled Availability Zone, and improves your application's ability to handle the loss of one or more instances. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. On the Description tab, choose 'Change cross-zone load balancing setting'\n7. On the 'Configure Cross-Zone Load Balancing' popup dialog, select 'Enable'\n8. Click on 'Save'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = ""nodePools[*].config.metadata does not exist or nodePools[*].config.metadata does not contain disable-legacy-endpoints or nodePools[*].config.metadata.disable-legacy-endpoints does not contain true""```","GCP Kubernetes Engine Clusters have legacy compute engine metadata endpoints enabled This policy identifies Google Kubernetes Engine (GKE) clusters that have legacy compute engine metadata endpoints enabled. Because GKE uses instance metadata to configure node VMs, some of this metadata is potentially sensitive and should be protected from workloads running on the cluster. Legacy metadata APIs expose the Compute Engine's instance metadata of server endpoints. As a best practice, disable legacy API and use v1 APIs to restrict a potential attacker from retrieving instance metadata. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You can currently disable legacy metadata APIs only when creating a new cluster, or when adding a new node pool to an existing cluster. To fix this alert, create a new GKE cluster with legacy metadata APIs disabled, migrate all required data from the reported cluster to the newly created cluster before you delete the reported GKE cluster.\n\nTo create new Kubernetes engine cluster with private node feature enabled, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on CREATE CLUSTER button\n5. Under the Node pools section, Click on the 'More node pool options' button\n6. On 'Edit node pool' window, For 'GCE instance metadata' click on 'Add metadata'\n7. Add 'disable-legacy-endpoints' as a metadata key and 'true' as a metadata value\n8. Click on 'Save'\n9. Click on 'Create'\n\nTo delete reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on reported Kubernetes cluster\n5. 
Click on the DELETE button\n6. On 'Delete a cluster' popup dialog, Click on DELETE to confirm the deletion of the cluster.." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-subnet-list' AND json.rule = networkSecurityGroupId does not exist and name does not equal ignore case ""GatewaySubnet"" and name does not equal ignore case ""RouteServerSubnet"" and name does not equal ignore case ""AzureFirewallSubnet"" and name does not equal ignore case ""AzureFirewallManagementSubnet"" and ['properties.delegations'][*].['properties.serviceName'] does not equal ""Microsoft.Netapp/volumes""```","Azure Virtual Network subnet is not configured with a Network Security Group This policy identifies Azure Virtual Network (VNet) subnets that are not associated with a Network Security Group (NSG). While binding an NSG to a network interface of a Virtual Machine (VM) enables fine-grained control of the VM, associating an NSG to a subnet enables better control over network traffic to all resources within a subnet. It is recommended to associate an NSG with a subnet so that you can protect your VMs on a subnet-level. For more information, https://learn.microsoft.com/en-gb/archive/blogs/igorpag/azure-network-security-groups-nsg-best-practices-and-lessons-learned https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview#limitations Note: This policy will not report for subnets used by Azure Firewall Subnet, Azure Firewall Management Subnet, Gateway Subnet, NetApp File Share, Route Server Subnet, Private endpoints and Private links as Azure recommends not to configure Network Security Group (NSG) for these services. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal.\n2. Select 'Virtual Networks', and select the virtual network you need to modify.\n3. Select 'Subnets', and select the subnet you need to modify.\n4. Select the Network security group (NSG) you want to associate with the subnet.\n5. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-insights-component' AND json.rule = properties.provisioningState equals Succeeded and (properties.publicNetworkAccessForQuery equals Enabled or properties.publicNetworkAccessForIngestion equals Enabled)```,"Azure Application Insights configured with overly permissive network access This policy identifies Application Insights configured with overly permissive network access. Virtual network access configuration in Application Insights allows you to restrict data ingestion and queries coming from public networks. It is recommended to configure the Application Insights with virtual networks access configuration set to restrict, so that the Application Insight is accessible only to restricted Azure Monitor Private Link Scopes. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Application Insights dashboard \n3. Click on the reported Application Insights\n4. Under the 'Configure' menu, click on 'Network Isolation'\n5. 
Create an Azure Monitor Private Link Scope, if one is not already created, by referring to:\nhttps://docs.microsoft.com/en-us/azure/azure-monitor/logs/private-link-configure#create-an-azure-monitor-private-link-scope\n6. After creating, Under 'Virtual networks access configuration', \nSet 'Accept data ingestion from public networks not connected through a Private Link Scope' to 'No' and \nSet 'Accept queries from public networks not connected through a Private Link Scope' to 'No'\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals SqlServers and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud is set to Off for Azure SQL Databases This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has the Defender setting for Azure SQL Databases set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Azure SQL Databases. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Azure SQL Databases', select 'On' under Plan.\n8. Select 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-cloudtrail-get-trail-status' as Y; filter '(($.X.name == $.Y.trail) and ($.X.cloudWatchLogsLogGroupArn is not empty and $.X.cloudWatchLogsLogGroupArn exists) and $.X.isMultiRegionTrail is false and ($.Y.status.latestCloudWatchLogsDeliveryTime exists))'; show X;```,"AWS CloudTrail logs should integrate with CloudWatch for all regions This policy identifies CloudTrail trails that are not integrated with CloudWatch Logs for all regions. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long-term analysis, real-time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into AWS and navigate to the CloudTrail service.\n2. Click on 'Trails' in the left navigation menu and choose the reported trail.\n3. Go to the CloudWatch Logs section and click Configure.\n4. Define a new or select an existing log group and click Continue to complete the process.." 
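Steps 3-4 of the CloudTrail/CloudWatch remediation above can also be performed programmatically. Below is a minimal sketch using boto3's `update_trail`; the trail name, log group ARN, and role ARN are placeholders, and the IAM role must already allow CloudTrail to write to the log group.

```python
# Minimal sketch: point an existing CloudTrail trail at a CloudWatch Logs log group.
# All names and ARNs below are placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.update_trail(
    Name="my-trail",
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/logs:*",
    CloudWatchLogsRoleArn="arn:aws:iam::111122223333:role/CloudTrail_CloudWatchLogs_Role",
)
```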
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(1433,1433) or destinationPortRanges[*] contains _Port.inRange(1433,1433) ))] exists```","Azure Network Security Group allows all traffic on SQL Server (TCP Port 1433) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on SQL Server (TCP Port 1433). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SQL Server solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","Medium of AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to Get/Read/List bucket operations on your S3 bucket if they can guess the namespace. 
S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." "```config from cloud.resource where api.name = 'oci-database-autonomous-database' AND json.rule = lifecycleState contains AVAILABLE AND whitelistedIps is member of (""null"") AND privateEndpoint is member of (""null"")```","OCI Oracle Autonomous Database (ADB) access is not restricted to allowed sources or deployed within a Virtual Cloud Network This policy identifies Oracle Autonomous Databases (ADBs) that are not restricted to specific sources or not deployed within a Virtual Cloud Network (VCN). Autonomous Database automates critical database management tasks, and restricting its access to corporate IP addresses or VCNs is crucial for enhancing security. Deploying Autonomous Databases within a VCN and configuring access control rules ensure that only authorized sources can connect, significantly reducing the risk of unauthorized access. This protection is vital for maintaining the integrity and security of the databases. As best practice, it is recommended to have new Autonomous Database instances deployed within a VCN, and existing instances should have access control rules set to restrict connectivity to approved sources. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To configure the OCI Oracle Autonomous Database (ADB) access, refer to the following documentation:\nhttps://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/access-control-rules-autonomous.html#GUID-F0B59281-E545-48B1-BA49-1FD51B65D123." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.powerState.code equal ignore case Running and properties.apiServerAccessProfile.enablePrivateCluster is false and (properties.apiServerAccessProfile.authorizedIPRanges does not exist or properties.apiServerAccessProfile.authorizedIPRanges is empty)```,"aweawoie This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-mq-broker' AND json.rule = 'brokerState equals RUNNING and publiclyAccessible is true'```,"AWS MQ is publicly accessible This policy identifies the AWS MQ brokers which are publicly accessible. It is advisable to use MQ brokers privately only from within your AWS Virtual Private Cloud (VPC). Ensure that the AWS MQ brokers provisioned in your AWS account are not publicly accessible from the Internet to avoid sensitive data exposure and minimize security risks. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: Note: The public accessibility setting of an MQ broker cannot be modified. If the broker needs to be made private, it must be recreated.\n\n1. Go to AWS console\n2. Navigate to service 'Amazon MQ' from the 'Services' Menu\n3. From the list of 'Brokers' select the reported MQ broker\n4. From 'Details' section, copy all the configuration information.\n5. Within 'Users' section, locate and copy the ActiveMQ Web Console access credentials.\n6. Click on 'Brokers' from left panel, click on 'Create broker' \n7. Provide a unique name in the 'Broker name' field\n8. In 'Advanced settings' section, select 'No' for 'Public accessibility'\n9. Set the new broker configuration parameters using the information copied at step no. 4\n10. Set the existing ActiveMQ Web Console access credentials copied at step no. 5\n11. Click on 'Create broker'\n12. Once the new broker is created, you can replace the broker endpoints within your applications\n\nTo delete the publicly accessible broker, \n1. Select the alerted broker from the list of 'Brokers' \n2. Click on 'Delete' button\n3. When a dialog box pops up, enter the broker name to confirm and click on 'delete' button." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy vwptv This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-secretsmanager-secret' AND json.rule = expireTime does not exist```,"GCP Secrets Manager secret has no expiration date This policy identifies GCP Secret Manager secrets that have no expiration date. GCP Secret Manager securely stores and controls access to API keys, passwords, certificates, and other sensitive data. Without an expiration date, secrets remain vulnerable indefinitely. Setting an expiration date limits the potential damage of a security breach, as compromised credentials will eventually become invalid. It is recommended to configure secrets with an expiration date to reduce the risk of long-lived secrets being compromised or abused. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Google Cloud Management Console. Navigate to the 'Secrets Manager' page\n2. Under 'Secrets', click on the reported secret\n3. Select 'EDIT SECRET' on the top navigation bar\n4. Under the 'Edit secret' page, under 'Expiration', select the 'Set expiration date' checkbox and set the date and time for expiration\n5. 
Click on 'UPDATE SECRET'.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_without_asset_type_finding_1 Description-bf90f2fb-d709-4040-a033-b74ef4a2f6d8 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix does not equal 96.116.133.104/32 or sourceAddressPrefix does not equal 96.116.134.8/32 or sourceAddressPrefix does not equal 96.118.251.38/32 or sourceAddressPrefix does not equal 96.118.251.70/32 or sourceAddressPrefix does not equal 2001:558:fc0c::f816:3eff:fe2b:7e9f/128 or sourceAddressPrefix does not equal 2001:558:fc0c::f816:3eff:fe2d:f8c0/128 or sourceAddressPrefix does not equal 2001:558:fc18:2:f816:3eff:fea9:fec9/128 or sourceAddressPrefix does not equal 2001:558:fc18:2:f816:3eff:fe86:aa73/128) and (destinationPortRange contains _Port.inRange(22,22) or destinationPortRanges[*] contains _Port.inRange(22,22) ))] exists```","comcast-policy This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = authorizationRules[*] size greater than 1 and authorizationRules[?any(name does not equal RootManageSharedAccessKey and properties.rights contains Manage)] exists```,"Azure Service bus namespace configured with overly permissive authorization rules This policy identifies Azure Service bus namespaces configured with overly permissive authorization rules. Service Bus clients should not use a namespace level access policy that provides access to all queues and topics in a namespace. It is recommended to follow the least privileged security model, should create access policies at the entity level for queues and topics to provide access to only the specific entity. All authorization rules except RootManageSharedAccessKey should be removed from the Service bus namespace. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to Service Bus\n3. Select the reported Service bus namespace\n4. Click on 'Shared access policies' under 'Settings'\n5. Select and remove all authorization rules except RootManageSharedAccessKey.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = 'instancesAttached is false'```,"AWS Elastic Load Balancer (ELB) not in use This policy identifies unused Elastic Load Balancers (ELBs) in your AWS account. Any Elastic Load Balancer in your AWS account is adding charges to your monthly bill, although it is not used by any resources. As a best practice, it is recommended to remove ELBs that are not associated with any instances, it will also help you avoid unexpected charges on your bill. This is applicable to aws cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To check and remove ELB that has no registered instances, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. In the navigation pane, under 'LOAD BALANCING', click on 'Load Balancers'\n5. Select reported Elastic Load Balancer\n6. Select the 'Description' tab from the bottom panel\n7. In 'Basic Configuration' section, see If the selected load balancer 'Status' is '0 of 0 instances in service'.\nIt means that there are no registered instances and the ELB can be safely removed.\n8. Click the 'Actions' dropdown from the ELB dashboard top menu\n9. Select Delete\n10. In the 'Delete Load Balancer' pop-up dialog, confirm the action to delete on clicking 'Yes, Delete' button." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = backendType equals SECOND_GEN and ipAddresses[*].type contains PRIMARY```,"GCP SQL database is assigned with public IP This policy identifies GCP SQL databases which are assigned with public IP. To lower the organisation's attack surface, Cloud SQL databases should not have public IPs. Private IPs provide improved network security and lower latency for your application. It is recommended to configure Second Generation Sql instance to use private IPs instead of public IPs. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the reported SQL instance, \n4. On overview page, click on 'EDIT' from top menu\n5. Under 'Configuration options' Click on 'Connectivity'\n6. From dropdown deselect 'Public IP' checkbox \n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals OpenSourceRelationalDatabases and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud set to Off for Open-Source Relational Databases This policy identifies Azure Microsoft Defender for Cloud which has defender setting for Open-Source Relational Databases set to Off. Enabling Azure Defender for cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behaviour analytics. Microsoft Defender for Cloud detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It is highly recommended to enable Azure Defender for Open-Source Relational Databases. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click 'Select types >' in the row for 'Databases'\n7. Set the radio button next to 'Open-source relational databases' to 'On'\n8. Click on 'Save'." 
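As an illustration of what the Defender plan rule above evaluates, this hypothetical Python snippet walks the pricings section of the Defender for Cloud settings and reports plans that are not on the Standard tier; the sample data is made up.

```python
# Illustrative only: report Defender for Cloud plans whose pricing tier is not 'Standard',
# mirroring the json.rule used by the policy above. Sample data is hypothetical.
pricings = [
    {"name": "SqlServers", "properties": {"pricingTier": "Standard"}},
    {"name": "OpenSourceRelationalDatabases", "properties": {"pricingTier": "Free"}},
]

for plan in pricings:
    tier = plan["properties"]["pricingTier"]
    if tier != "Standard":
        print(f"Defender plan '{plan['name']}' is not enabled (tier: {tier})")
```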
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' as X; config from cloud.resource where api.name = 'azure-active-directory-user' as Y; filter '((_DateTime.ageInDays($.X.properties.updatedOn) < 1) and (($.X.properties.principalId contains $.Y.id)))'; show X;```,"llatorre - RoleAssigment v2 This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = ""aws-ec2-describe-instances"" AND json.rule = architecture contains ""foo""```","API automation policy pkifp This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = 'keyMetadata.keyState contains PendingDeletion'```,"AWS KMS Key scheduled for deletion This policy identifies KMS Keys which are scheduled for deletion. Deleting keys in AWS KMS is destructive and potentially dangerous. It deletes the key material and all metadata associated with it and is irreversible. After a key is deleted, you can no longer decrypt the data that was encrypted under that key, which means that data becomes unrecoverable. You should delete a key only when you are sure that you don't need to use it anymore. If you are not sure, it is recommended to disable the key instead of deleting it. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You should delete a KMS key only when you are sure that you don't need to use it anymore. To fix this alert, if you are sure you no longer need the reported KMS key, dismiss the alert. If you are not sure, consider disabling the KMS key instead of deleting it.\n\nTo enable KMS CMKs which are scheduled for deletion, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS)\n4. Click on 'Customer managed keys' (Left Panel)\n5. Select the reported KMS customer managed key\n6. Click on 'Key actions' dropdown\n7. Click on 'Cancel key deletion'\n8. Click on 'Enable'." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = ['attributes'].['load_balancing.cross_zone.enabled'] is false```,"AWS Elastic Load Balancer v2 (ELBv2) with cross-zone load balancing disabled This policy identifies load balancers that do not have cross-zone load balancing enabled. Cross-zone load balancing evenly distributes incoming traffic across healthy targets in all availability zones. This can help to ensure your application can manage additional traffic and limit the risk of any single availability zone getting overwhelmed and perhaps affecting load balancer performance. It is recommended to enable cross-zone load balancing. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable cross-zone load balancing, please follow the below steps:\n\n1. Log in to the AWS console.\n2. 
Go to the EC2 Dashboard and select 'Load Balancers'\n3. Click on the reported load balancer. Under the 'Actions' dropdown, select 'Edit load balancer attributes'.\n4. For Gateway load balancers, under 'Availability Zone routing Configuration', enable 'Cross-zone load balancing'.\n5. For Network load balancers, under 'Availability Zone routing Configuration', select the 'Enable cross-zone load balancing' option.\n6. Click on 'Save changes'.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-watcher-list' AND json.rule = provisioningState equals Succeeded as X; count(X) less than 1```,"Azure Network Watcher not enabled This policy identifies Azure subscription regions where Network Watcher is not enabled. Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Without Network Watcher enabled, you lose critical capabilities to monitor and diagnose network issues, making it difficult to identify and resolve performance bottlenecks, network security rules, and connectivity issues. As a best practice, it is recommended to enable Azure Network Watcher for your region to leverage its monitoring and diagnostic capabilities. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Enabling Network Watcher will incur costs. There are additional costs per transaction to run and store network data. For high-volume networks these charges will add up quickly.\n\nTo enable Network Watcher, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-create?tabs=portal#enable-network-watcher-for-your-region." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-load-balancer' AND json.rule = 'listenerPortsAndProtocal[*].listenerProtocal equals https and ([*].tlscipherPolicy equals tls_cipher_policy_1_0 or [*].tlscipherPolicy equals tls_cipher_policy_1_1)'```,"Alibaba Cloud SLB listener is configured with SSL policy having TLS version 1.1 or lower This policy identifies Server Load Balancer (SLB) listeners which are configured with SSL policy having TLS version 1.1 or lower. As a best security practice, use TLS 1.2 as the minimum TLS version in your load balancers' SSL security policies. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Click on the reported load balancer\n4. In the 'Listeners' tab, Choose HTTPS Listener, Click on 'Configure'\n5. In the 'Configure Listener' page, Click on 'Next'\n6. In the 'SSL Certificates', Click on 'Modify' for 'Advanced' section\n7. For 'TLS Security Policy', Choose TLS 1.2 or later version policy as per your requirement.\n8. Click on 'Next'\n9. If no changes to Backend Servers and Health Check, Click on 'Next'\n10. In 'Submit' section, click on 'Submit'." 
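The SLB listener rule above can be read as the following illustrative Python check (sample data is hypothetical); the field names follow the query's JSON, including its spelling of listenerProtocal.

```python
# Illustrative only: flag SLB HTTPS listeners configured with a TLS cipher policy
# that still permits TLS 1.1 or lower.
WEAK_POLICIES = {"tls_cipher_policy_1_0", "tls_cipher_policy_1_1"}

listeners = [  # hypothetical data shaped like the API response the rule inspects
    {"listenerPort": 443, "listenerProtocal": "https", "tlscipherPolicy": "tls_cipher_policy_1_1"},
    {"listenerPort": 8443, "listenerProtocal": "https", "tlscipherPolicy": "tls_cipher_policy_1_2"},
]

for lsn in listeners:
    if lsn["listenerProtocal"] == "https" and lsn.get("tlscipherPolicy") in WEAK_POLICIES:
        print(f"Listener {lsn['listenerPort']}: weak TLS policy, move to tls_cipher_policy_1_2 or later")
```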
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[*].properties.email is empty or securityContacts[*].properties.alertsToAdmins equal ignore case Off) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```,"Azure Microsoft Defender for Cloud email notification for subscription owner is not set This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) in which email notification for subscription owners is not set. Enabling security alert emails to subscription owners ensures that they receive security alert emails from Microsoft. This ensures that they are aware of any potential security issues and can mitigate the risk in a timely fashion. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Click on 'Email notifications'\n6. In the drop down of the 'All users with the following roles' field select 'Owner'\n7. Select 'Save'." ```config from cloud.resource where api.name = 'azure-frontdoor' AND json.rule = properties.provisioningState equals Succeeded as X; config from cloud.resource where api.name = 'azure-frontdoor-waf-policy' as Y; filter '$.X.properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink.id does not exist or ($.X.properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink.id equal ignore case $.Y.id and $.Y.properties.policySettings.enabledState equals Disabled)'; show X;```,"Azure Front Door does not have the Azure Web application firewall (WAF) enabled This policy identifies Azure Front Doors which do not have the Azure Web application firewall (WAF) enabled. As a best practice, configure the Azure WAF service on the Front Doors to protect against application-layer attacks. To block malicious requests to your Front Doors, define the block criteria in the WAF rules. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Front Doors'\n3. Click on the reported Front Door\n4. Click on the 'Web application firewall' from the left panel\n5. Select the frontend to attach WAF policy and Click on 'Apply Policy'\n6. In 'Associate a Waf policy' dialog, select appropriate enabled WAF policy from the 'Policy' dropdown.\n7. Click on 'Add' \n8. Click on 'Save' to save your changes." 
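The two-part Front Door rule above joins the Front Door and WAF policy APIs; the hypothetical Python sketch below expresses the same compliance check: a frontend endpoint fails if it has no WAF policy link or if the linked policy is disabled. The data structures are made up for illustration.

```python
# Illustrative only: check whether each Front Door frontend endpoint is protected
# by an attached and enabled WAF policy.
frontdoor = {  # hypothetical, shaped after the azure-frontdoor API response
    "frontendEndpoints": [
        {"name": "fe1", "properties": {"webApplicationFirewallPolicyLink": {"id": "/waf/policy1"}}},
        {"name": "fe2", "properties": {}},  # no WAF policy attached
    ]
}
waf_policies = {"/waf/policy1": {"policySettings": {"enabledState": "Disabled"}}}

for endpoint in frontdoor["frontendEndpoints"]:
    link = endpoint["properties"].get("webApplicationFirewallPolicyLink", {}).get("id")
    if link is None:
        print(f"{endpoint['name']}: no WAF policy attached")
    elif waf_policies.get(link, {}).get("policySettings", {}).get("enabledState") == "Disabled":
        print(f"{endpoint['name']}: attached WAF policy is disabled")
```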
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equal ignore case running and kind contains workflowapp and ((properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled) or (properties.publicNetworkAccess does not exist and (properties.privateLinkIdentifiers does not exist or properties.privateLinkIdentifiers is empty))) and config.ipSecurityRestrictions[?any((action equals Allow and ipAddress equals Any) or (action equals Allow and ipAddress equals 0.0.0.0/0))] exists'```,"Azure Logic app configured with public network access This policy identifies Azure Logic apps that are configured with public network access. Exposing Logic Apps directly to the public internet increases the attack surface, making them more susceptible to unauthorized access, security threats, and potential breaches. By limiting Logic Apps to private network access, they are securely managed and less prone to external vulnerabilities. As a security best practice, it is recommended to configure private network access or restrict the public exposure only to the required entities instead of wide ranges. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under 'Setting' section, click on 'Networking'\n5. On the 'Networking' page, under 'Inbound traffic configuration' section, select the 'Public network access' setting.\n6. On the 'Access Restrictions' page, review the list of access restriction rules that are defined for your app and avoid providing access to all networks.\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any((Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith es:)] exists```,"AWS Elasticsearch IAM policy overly permissive to all traffic This policy identifies Elasticsearch IAM policies that are overly permissive to all traffic. Amazon Elasticsearch service makes it easy to deploy and manage Elasticsearch. Customers can create a domain where the service is accessible. The domain should be granted access restrictions so that only authorized users and applications have access to the service. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Goto the IAM Services\n3. Click on 'Policies' in the left-hand panel\n4. Search for the Policy for which the Alert is generated and click on it\n5. Under the Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'Elasticsearch Service', click to expand and perform following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes.." 
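The Elasticsearch IAM rule above can be hard to read in RQL; this hypothetical Python sketch expresses the same check on an IAM policy document: an Allow statement over es: actions whose IpAddress condition includes 0.0.0.0/0 or ::/0.

```python
# Illustrative only: detect Allow statements for Elasticsearch (es:) actions
# whose source-IP condition is open to the whole internet.
OPEN_CIDRS = {"0.0.0.0/0", "::/0"}

def overly_permissive(document: dict) -> bool:
    for stmt in document.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        source_ips = stmt.get("Condition", {}).get("IpAddress", {}).get("aws:SourceIp", [])
        source_ips = [source_ips] if isinstance(source_ips, str) else source_ips
        if (stmt.get("Effect") == "Allow"
                and any(a.startswith("es:") for a in actions)
                and any(ip in OPEN_CIDRS for ip in source_ips)):
            return True
    return False

sample = {"Statement": [{"Effect": "Allow", "Action": "es:*",
                         "Condition": {"IpAddress": {"aws:SourceIp": "0.0.0.0/0"}}}]}
print(overly_permissive(sample))  # True -> restrict aws:SourceIp to known CIDR ranges
```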
"```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""sysdig-monitor"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""resourceType"",""serviceInstance"",""sysdigTeam""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Cloud Monitoring Service This policy identifies IBM Cloud users with overly permissive IBM Cloud Monitoring Administrative role. When a user having policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and under the 'Access policies' section, click on three dots on the right corner of a row for the policy which is having Administrator permission on 'IBM Cloud Monitoring' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (logPublishingOptions does not exist or logPublishingOptions.SEARCH_SLOW_LOGS.enabled is false or logPublishingOptions.SEARCH_SLOW_LOGS.cloudWatchLogsLogGroupArn is empty)'```,"AWS Elasticsearch domain has Search slow logs set to disabled This policy identifies Elasticsearch domains for which Search slow logs is disabled in your AWS account. Enabling support for publishing Search slow logs to AWS CloudWatch Logs enables you to have full insight into the performance of search operations performed on your Elasticsearch clusters. This will help you in identifying performance issues caused by specific search queries so that you can optimize your queries to address the problem. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Elasticsearch Service Dashboard\n4. Choose reported Elasticsearch domain\n5. Select the 'Logs' tab\n6. In 'Set up Search slow logs' section,\n a. click on 'Setup'\n b. 
In 'Select CloudWatch Logs log group' setting, Create/Use existing CloudWatch Logs log group as per your requirement\n c. In 'Specify CloudWatch access policy', Create new/Select an existing policy as per your requirement\n d. Click on 'Enable'\n\nThe search slow logs setting 'Status' should change now to 'Enabled'.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-bedrock-agent' AND json.rule = agentStatus is not member of ( ""DELETING"",""FAILED"") and guardrailConfiguration.guardrailIdentifier does not exist```","AWS Bedrock agent is not associated with Bedrock guardrails This policy identifies AWS Bedrock agents that are not associated with Bedrock guardrails. Amazon Bedrock Guardrails provides governance and compliance controls for generative AI applications, ensuring safe and responsible model use. Associating Guardrails with the Bedrock agent is useful for implementing governance and compliance controls in generative AI applications. Not linking Guardrails to the Bedrock agent raises the risk of non-compliance and harmful AI application outputs. It is recommended that AWS Bedrock agents be associated with Bedrock guardrails to implement safeguards and prevent unwanted behavior from model responses or user messages. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To associate the AWS Bedrock agent with the Bedrock guardrail, perform the following actions:\n\n1. Log in to the AWS console and navigate to the Amazon Bedrock console available at https://console.aws.amazon.com/bedrock/.\n2. In the navigation panel, under 'Builder tools', select 'Agents'.\n3. In the Agents list, click on the agent that is reported.\n4. Click on the 'Edit in Agent Builder' button on the right corner.\n5. In the Agent builder window, under the 'Guardrail details' section click 'Edit' and select the name and version of the Amazon Bedrock guardrail created previously, or click on the link to create a new guardrail.\n6. Choose 'Save and exit' to attach the selected guardrail to your Amazon Bedrock agent.." ```config from cloud.resource where api.name = 'gcloud-compute-external-backend-service' AND json.rule = logConfig.enable does not exist or logConfig.enable is false```,"GCP Cloud Load Balancer HTTP(S) logging is not enabled This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-replication-instance' AND json.rule = replicationInstanceStatus is not member of ('creating','deleted','deleting') and publiclyAccessible is true```","AWS DMS replication instance is publicly accessible This policy identifies AWS DMS (Database Migration Service) replication instances with public accessibility enabled. A DMS replication instance is used to connect and read the source data and prepare it for consumption by the target data store. When AWS DMS replication instances are publicly accessible, it increases the risk of unauthorized access, data breaches, and potentially malicious activities. It is recommended to disable the public accessibility of DMS replication instances to decrease the attack surface. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Public accessibility can be disabled only at the time of creation, please follow the below steps to create a new DMS replication instance:\n\n1. Sign in to the AWS Management Console and navigate to the AWS DMS console.\n2. In the navigation pane, choose 'Replication instances' and then click the 'Create replication instance' button.\n3. Under the 'Connectivity and security' section, Leave the 'Publicly accessible' option unchecked to ensure that the replication instance does not have public IP addresses or DNS names.\n4. Configure other settings based on your requirements.\n5. Click the 'Create replication instance' button to create the replication instance.\n\nTo delete the reported AWS DMS replication instance, Please follow the below steps:\n\n1. Sign in to the AWS Management Console and navigate to the AWS DMS console.\n2. In the navigation pane, choose 'Replication instances' to see a list of your existing replication instances.\n3. Select the replication instance that you want to delete from the list.\n4. After selecting the replication instance, choose 'Actions' and then 'Delete' from the menu.\n5. A confirmation dialog box will appear. Review the details and confirm that you want to delete the replication instance by selecting the 'Delete' button.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals ""ACTIVE"" AND shieldedInstanceConfig.enableVtpm is false```","GCP Vertex AI Workbench Instance has vTPM disabled This policy identifies GCP Vertex AI Workbench Instances that have the Virtual Trusted Platform Module (vTPM) feature disabled. The Virtual Trusted Platform Module (vTPM) validates the guest VM's pre-boot and boot integrity and provides key generation and protection. The root keys of the vTPM, as well as the keys it generates, cannot leave the vTPM, thereby offering enhanced protection against compromised operating systems or highly privileged project administrators. It is recommended to enable the virtual TPM device on GCP Vertex AI Workbench Instances to support measured boot and other OS security features that require a TPM. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-apimanagement-apigateway' AND json.rule = lifecycleState equal ignore case ACTIVE and (networkSecurityGroupIds[*] is empty or networkSecurityGroupIds[*] does not exist)```,"OCI API Gateway is not configured with Network Security Groups This policy identifies API Gateways that are not configured with Network Security Groups. 
Network security groups give fine-grained control of resources and help in restricting network access to your Private API Gateway with specific ports or with specific IP address range. As best practice, it is recommended to restrict access to the API Gateway by configuring network security groups. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Developer Services -> API Management -> Gateways\n3. Click on the reported Gateway\n4. Click on the 'Edit' button\nNOTE: Before you update API gateway with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports or with specific IP address range based on requirement.\n5. On the 'Edit gateway' dialog, select the 'Enable network security groups' and select the restrictive Network Security Group \n6. Click on the 'Save Changes' button.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-armor-security-policy' AND json.rule = type does not equal ignore case CLOUD_ARMOR_EDGE and (rules[*].match.expr.expression does not contain cve-canary or rules[?any(match.expr.expression contains cve-canary and action equals allow)] exists)```,"GCP Cloud Armor policy not configured with cve-canary rule This policy identifies GCP Cloud Armor rules where cve-canary is not enabled. Preconfigured WAF rule called ""cve-canary"" can help detect and block exploit attempts of CVE-2021-44228 and CVE-2021-45046 to address the Apache Log4j vulnerability. It is recommended to create a Cloud Armor security policy with rule blocking Apache Log4j exploit attempts. Reference : https://cloud.google.com/blog/products/identity-security/cloud-armor-waf-rule-to-help-address-apache-log4j-vulnerability This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To Update Existing rules follow the below steps,\n\n1. Login to GCP console\n2. Navigate to 'Cloud Armor' from service 'Network Security'(Left Panel)\n3. Click on the alerted policy\n4. Click on the pencil icon on the rule to edit the rule\n5. Under 'Mode', select 'Advanced mode', add expression ""evaluatePreconfiguredExpr('cve-canary')""\n6. Under 'Action', select 'Deny' to block the exploit\n7. Click on 'Update'\n\nTo Add rule follow the below steps,\n\n1. Login to GCP console\n2. Navigate to 'Cloud Armor' from service 'Network Security'(Left Panel)\n3. Click on the alerted policy\n4. Click on 'Add rule'\n5. Under 'Mode', select 'Advanced mode', add expression ""evaluatePreconfiguredExpr('cve-canary')""\n6. Under 'Action', select 'Deny' to block the exploit\n7. Update other details and click on 'Add'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' as X; count(X) greater than 0```,"Copy of Copy of GCP API key is created for a project This policy identifies GCP projects where API keys are created. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. Note: There are limited cases where API keys are more appropriate. 
For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API. If a business requires API keys to be used, then the API keys should be secured using appropriate IAM policies. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Use of API keys is generally considered as less secure authentication mechanism and should be avoided. A secure authentication mechanism should be used. Follow the below mentioned URL to evaluate an alternate, suitable authentication mechanism:\nhttps://cloud.google.com/endpoints/docs/openapi/authentication-method\n\nTo delete an API Key:\n1. Log in to google cloud console\n2. Navigate to section 'Credentials', under 'APIs & Services'.\n3. To delete API Key, go to 'API Keys' section, click the Actions button (three dots) in front of key name.\n4. Click on ‘Delete API key’ button.\n5. In the 'Delete credential' dialog, click 'DELETE' button.\n\nNote: Deleting API keys might break dependent applications. It is recommended to thoroughly review and evaluate the impact of API key before deletion.." "```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ""((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))))"" as X; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Y; filter'$.X.bucketName equals $.Y.s3BucketName'; show X;```","AWS CloudTrail bucket is publicly accessible This policy identifies publicly accessible S3 buckets that store CloudTrail data. These buckets contains sensitive audit data and only authorized users and applications should have access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. If Access Control List' is set to 'Public' follow below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save\n6. If 'Bucket Policy' is set to public follow below steps\na. Under 'Bucket Policy', modify the policy to remove public access\nb. Click on Save\nc. 
If 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." "```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus contains available and dbclusterIdentifier does not exist and (engine contains postgres or engine contains mysql) and engineVersion is not member of (8.0.11, 8.0.13, 8.0.15, 9.6.1, 9.6.2, 9.6.3, 9.6.5, 9.6.6, 9.6.8, 9.6.9, 9.6.10, 10.1, 10.3, 10.4, 10.5) and iamdatabaseAuthenticationEnabled is false```","AWS RDS instance not configured with IAM authentication This policy identifies RDS instances that are not configured with IAM authentication. If you enable IAM authentication you don't need to store user credentials in the database, because authentication is managed externally using IAM. With IAM database authentication, network traffic to and from database instances is encrypted using Secure Sockets Layer (SSL), access to your database resources can be managed centrally, and profile credentials can be used instead of a password, for greater security. For details: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html NOTE: IAM database authentication works only with MySQL and PostgreSQL. IAM database authentication is not available on all database engines; please refer https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Availability for available versions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable IAM authentication follow the below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Enabling.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cognito-identity-pool' AND json.rule = allowUnauthenticatedIdentities is true```,"AWS Cognito identity pool allows unauthenticated guest access This policy identifies AWS Cognito identity pools that allow unauthenticated guest access. AWS Cognito identity pools with unauthenticated guest access allow unauthenticated users to assume a role in your AWS account. These unauthenticated users will be granted the permissions of the assumed role, which may have more privileges than are intended. This could lead to unauthorized access or data leakage. It is recommended to disable unauthenticated guest access for the Cognito identity pools. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To deactivate guest access in an identity pool,\n1. Log in to AWS console\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to the Amazon Cognito dashboard\n4. Under 'Identity pools' section, select the reported identity pool\n5. In 'User access' tab, under 'Guest access' section\n6. Click on 'Deactivate' button to deactivate the guest access configured.\n\nNOTE: Before you deactivate unauthenticated guest access, you must have at least one authenticated access method configured in your identity pool.." 
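The Cognito remediation above can also be scripted. A minimal boto3 sketch of the same change follows; the identity pool ID and region are placeholders, and the caveat about keeping at least one authenticated access method still applies.

```python
# Sketch: turn off unauthenticated (guest) access on a Cognito identity pool.
# The identity pool ID below is a placeholder; confirm an authenticated identity
# provider is configured before applying this change.
import boto3

def disable_guest_access(identity_pool_id: str, region: str = "us-east-1") -> None:
    client = boto3.client("cognito-identity", region_name=region)

    pool = client.describe_identity_pool(IdentityPoolId=identity_pool_id)
    pool.pop("ResponseMetadata", None)  # strip response bookkeeping before reuse

    if not pool.get("AllowUnauthenticatedIdentities"):
        print("Guest access already disabled")
        return

    # update_identity_pool expects the full pool definition, so reuse the
    # described configuration and flip only the guest-access flag.
    pool["AllowUnauthenticatedIdentities"] = False
    client.update_identity_pool(**pool)
    print(f"Disabled guest access on {identity_pool_id}")

if __name__ == "__main__":
    disable_guest_access("us-east-1:00000000-0000-0000-0000-000000000000")
```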
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(53,53) or destinationPortRanges[*] contains _Port.inRange(53,53) ))] exists```","Azure Network Security Group allows all traffic on NetBIOS DNS (TCP Port 53) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on DNS TCP port 53. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict DNS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireUppercaseCharacters is false or requireUppercaseCharacters does not exist'```,"AWS IAM password policy does not have an uppercase character This policy identifies AWS accounts in which IAM password policy does not have an uppercase character. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. check 'Require at least one uppercase letter'.\n4. Click on 'Apply password policy'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case ""PowerState/running"" and ['properties.storageProfile'].['imageReference'].['publisher'] equal ignore case microsoftsqlserver and (['properties.osProfile'].['linuxConfiguration'] exists and ['properties.osProfile'].['linuxConfiguration'].['disablePasswordAuthentication'] is false)```","Azure SQL on Virtual Machine (Linux) with basic authentication This policy identifies Azure Virtual Machines that are hosted with SQL on them and have basic authentication. Azure Virtual Machines with basic authentication could allow attackers to brute force and gain access to SQL database hosted on it, which might lead to information leakage. 
It is recommended to use SSH keys for authentication to avoid brute force attacks on SQL database hosted virtual machines. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure existing Azure Virtual machine with SSH key authentication, Follow below URL:\nhttps://learn.microsoft.com/en-us/azure/virtual-machines/extensions/vmaccess#update-ssh-key\n\nIf changes are not reflecting you may need to take backup, Create new virtual machine with SSH key based authentication and delete the reported virtual machine.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(3389,3389) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on RDP port (3389) This policy identifies GCP Firewall rules which allow all inbound traffic on RDP port (3389). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the RDP port (3389) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where finding.source = 'AWS Inspector' AND finding.type = 'AWS Inspector Security Best Practices'```,"PCSUP-23654 This is applicable to all cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-service-accounts-list' AND json.rule = ( iamPolicy.bindings[*].members contains ""allUsers"" or iamPolicy.bindings[*].members contains ""allAuthenticatedUsers"" ) and ( disabled does not exist or disabled is false )```","GCP Service account is publicly accessible This policy identifies GCP Service accounts that are publicly accessible. GCP Service accounts are intended to be used by an application or compute workload, rather than a person. It can be granted permission to perform actions in the GCP project as any other GCP user. Allowing access to 'allUsers' or 'allAuthenticatedUsers' over a service account would allow unwanted access to the public and could lead to a security breach. As a security best practice, follow the Principle of Least Privilege and grant permissions to entities only on a need basis. It is recommended to avoid granting permission to 'allUsers' or 'allAuthenticatedUsers'. This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To revoke access from 'allusers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to the GCP console\n2. Navigate to the 'IAM and Admin' service (Left Panel)\n3. Go to 'Service Accounts'\n4. Click on the alerting service account\n5. Under the 'PERMISSIONS' tab, select the 'VIEW BY PRINCIPALS' tab\n6. Select the entries with 'allUsers' or 'allAuthenticatedUsers' \n7. Click on the 'REMOVE ACCESS' to revoke access from 'allusers'/'allAuthenticatedUsers'\n8. Click on 'CONFIRM'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-cluster' AND json.rule = status equals ACTIVE and settings[?any(name equals containerInsights and value equals disabled)] exists```,"AWS ECS cluster with container insights feature disabled This policy identifies ECS clusters that are disabled with the container insights feature. Container Insights collects metrics at the cluster, task, and service levels. As a best practice, enable container insights to start collecting the data available through these logs for the reported ECS cluster. For details: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-cluster.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable container insights feature in your existing ECS cluster follow below mentioned URL:\n\nhttps://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-cluster.html#deploy-container-insights-ECS-existing." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = settings.backupConfiguration.enabled is false and instanceType is not member of (""READ_REPLICA_INSTANCE"",""ON_PREMISES_INSTANCE"")```","GCP SQL database instance is not configured with automated backups This policy identifies the GCP SQL database instances that are not configured with automated backups. Automated backups need to be set for any instance that contains data that should be protected from loss or damage. It is recommended to have all SQL database instances set to enable automated backups. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to 'SQL'\n3. Click on the reported SQL instance\n4. From the left menu go to 'Backups'\n5. Go to section 'Settings', click on 'EDIT'\n6. From the pop-up window 'Edit backups settings' click on 'Automated backups'\n7. Provide a time window from the available dropdown\n8. Click on 'Save'\n\n." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireSymbols equals null or requireSymbols is false or requireSymbols does not exist'```,"AWS IAM password policy does not have a symbol Checks to ensure that IAM password policy requires a symbol. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. 
Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. Check 'Require at least one non-alphanumeric character'.\n4. Click on 'Apply password policy'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'config.isPythonVersionLatest exists and config.isPythonVersionLatest equals false'```,"Azure App Service Web app doesn't use latest Python version This policy identifies App Service Web apps that are not configured with the latest Python version. Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. It is recommended to use the latest Python version for web apps in order to take advantage of security fixes, if any. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'App Services' dashboard\n3. Select the reported web app service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Click on 'General settings' tab, Ensure that Stack is set to Python and Minor version is set to latest version.\n6. Click on 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/delete"" as X; count(X) less than 1```","Azure Activity log alert for Delete network security group does not exist This policy identifies the Azure accounts in which activity log alert for Delete network security group does not exist. Creating an activity log alert for the Delete network security group gives insight into network access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Network Security Group (Microsoft.Network/networkSecurityGroups)'; other fields can be set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals ""ACTIVE"" and ( metadata.proxy-mode equals ""mail"" or metadata.proxy-user-mail exists )```","GCP Vertex AI Workbench user-managed notebook's JupyterLab interface access mode is set to single user This policy identifies GCP Vertex AI Workbench user-managed notebooks with JupyterLab interface access mode set to single user. Vertex AI Workbench user-managed notebook can be accessed using the web-based JupyterLab interface. Access mode controls access to this interface. Allowing access to only a single user could limit collaboration, increase chances of credential sharing, and hinder security audits and reviews of the resource. 
It is recommended to avoid single user access and make use of the service account access mode for user-managed notebooks. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Access mode cannot be changed for existing Vertex AI Workbench user-managed notebooks. A new Vertex AI Workbench user-managed notebook should be created.\n\nTo create a new Vertex AI Workbench user-managed notebook with access mode set to service account, please refer to the steps below:\n1. Login to the GCP console\n2. Under 'Vertex AI', navigate to the 'Workbench' (Left Panel)\n3. Select 'USER-MANAGED NOTEBOOKS' tab\n4. Click 'CREATE NEW'\n5. Click 'ADVANCED OPTIONS'\n6. Configure the instance as required\n7. Go to 'IAM and security' tab\n8. Select 'Service account'\n9. Click 'CREATE'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_temp_files')] does not exist or settings.databaseFlags[?(@.name=='log_temp_files')].value does not equal 0)""```","GCP PostgreSQL instance database flag log_temp_files is not set to 0 This policy identifies PostgreSQL database instances in which database flag log_temp_files is not set to 0. The log_temp_files flag controls the logging of names and size of temporary files. Configuring log_temp_files to 0 causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. A value of -1 disables temporary file information logging. If all temporary files are not logged, it may be more difficult to identify potential performance issues that may be either poor application coding or deliberate resource starvation attempts. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Configuration options', click on 'Add item' in 'Flags' section, choose the flag 'log_temp_files' from the drop-down menu and set the value as '0'\nOR\nIf the flag has been set to other than 0, Under 'Configuration options', In 'Flags' section choose the flag 'log_temp_files' and set the value as '0'\n6. Click Save." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = authType equal ignore case NONE```,"Copy of PCSUP-16458-CLI-Test This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to 'Configuration' tab\n7. Select 'Function URL'\n8. Click on 'Edit'\n9. Set 'Auth type' to 'AWS_IAM'\n10. Click on 'Save'." 
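The Lambda entry above flags function URLs whose auth type is NONE and remediates by switching to AWS_IAM; the same fix can be applied with boto3. A minimal sketch follows, with the function name and region as placeholders.

```python
# Sketch: switch a Lambda function URL from AuthType NONE to AWS_IAM,
# matching the console steps in the entry above. Names are placeholders.
import boto3

def require_iam_auth_on_function_url(function_name: str, region: str = "us-east-1") -> None:
    client = boto3.client("lambda", region_name=region)

    current = client.get_function_url_config(FunctionName=function_name)
    if current["AuthType"] == "AWS_IAM":
        print(f"{function_name}: function URL already requires IAM auth")
        return

    client.update_function_url_config(FunctionName=function_name, AuthType="AWS_IAM")
    print(f"{function_name}: function URL now requires AWS_IAM auth")
    # Note: callers must now sign requests (SigV4) and be granted
    # lambda:InvokeFunctionUrl on this function for the URL to keep working.

if __name__ == "__main__":
    require_iam_auth_on_function_url("my-example-function")
```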
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and Action contains sts:* and Resource equals * and Condition does not exist)] exists```,"AWS IAM policy overly permissive to STS services This policy identifies the IAM policies that are overly permissive to STS services. AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). It is recommended to follow the principle of least privilege, ensuring that only restricted STS permissions are granted on restricted resources. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'IAM' service\n3. Click on 'Policies' in the left-hand panel and click on the reported IAM policy\n4. Under the Permissions tab, change the policy document to be more restrictive so that it only allows restricted STS permissions on selected resources instead of wildcards (sts:* and Resource:*) OR put a condition statement with least privilege access.." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-ssh-public-keys' AND json.rule = '(_DateTime.ageInDays($.uploadDate) > 91) and status==Active'```,"AWS IAM SSH keys for AWS CodeCommit have aged more than 90 days without being rotated This policy identifies all of your IAM SSH public keys which haven't been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect your AWS CodeCommit repositories. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console\n2. Go to IAM and select Users\n3. Choose the reported user\n4. Go to Security Credential\n5. Delete the SSH Key ID and upload a new SSH Key\nKey creation steps: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-ssh-unixes.html." ```config from cloud.resource where cloud.type = 'azure' and api.name= 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.allowSharedKeyAccess is true and properties.sasPolicy does not exist```,"Azure Storage account not configured with SAS expiration policy This policy identifies Azure Storage accounts not configured with SAS expiration policy. A Shared Access Signature (SAS) expiration policy specifies a recommended interval over which the SAS is valid. SAS expiration policies apply to a service SAS or an account SAS. When a user generates service SAS or an account SAS with a validity interval that is larger than the recommended interval, they'll see a warning. If Azure Storage logging with Azure Monitor is enabled, then an entry is written to the Azure Storage logs. It is recommended that you limit the interval for a SAS in case it is compromised. For more details: https://learn.microsoft.com/en-us/azure/storage/common/sas-expiration-policy This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To configure an expiration policy for shared access signatures for the reported Storage account, follow bellow URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/sas-expiration-policy." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ""ACTIVE"" AND environment equals ""GEN_1"" AND serviceConfig.securityLevel exists AND serviceConfig.securityLevel does not equal ""SECURE_ALWAYS""```","GCP Cloud Function v1 is using unsecured HTTP trigger This policy identifies GCP Cloud Functions v1 that are using unsecured HTTP trigger. Using HTTP triggers for cloud functions poses significant security risks, including vulnerability to interception, tampering, and various attacks like man-in-the-middle. Conversely, HTTPS triggers provide encrypted communication, safeguarding sensitive data and ensuring confidentiality. HTTPS also supports authentication mechanisms, enhancing overall security and trust. It is recommended to enable 'Require HTTPS' for HTTP triggers for all cloud functions v1. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Under section 'Trigger', click on 'EDIT' for HTTP trigger\n6. Select the checkbox against the field 'Require HTTPS'\n7. Click on 'SAVE'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'." "```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains ""compute@developer.gserviceaccount.com"" and roles[*] contains ""roles/editor"" as X; config from cloud.resource where api.name = 'gcloud-cloud-run-services-list' AND json.rule = spec.template.spec.serviceAccountName contains ""compute@developer.gserviceaccount.com"" as Y; filter ' $.X.user equals $.Y.spec.template.spec.serviceAccountName '; show Y; ```","GCP Cloud Run service is using default service account with editor role This policy identifies GCP Cloud Run services that are utilizing the default service account with the editor role. When you create a new Cloud Run service, the compute engine default service account is associated with the service by default if any other service account is not configured. The compute engine default service account is automatically created when the Compute Engine API is enabled and is granted the IAM basic Editor role if you have not disabled this behavior explicitly. These permissions can be exploited to get admin access to the GCP project. To be compliant with the principle of least privileges and prevent potential privilege escalation, it is recommended that Cloud Run services are not assigned the 'Compute Engine default service account' especially when the editor role is granted to the service account. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is not possible to change service account of an existing revision of cloud run service. 
To update the service account used, a new revision can be deployed.\n\nTo deploy a new service with a user-managed service account, please refer to the URLs given below:\nhttps://cloud.google.com/run/docs/securing/service-identity#deploying_a_new_service_with_a_user-managed_service_account." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and name does not start with ""gke-"" and (shieldedInstanceConfig does not exist or shieldedInstanceConfig.enableSecureBoot is false )```","GCP VM instance with Shielded VM Secure Boot disabled This policy identifies GCP VM instances that have Shielded VM Secure Boot disabled. Secure Boot is a security feature that ensures only trusted, digitally signed software runs during the boot process of a computer. Enabling it helps protect against malware and unauthorized software by verifying the integrity of the bootloader and operating system. Without Secure Boot, systems are vulnerable to rootkits, bootkits, and other malicious code that can compromise the system from the start, making it difficult to detect and remove such threats. It is recommended to enable Shielded VM secure boot for GCP VM instances. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to 'Compute Engine' and then 'VM instances'\n3. Click on the reported VM name\n4. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue\n5. Once the the VM has been stopped, click on the 'EDIT' button\n6. Under 'Shielded VM', enable 'Turn on Secure Boot'\n7. Click on 'Save'\n8. Click on 'START/RESUME' from the top menu.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = shieldedNodes.enabled does not exist or shieldedNodes.enabled equals ""false""```","GCP Kubernetes cluster Shielded GKE Nodes feature disabled This policy identifies GCP Kubernetes clusters for which the Shielded GKE Nodes feature is not enabled. Shielded GKE nodes protect clusters against boot- or kernel-level malware or rootkits which persist beyond infected OS. It is recommended to enable Shielded GKE Nodes for all the Kubernetes clusters. FMI: https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-nodes This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Google cloud console\n2. Navigate to Google Kubernetes Engine, click on 'Clusters' to get the list\n3. Browse the alerted cluster\n4. Click on the 'Edit' button on top\n5. From the drop-down for 'Shielded GKE Nodes' select 'Enable'\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = vulnerabilityAssessments[*].properties.storageContainerPath exists and vulnerabilityAssessments[*].properties.recurringScans.emailSubscriptionAdmins is false```,"Azure SQL Server ADS Vulnerability Assessment 'Also send email notifications to admins and subscription owners' is disabled This policy identifies Azure SQL Server which has ADS Vulnerability Assessment 'Also send email notifications to admins and subscription owners' disabled. 
This setting enables ADS - VA scan reports being sent to admins and subscription owners. It is recommended to enable 'Also send email notifications to admins and subscription owners' setting, which would help in reducing time required for identifying risks and taking corrective measures. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n5. In 'VULNERABILITY ASSESSMENT SETTINGS' section, Ensure 'Also send email notifications to admins and subscription owners' is checked\n6. 'Save' your changes." ```config from cloud.resource where cloud.type = 'azure' AND cloud.accountgroup NOT IN ( 'PCF Azure') AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].['osDisk'].['vhd'].['uri'] exists```,"RomanPolicy This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = (status equals RUNNING and name does not start with ""gke-"") and serviceAccounts[*].email contains ""-compute@developer.gserviceaccount.com"" and serviceAccounts[*].scopes[*] any equal ""https://www.googleapis.com/auth/cloud-platform""```","GCP VM instance using a default service account with Cloud Platform access scope This policy identifies the GCP VM instances that are using a default service account with cloud-platform access scope. To compliant with the principle of least privileges and prevent potential privilege escalation it is recommended that instances are not assigned to default service account 'Compute Engine default service account' with scope 'cloud-platform'. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP portal\n2. Go to Compute Engine\n3. Choose VM instances\n4. Click on the reported VM instance for which you want to change the service account\n5. If the instance is not stopped, click the 'Stop' button. Wait for the instance to be stopped\n6. Next, click the 'Edit' button\n7. Scroll down to the 'Service Account' section, From the drop-down menu, select the desired service account.\n8. Ensure 'Allow full access to all Cloud APIs' is not selected or 'Cloud Platform' under 'Set access for each API' is not enabled\n9. Click the 'Save' button and then click 'START' to start the VM instance.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-compute' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.properties.subnet.id does not exist```,"Azure Machine Learning compute instance not configured inside virtual network This policy identifies Azure Machine Learning compute instances that are not configured within a virtual network. Azure Machine Learning compute instances outside a Virtual Network are exposed to external threats, as they may be publicly accessible. 
Placing the instance within a Virtual Network improves security by limiting access to trusted virtual machines and services within the same network. This ensures secure communication and blocks unauthorized public access. As a security best practice, it is recommended to deploy the Azure Machine Learning compute instances inside a virtual network. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Configuring an existing Azure Machine Learning compute instance inside a Virtual Network without deleting and recreating it is not supported. To ensure security, it is recommended to set up the compute instance within a Virtual Network from the start.\n\nTo create a new compute instance inside a Virtual Network:\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported compute instance is associated with\n4. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n5. A new tab will open for Azure ML Studio\n6. In the left panel, under 'Manage' section, click on the 'Compute'\n7. Click 'New' to create a new compute instance\n8. In the 'Security' tab, under the 'Virtual network' section, enable the 'Enable virtual network' to configure it within a Virtual network\n9. Select 'Review + Create' to create the compute instance." ```config from cloud.resource where api.name = 'aws-waf-classic-global-web-acl-resource' as X; config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = webACLId is not empty as Y; filter '$.X.webACL.webACLId equals $.Y.webACLId'; show Y;```,"AWS CloudFront not configured with AWS Web Application Firewall v2 (AWS WAFv2) This policy identifies AWS CloudFront which is not configured with AWS Web Application Firewall v2 (AWS WAFv2). As a best practice, configure the AWS WAFv2 service on the CloudFront to protect against application-layer attacks. To block malicious requests to your CloudFront, define the block criteria in the WAFv2 web access control list (web ACL). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. On 'Edit Distribution' page, from 'AWS WAF Web ACL' dropdown select WAFv2 ACL which you want to apply\nNote: In case no WAFv2 ACL found from 'AWS WAF Web ACL' dropdown list, Please follow below URL to create WAFv2 ACL:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/web-acl-creating.html\n6. Click on 'Save changes'." 
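The CloudFront/WAFv2 association described just above can likewise be done programmatically. A hedged boto3 sketch follows; the distribution ID and web ACL ARN are placeholders, and a WAFv2 web ACL must already exist in the CLOUDFRONT scope.

```python
# Sketch: associate an existing WAFv2 (CLOUDFRONT scope) web ACL with a distribution.
# The distribution ID and web ACL ARN below are placeholders.
import boto3

def attach_wafv2_web_acl(distribution_id: str, web_acl_arn: str) -> None:
    cloudfront = boto3.client("cloudfront")  # CloudFront is a global service

    response = cloudfront.get_distribution_config(Id=distribution_id)
    config = response["DistributionConfig"]
    etag = response["ETag"]  # required as IfMatch for the update call

    if config.get("WebACLId") == web_acl_arn:
        print("Distribution already uses the desired WAFv2 web ACL")
        return

    # For WAFv2, WebACLId carries the full web ACL ARN (classic WAF used an ID).
    config["WebACLId"] = web_acl_arn
    cloudfront.update_distribution(DistributionConfig=config,
                                   Id=distribution_id,
                                   IfMatch=etag)
    print(f"Attached {web_acl_arn} to distribution {distribution_id}")

if __name__ == "__main__":
    attach_wafv2_web_acl(
        "E1234567890ABC",
        "arn:aws:wafv2:us-east-1:123456789012:global/webacl/example/00000000-0000-0000-0000-000000000000",
    )
```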
"```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = (sseAlgorithm contains ""aws:kms"" or sseAlgorithm contains ""aws:kms:dsse"") and kmsMasterKeyID exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equal ignore case CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals ""null"")as Y; filter '$.X.kmsMasterKeyID contains $.Y.key.keyArn'; show X;```","AWS S3 bucket encrypted with Customer Managed Key (CMK) is not enabled for regular rotation This policy identifies Amazon S3 buckets that use Customer Managed Keys (CMKs) for encryption but are not enabled with key rotation. Amazon S3 bucket encryption key rotation failure can result in prolonged exposure of sensitive data and potential compliance violations. As a security best practice, it is important to rotate these keys periodically. This ensures that if the keys are compromised, the data in the underlying service remains secure with the new keys. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Recommendation:\n\nThe following steps are recommended to enable the automatic rotation of the KMS key used by the S3 bucket\n\n1. Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket reported in the alert.\n3. Click on the 'Properties' tab.\n4. Under the 'Default encryption' section, click on the KMS key link in 'Encryption key ARN'.\n5. Under the 'Key rotation' tab on the navigated KMS key window, Enable 'Automatically rotate this CMK every year'.\n6. Click on Save.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(4333,4333) or destinationPortRanges[*] contains _Port.inRange(4333,4333) ))] exists```","Azure Network Security Group allows all traffic on MSQL (TCP Port 4333) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on MSQL (TCP Port 4333). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict MSQL solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. 
Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and (acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (WriteAcp,Write,FullControl))] exists or acl.grantsAsList[?any(grantee equals AuthenticatedUsers and permission is member of (WriteAcp,Write,FullControl))] exists)) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Put or Action contains s3:Create or Action contains s3:Replicate or Action contains s3:Update or Action contains s3:Delete) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```","AWS S3 bucket publicly writable This policy identifies the S3 buckets that are publicly writable by Put/Create/Update/Replicate/Write/Delete bucket operations. These permissions permit anyone, malicious or not, to Put/Create/Update/Replicate/Write/Delete bucket operations on your S3 bucket if they can guess the namespace. S3 service does not protect the namespace if ACLs and Bucket policy is not handled properly, with this configuration you may be at risk of compromise of critical data by leaving S3 public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Under 'Access Control List', Click on 'Authenticated users group' and uncheck all items\nc. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to PUT/CREATE/REPLICATE/DELETE objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific PUT/CREATE/REPLICATE/DELETE functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. 
Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.." "```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(5900,5900) or destinationPortRanges[*] contains _Port.inRange(5900,5900) ))] exists```","Azure Network Security Group allows all traffic on VNC Server (TCP Port 5900) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on VNC Server (TCP Port 5900). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict VNC Server solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_statement_stats or settings.databaseFlags[?any(name contains log_statement_stats and value contains on)] exists)""```","GCP PostgreSQL instance database flag log_statement_stats is not set to off This policy identifies PostgreSQL database instances in which database flag log_statement_stats is not set to off. The log_statement_stats flag enables a crude profiling method for logging end-to-end performance statistics of a SQL query. This can be useful for troubleshooting but may increase the number of logs significantly and have performance overhead. It is recommended to set log_statement_stats as off. Note: The flag 'log_statement_stats' cannot be enabled with other module statistics (log_parser_stats, log_planner_stats, log_executor_stats). This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. 
If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_statement_stats' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_statement_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/publicIPAddresses/write"" as X; count(X) less than 1```","Azure Activity log alert for Create or update public IP address rule does not exist This policy identifies the Azure accounts in which activity log alert for Create or update public IP address rule does not exist. Creating an activity log alert for create or update public IP address rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity. By enabling this monitoring, you get alerts whenever any changes are made to public IP address rules. As a best practice, it is recommended to have a activity log alert for create or update public IP address rule to enhance network security monitoring and detect suspicious activities. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Public Ip Address (Public Ip Address)' and Other fields you can set based on your custom settings.\n6. Click on Create." ```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' and json.rule = blockDeviceMappings[*].ebs.encrypted exists and blockDeviceMappings[*].ebs.encrypted is false```,"Enforce EBS Volume Encryption in EC2 Auto Scaling Configurations This policy helps ensure that your AWS EC2 Auto Scaling Launch Configurations are using encrypted EBS volumes, which is a crucial security measure to protect sensitive data. By checking for the presence of the Encrypted field and verifying that it is set to false, the policy alerts you to any instances where encryption is not enabled, allowing you to take corrective action and maintain a secure cloud environment. Adhering to this policy helps you comply with best practices and regulatory requirements for data protection in your public cloud deployment. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.addonProfiles.httpapplicationrouting.enabled is true or properties.addonProfiles.httpApplicationRouting.enabled is true```,"Azure AKS cluster HTTP application routing enabled HTTP application routing configures an Ingress controller in your AKS cluster. 
As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints. While this makes it easy to access applications that are deployed to your Azure AKS cluster, this add-on is not recommended for production use. This policy checks your AKS cluster HTTP application routing add-on setting and alerts if enabled. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To disable HTTP application routing for your AKS cluster, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/http-application-routing#remove-http-routing." ```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-group-settings' and json.rule = values[?any( name equals LockoutThreshold and (value greater than 10 or value does not exist))] exists```,"Azure Microsoft Entra ID account lockout threshold greater than 10 This policy identifies if the account lockout threshold for Microsoft Entra ID (formerly Azure AD) accounts is configured to allow more than 10 failed login attempts before the account is locked out. A high lockout threshold (greater than 10) increases the risk of brute-force or password spray attacks, where attackers can attempt multiple passwords over time without triggering account lockouts, leaving accounts vulnerable to unauthorized access. Setting the lockout threshold to a reasonable value (e.g., less than or equal to 10) balances usability and security by limiting the number of login attempts before an account is locked, reducing exposure to attacks while preventing frequent unnecessary lockouts for legitimate users. As a security best practice, it is recommended to configure the account lockout threshold to less than or equal to 10. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under Manage, select Security\n4. Under Manage, select Authentication methods\n5. Under Manage, select Password protection\n6. Set the 'Lockout threshold' to 10 or fewer\n7. Click 'Save'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equal ignore case Succeeded AND (properties.enableRbacAuthorization does not exist or properties.enableRbacAuthorization is false)```,"Azure Key Vault Role Based Access control is disabled This policy identifies Azure Key Vault instances where Role-Based Access Control (RBAC) is not enabled. Without RBAC, managing access is less secure and can lead to improper access permissions, increasing the risk of unauthorized access to sensitive data. RBAC provides finer-grained access control, enabling secure and manageable permissions for key vault secrets, keys, and certificates. This allows for detailed permissions and the use of privileged identity management for enhanced security with Just-In-Time (JIT) access management. As best practice, it is recommended to enable RBAC for all Azure Key Vaults to ensure secure and manageable access control. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: Note: Setting Azure RBAC permission model invalidates all access policies permissions. It can cause outages when equivalent Azure roles aren’t assigned.\n\n1. Login to the Azure portal\n2. Select ‘All services’ > ‘Key vaults’\n3. Select the reported Key vault\n4. Select ‘Access configuration’ under the ‘Settings’ section\n5. Select ‘Azure role-based access control’ under ‘Permission model’ and click ‘Apply’ at the bottom of the page\n6. Next assign a Role to grant access to the Key vault\n - Select ‘Access control (IAM)’ from the left panel\n - Open the ‘Add role assignment’ pane\n - Select the appropriate role under ‘Role’ (e.g., ‘Key Vault Contributor’)\n - Assign the role to a user, group, or application by searching for the name or ID under ‘Select members’\n - Click 'Review + Assign' to apply the changes." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(10255,10255) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp or IPProtocol contains ""all"")))] exists as X; config from cloud.resource where api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING as Y; filter '$.X.network contains $.Y.networkConfig.network' ; show X;```","GCP Firewall rule exposes GKE clusters by allowing all traffic on read-only port (10255) This policy identifies GCP Firewall rule allowing all traffic on read-only port (10255) which exposes GKE clusters. In GKE, Kubelet exposes a read-only port 10255 which shows the configurations of all pods on the cluster at the /pods API endpoint. GKE itself does not expose this port to the Internet as the default project firewall configuration blocks external access. However, it is possible to inadvertently expose this port publicly on GKE clusters by creating a Google Compute Engine VPC firewall for GKE nodes that allows traffic from all source ranges on all the ports. This configuration publicly exposes all pod configurations, which might contain sensitive information. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: As port 10255 exposes sensitive information of GKE pod configuration it is recommended to disable this firewall rule. \nOtherwise, remove the overly permissive source IPs following below steps,\n\n1. Login to GCP Console\n2. Navigate to 'VPC Network'(Left Panel)\n3. Go to the 'Firewall' section (Left Panel)\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." 
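As a rough companion to the firewall policy above, the sketch below shells out to the gcloud CLI (assumed installed and authenticated) and flags enabled ingress rules that are open to 0.0.0.0/0 and cover kubelet's read-only port 10255; the project ID is a placeholder.

```python
# Illustrative check only: list enabled ingress firewall rules open to
# 0.0.0.0/0 whose allowed ports cover the kubelet read-only port 10255.
import json
import subprocess

PROJECT = "my-project-id"  # hypothetical project ID


def covers_port(spec: str, port: int) -> bool:
    """True if a firewall port spec like '10255' or '10250-10260' covers `port`."""
    if "-" in spec:
        lo, hi = map(int, spec.split("-"))
        return lo <= port <= hi
    return int(spec) == port


rules = json.loads(subprocess.check_output(
    ["gcloud", "compute", "firewall-rules", "list",
     "--project", PROJECT, "--format=json"]))

for rule in rules:
    if rule.get("disabled") or rule.get("direction") != "INGRESS":
        continue
    if "0.0.0.0/0" not in rule.get("sourceRanges", []):
        continue
    for allowed in rule.get("allowed", []):
        ports = allowed.get("ports")
        # A missing 'ports' key means all ports are allowed for that protocol.
        if ports is None or any(covers_port(p, 10255) for p in ports):
            print(f"Review firewall rule '{rule['name']}' on network {rule['network']}")
            break
```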
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(5500,5500) or destinationPortRanges[*] contains _Port.inRange(5500,5500) ))] exists```","Azure Network Security Group allows all traffic on VNC Listener (TCP Port 5500) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on VNC Listener TCP port 5500. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict VNC Listener solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.." ```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-acm-describe-certificate' as Y; filter '($.X.description.listenerDescriptions[*].listener.sslcertificateId contains $.Y.certificateArn and ((_DateTime.ageInDays($.Y.notAfter) > -90 and (_DateTime.ageInDays($.Y.notAfter) < 0 or _DateTime.ageInDays($.Y.notAfter) == 0)) or (_DateTime.ageInDays($.Y.notAfter) > 0)))'; show X;```,"AWS Elastic Load Balancer (ELB) with ACM certificate expired or expiring in 90 days This policy identifies Elastic Load Balancers (ELB) which are using ACM certificates expired or expiring in 90 days. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM you can request a certificate or deploy an existing ACM or external certificate to AWS resources. As a best practice, it is recommended to reimport expiring/expired certificates while preserving the ELB associations of the original certificate. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Under 'Actions' drop-down click on 'Reimport certificate'\n6. On the 'Import a certificate' page:\n6a. For 'Certificate body*', paste the PEM-encoded certificate to import\n6b. 
For 'Certificate private key*', paste the PEM-encoded, unencrypted private key that matches the SSL/TLS certificate public key\n6c. (Optional) For 'Certificate chain', paste the PEM-encoded certificate chain delivered\n6d. Click the 'Review and import' button to continue the process\n7. On the 'Review and import' page, review the imported certificate details then click on 'Import'." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled and (properties.ipAllowlist does not exist or properties.ipAllowlist is empty) and properties.hbiWorkspace is true```,"Azure Machine learning workspace configured with high business impact data have unrestricted network access This policy identifies Azure Machine Learning workspaces that are configured with high business impact data and have unrestricted network access. Overly permissive public network access allows access to the resource over the internet using a public IP address, and a resource holding High Business Impact (HBI) data could lead to sensitive data exposure. As a best practice, it is recommended to limit access to your workspace and endpoint to specific public internet IP addresses, ensuring that only authorized entities can access them according to business requirements. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To restrict internet IP ranges on your existing Machine learning workspace, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link?view=azureml-api-2&tabs=azure-portal#enable-public-access-only-from-internet-ip-ranges-preview." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = rotationEnabled is true and owningService is not member of (appflow, databrew, datasync, directconnect, events, opsworks-cm, rds, sqlworkbench) and rotationRules.automaticallyAfterDays exists and rotationRules.automaticallyAfterDays greater than 90```","AWS Secrets Manager secret not configured to rotate within 90 days This policy identifies AWS Secrets Manager secrets that are not configured to automatically rotate within 90 days. Rotating secrets minimizes the risk of compromised credentials and reduces exposure to potential threats. Failing to rotate secrets increases the risk of security breaches and prolonged exposure to threats. It is recommended to configure automatic rotation in AWS Secrets Manager to replace long-term secrets with short-term ones, reducing the risk of compromise. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set up automatic rotation for Amazon RDS, Amazon Aurora, Amazon Redshift, or Amazon DocumentDB secrets, refer to the below link:\n\nhttps://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html\n\nTo set up automatic rotation for non-database AWS Secrets Manager secrets, refer to the below link:\nhttps://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html." 
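For context, a minimal boto3 sketch of the same condition: secrets with rotation enabled but an interval longer than 90 days. Unlike the policy, it does not exclude service-managed secrets, and the client/credential setup is assumed.

```python
# Rough boto3 sketch: report Secrets Manager secrets whose rotation interval
# exceeds 90 days even though rotation is enabled.
import boto3

secretsmanager = boto3.client("secretsmanager")

for page in secretsmanager.get_paginator("list_secrets").paginate():
    for secret in page["SecretList"]:
        days = secret.get("RotationRules", {}).get("AutomaticallyAfterDays")
        if secret.get("RotationEnabled") and days and days > 90:
            print(f"{secret['Name']}: rotates every {days} days (> 90)")
```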
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-organization-asset-group-member' as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/editor or roles[*] contains roles/owner or roles[*] contains roles/appengine.* or roles[*] contains roles/browser or roles[*] contains roles/compute.networkAdmin or roles[*] contains roles/cloudtpu.serviceAgent or roles[*] contains roles/composer.serviceAgent or roles[*] contains roles/composer.ServiceAgentV2Ext or roles[*] contains roles/container.serviceAgent or roles[*] contains roles/dataflow.serviceAgent)' as Y; filter '($.X.groupKey.id contains $.Y.user)'; show Y;```,"pcsup-13966-ss-policy This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any((name equals SqlServers and properties.pricingTier does not equal Standard) or (name equals CosmosDbs and properties.pricingTier does not equal Standard) or (name equals OpenSourceRelationalDatabases and properties.pricingTier does not equal Standard) or (name equals SqlServerVirtualMachines and properties.pricingTier does not equal Standard))] exists```,"Azure Microsoft Defender for Cloud set to Off for Databases This policy identifies Azure Microsoft Defender for Cloud which has defender setting for Databases set to Off. Enabling Azure Defender for Cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behaviour analytics. Defender for Databases in Microsoft Defender for Cloud allows you to protect your entire database estate with attack detection and threat response for the most popular database types in Azure. It is highly recommended to enable Azure Defender for Databases. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Set 'Databases' Status to 'On'\n7. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-neptune-db-cluster' AND json.rule = Status contains available and DeletionProtection is false```,"AWS Neptune cluster deletion protection is disabled This policy identifies AWS Neptune clusters for which deletion protection is disabled. Enabling deletion protection for Neptune clusters prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Neptune Dashboard\n4. Select the reported Neptune cluster\n5. Click on 'Modify' from top\n6. Under 'Deletion protection' select 'Enable deletion protection'\n7. Click on 'Continue'\n8. Schedule the modifications and click on 'Modify cluster' \n ." 
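The Neptune remediation above can also be scripted; the following boto3 sketch (region is a placeholder, and clusters should be reviewed before modifying) enables deletion protection on available Neptune clusters.

```python
# Hedged sketch: enable deletion protection on available Neptune clusters.
import boto3

neptune = boto3.client("neptune", region_name="us-east-1")  # region is illustrative

for cluster in neptune.describe_db_clusters()["DBClusters"]:
    # The Neptune API shares its shape with RDS, so filter on the engine name.
    if cluster.get("Engine") != "neptune":
        continue
    if cluster.get("Status") == "available" and not cluster.get("DeletionProtection"):
        neptune.modify_db_cluster(
            DBClusterIdentifier=cluster["DBClusterIdentifier"],
            DeletionProtection=True,
        )
        print(f"Enabled deletion protection on {cluster['DBClusterIdentifier']}")
```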
"```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains MYSQL and (settings.databaseFlags[?(@.name=='local_infile')] does not exist or settings.databaseFlags[?(@.name=='local_infile')].value equals on)""```","GCP MySQL instance with local_infile database flag is not disabled This policy identifies MySQL instances in which local_infile database flag is not disabled. The local_infile flag controls the server-side LOCAL capability for LOAD DATA statements. Based on the settings in local_infile server refuses or permits local data loading by clients. Disabling the local_infile flag setting, would disable the local data loading by clients that have LOCAL enabled on the client side. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Select the MySQL instance for which you want to enable the database flag from the list\n4. Click 'Edit'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Configuration options'\n6. Click 'Add item', choose the flag 'local_infile' from the drop-down menu and set the value to 'Off'\nOR\nIf 'local_infile' database flag is already set to 'On', from the drop-down menu set the value to 'Off'\n7. Click on 'Save'." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = policy.Statement[*].Principal.AWS exists and policy.Statement[*].Effect contains ""Allow""```","priyanka tst This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_attack_path_policy_as_child_policies_ss_finding_2 Description-27d6b8cf-e576-4828-b0eb-0c0627c2e05f This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = 'statusEvents[?any(_DateTime.ageInDays(notBefore) > -7 and (_DateTime.ageInDays(notBefore) < 0 or (description exists and description does not contain ""Completed"")))] exists'```","AWS EC2 Instance Scheduled Events This policy identifies your Amazon EC2 instances which have a scheduled event. AWS can schedule events for your instances, such as a reboot, stop/start, or retirement. These events do not occur frequently. If one of your instances will be affected by a scheduled event, AWS sends an email to the email address that’s associated with your AWS account prior to the scheduled event, with details about the event, including the start and end date. Depending on the event, you might be able to take action to control the timing of the event. If AWS scheduled event is planned for within 7 days, this signature triggers an alert. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To remediate this alert, review and follow the steps at AWS: Scheduled Events for Your Instances as needed.\nFor more info: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_connections')] does not exist or settings.databaseFlags[?(@.name=='log_connections')].value equals off)""```","GCP PostgreSQL instance database flag log_connections is disabled This policy identifies PostgreSQL type SQL instances for which the log_connections database flag is disabled. PostgreSQL does not log attempted connections by default. Enabling the log_connections setting will create log entries for each attempted connection as well as successful completion of client authentication which can be useful in troubleshooting issues and to determine any unusual connection attempts to the server. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the PostgreSQL instance ID for which you want to enable the database flag from the list\n4. Click on 'Edit'\nNOTE: If the instance is stopped, You need to START the instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Customize your instance'\n6. To set a flag that has not been set on the instance before, click 'Add FLAG', choose the flag 'log_connections' from the drop-down menu and set the value as 'on'.\n7. If it is already set to 'off' for 'log_connections', from the drop-down menu set the value as 'on'\n8. Click on 'DONE' for the added/edited flag.\n9. Click on 'Save'." ```config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and policyName contains AWSCloudShellFullAccess and (entities.policyRoles[*].roleName exists or entities.policyUsers[*].userName exists or entities.policyGroups[*].groupName exists)```,"AWS IAM AWSCloudShellFullAccess policy is attached to IAM roles, users, or IAM groups This policy identifies the AWSCloudShellFullAccess policy attached to IAM roles, users, or IAM groups. AWS CloudShell is a convenient way of running CLI commands against AWS services. The 'AWSCloudShellFullAccess' IAM policy, providing unrestricted CloudShell access, poses a risk of data exfiltration, allowing malicious admins to exploit file upload/download capabilities for unauthorized data transfer. As a security best practice, it is recommended to grant least privilege access like granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the IAM console at https://console.aws.amazon.com/iam/\n2. In the left pane, select Policies\n3. Search for and select AWSCloudShellFullAccess\n4. On the Entities attached tab, for each item, check the box and select Detach." 
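Steps 3-4 of the AWSCloudShellFullAccess mitigation above can be approximated with boto3 as shown below; the detach calls are left commented out so attachments are only removed after review. The policy ARN shown is the standard AWS-managed one.

```python
# Sketch: list every role, user, and group attached to the AWSCloudShellFullAccess
# managed policy so each attachment can be reviewed (and detached if appropriate).
import boto3

iam = boto3.client("iam")
POLICY_ARN = "arn:aws:iam::aws:policy/AWSCloudShellFullAccess"

for page in iam.get_paginator("list_entities_for_policy").paginate(PolicyArn=POLICY_ARN):
    for role in page["PolicyRoles"]:
        print("role:", role["RoleName"])
        # iam.detach_role_policy(RoleName=role["RoleName"], PolicyArn=POLICY_ARN)
    for user in page["PolicyUsers"]:
        print("user:", user["UserName"])
        # iam.detach_user_policy(UserName=user["UserName"], PolicyArn=POLICY_ARN)
    for group in page["PolicyGroups"]:
        print("group:", group["GroupName"])
        # iam.detach_group_policy(GroupName=group["GroupName"], PolicyArn=POLICY_ARN)
```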
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equals Running and ((config.javaVersion exists and config.javaVersion does not equal 1.8 and config.javaVersion does not equal 11 and config.javaVersion does not equal 17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JAVA and (config.linuxFxVersion contains 8 or config.linuxFxVersion contains 11 or config.linuxFxVersion contains 17) and config.linuxFxVersion does not contain 8-jre8 and config.linuxFxVersion does not contain 11-java11 and config.linuxFxVersion does not contain 17-java17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JBOSSEAP and config.linuxFxVersion does not contain 7-java8 and config.linuxFxVersion does not contain 7-java11 and config.linuxFxVersion does not contain 7-java17) or (config.linuxFxVersion contains TOMCAT and config.linuxFxVersion does not end with 10.0-jre8 and config.linuxFxVersion does not end with 9.0-jre8 and config.linuxFxVersion does not end with 8.5-jre8 and config.linuxFxVersion does not end with 10.0-java11 and config.linuxFxVersion does not end with 9.0-java11 and config.linuxFxVersion does not end with 8.5-java11 and config.linuxFxVersion does not end with 10.0-java17 and config.linuxFxVersion does not end with 9.0-java17 and config.linuxFxVersion does not end with 8.5-java17))'```,"Bobby run and build This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action equals lambda:* or Action[*] equals lambda:*) and (Resource equals * or Resource[*] equals *) and Condition does not exist)] exists```,"AWS IAM policy overly permissive to Lambda service This policy identifies the IAM policies that are overly permissive to Lambda service. AWS provides serverless computational functionality through their Lambda service. Serverless functions allow organizations to run code for applications or backend services without provisioning virtual machines or management servers. It is recommended to follow the principle of least privileges, ensuring that only restricted Lambda services for restricted resources. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'IAM' service\n3. Click on the 'Policies' in left hand panel and Click on the reported IAM policy\n4. Under Permissions tab, Change the element of the policy document to be more restrictive so that it only allows restricted Lambda permissions on selected resources instead of wildcards (Lambda:* and Resource:*) OR Put condition statement with least privilege access.." 
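As an illustration of the least-privilege check described above, the boto3 sketch below scans attached customer-managed policies for statements that allow lambda:* on all resources without a Condition; handling of the policy document encoding is included since SDK versions differ.

```python
# Illustrative check: find attached customer-managed IAM policies that allow
# lambda:* on all resources with no Condition block (mirrors the rule above).
import json
from urllib.parse import unquote

import boto3

iam = boto3.client("iam")


def as_list(value):
    """IAM JSON allows a single string or a list for Action/Resource."""
    return [value] if isinstance(value, str) else (value or [])


for page in iam.get_paginator("list_policies").paginate(Scope="Local", OnlyAttached=True):
    for policy in page["Policies"]:
        doc = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )["PolicyVersion"]["Document"]
        if isinstance(doc, str):  # some SDK versions return URL-encoded JSON
            doc = json.loads(unquote(doc))
        statements = doc.get("Statement", [])
        if isinstance(statements, dict):  # a single statement may be a bare object
            statements = [statements]
        for stmt in statements:
            if (stmt.get("Effect") == "Allow"
                    and "lambda:*" in as_list(stmt.get("Action"))
                    and "*" in as_list(stmt.get("Resource"))
                    and "Condition" not in stmt):
                print(f"Overly permissive to Lambda: {policy['PolicyName']}")
```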
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case ""/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace""```","test again - delete it This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-composer-environment' AND json.rule = state equals ""RUNNING"" and config.webServerNetworkAccessControl.allowedIpRanges[?any( value equals ""0.0.0.0/0"" or value equals ""::0/0"" )] exists ```","GCP Composer environment web server network access control allows access from all IP addresses This policy identifies GCP Composer environments with web server network access control that allows access from all IP addresses. Web server network access control defines which IP addresses will have access to the Airflow web server. By default, web server network access control is set to allow all connections from the public internet. Allowing all traffic to the composer environment may allow a bad actor to brute force their way into the system and potentially get access to the entire network. As a best practice, restrict traffic solely from known IP addresses and limit access to known hosts, services, or specific entities only. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure web server network access of an existing Cloud Composer 1 and Cloud Composer 2 environments, follow the steps given below:\n1. Login to the GCP console\n2. Navigate to the 'Composer' service (Left Panel)\n3. Click on the alerting composer environment\n4. Click on the 'ENVIRONMENT CONFIGURATION' tab\n5. Under 'Network configuration', click on the 'EDIT' button for the 'Web server access control' setting\n6. Select 'Allow access only from specific IP addresses'\n7. Add the desired IPs and IP ranges to be allowed.\n8. Click the 'Save' button.\n\nTo configure web server network access of a new Cloud Composer 1 environment, please refer to the URLs given below:\nhttps://cloud.google.com/composer/docs/how-to/managing/creating#web-server-access\n\nTo configure web server network access of a new Cloud Composer 2 environment, please refer to the URLs given below:\nhttps://cloud.google.com/composer/docs/composer-2/create-environments#web-server-access\n\nNote: Cloud Composer 1 is nearing the end of support. The creation of new Cloud Composer 1 environments might be restricted. Further, updates to the existing Cloud Composer 1 environment may be restricted. In such cases, it is recommended to migrate to Cloud Composer 2. To migrate to Cloud Composer 2, please refer to the URLs given below and configure web server network access to limit the access for the new environment:\nhttps://cloud.google.com/composer/docs/migrate-composer-2-snapshots-af-2." 
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule='loggingConfiguration.targetBucket equals null or loggingConfiguration.targetPrefix equals null'```,"Copy 2 of Bobby Copy of AWS Access logging not enabled on S3 buckets Checks for S3 buckets without access logging turned on. Access logging allows customers to view complete audit trail on sensitive workloads such as S3 buckets. It is recommended that Access logging is turned on for all S3 buckets to meet audit & compliance requirement This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service.\n2. Click on the the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select 'Enable logging' option.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = '((_DateTime.ageInDays($.properties.updatedOn) < 60) and (properties.principalType contains User))'```,"llatorre - RoleAssigment v5 This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = $.networkConfig.enableIntraNodeVisibility does not exist or $.networkConfig.enableIntraNodeVisibility is false```,"GCP Kubernetes cluster intra-node visibility disabled With Intranode Visibility, all network traffic in your cluster is seen by the Google Cloud Platform network. This means you can see flow logs for all traffic between Pods, including traffic between Pods on the same node. And you can create firewall rules that apply to all traffic between Pods. This policy checks your cluster's intra-node visibility feature and generates an alert if it's disabled. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Upgrade your cluster to use Intranode Visibility.\n\n1. Visit the Google Kubernetes Engine menu in GCP Console.\n2. Click the cluster's Edit button, which looks like a pencil.\n3. Select Enabled under Intranode visibility.\n4. Click Save to modify the cluster.." "```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-neptune-db-cluster' AND json.rule = Status equals ""available"" and (BackupRetentionPeriod does not exist or BackupRetentionPeriod less than 7)```","AWS Neptune DB clusters have backup retention period less than 7 days This policy identifies Amazon Neptune DB clusters lacking sufficient backup retention tenure. AWS Neptune DB is a fully managed graph database service. The backup retention period denotes the duration for storing automated backups of the Neptune DB clusters. Inadequate retention periods heighten the risk of data loss, and compliance issues, and hinder effective recovery in security breaches or system failures. It is recommended to ensure a backup retention period of at least 7 days or according to your business and compliance requirements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: To modify an Amazon Neptune DB cluster's backup retention period, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the region from the dropdown in the top right corner where the alert is generated\n3. In the Navigation Panel on the left, under 'Database', select 'Neptune'\n4. Under ‘Databases', select 'Clusters' and choose the reported cluster name\n5. Click 'Modify' from the top right corner \n6. Under the 'Additional settings' section, Click the 'Show more' dropdown \n7. Select the desired backup retention period in days from the 'Backup retention period' drop-down menu based on your business or compliance requirements \n8. Click 'Next' to review the summary of your changes \n9. Choose either 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your scheduling preference\n10. Click on 'Submit' to implement the changes." "```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = ""permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(3389,3389)""```","Alibaba Cloud Security group allow internet traffic to RDP port (3389) This policy identifies Security groups that allow inbound traffic on RDP port (3389) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 3389, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'status contains VALIDATION_TIMED_OUT or status contains FAILED'```,"AWS Certificate Manager (ACM) has invalid or failed certificate This policy identifies certificates in ACM which are either in Invalid or Failed state. If the ACM certificate is not validated within 72 hours, it becomes Invalid. An ACM certificate fails when, - the certificate is requested for invalid public domains - the certificate is requested for domains which are not allowed - missing contact information - typographical errors In such cases (Invalid or Failed certificate), you will have to request for a new certificate. It is strongly recommended to delete the certificates which are in failed or invalid state. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete Certificates: \n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. 
Under 'Actions' drop-down click on 'Delete'." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'resourceLabels does not exist or resourceLabels.[*] is empty'```,"GCP Kubernetes Engine Clusters without any label information This policy identifies all Kubernetes Engine Clusters which do not have labels. Having a cluster label helps you identify and categorize Kubernetes clusters. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. From the list of clusters, choose the reported cluster\n5. Click on 'SHOW INFO PANEL' button\n6. Click on 'Add Label'\n7. Specify customized data for Key and Value\n8. Click on Save." ```config from cloud.resource where api.name = 'azure-sql-db-list' AND json.rule = blobAuditPolicy.properties.state equals Disabled or blobAuditPolicy does not exist or blobAuditPolicy is empty as X; config from cloud.resource where api.name = 'azure-sql-server-list' AND json.rule = serverBlobAuditingPolicy.properties.state equals Disabled or serverBlobAuditingPolicy does not exist or serverBlobAuditingPolicy is empty as Y; filter '$.X.blobAuditPolicy.id contains $.Y.sqlServer.name'; show X;```,"Azure SQL database auditing is disabled This policy identifies SQL databases in which auditing is set to Off. Database events are tracked by the Auditing feature and the events are written to an audit log in your Audit log destinations. This process helps you to monitor database activity, and get insight into anomalies that could indicate business concerns or suspected security violations. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If server auditing is enabled, it always applies to the database. The database will be audited, regardless of the database auditing settings. It is recommended that you enable only server-level auditing and leave the database-level auditing disabled for all databases.\n\nTo enable auditing at server level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database and SQL server\n3. Select 'SQL servers', Click on the SQL server instance you wanted to modify\n4. Select 'Auditing' under 'Security' section, and set the status to 'On' and choose any Audit log destinations.\n5. Click on 'Save'\n\nIt is recommended to avoid enabling both server auditing and database blob auditing together, unless:\nIf you want to use a different storage account, retention period or Log Analytics Workspace for a specific database or want to use for audit event types or categories for a specific database that differ from the rest of the databases on the server.\nTo enable auditing at database level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database\n3. Select 'SQL databases', Click on the SQL database instance you wanted to modify\n4. Select 'Auditing' under 'Security' section, and set the status to 'On' and choose any Audit log destinations.\n5. Click on 'Save'." 
```config from cloud.resource where api.name = 'aws-ecs-cluster' and json.rule = configuration.executeCommandConfiguration.logConfiguration.cloudWatchEncryptionEnabled exists and configuration.executeCommandConfiguration.logConfiguration.cloudWatchEncryptionEnabled is false```,"ECS Cluster CloudWatch Logs Encryption Disabled This policy alerts you when an AWS ECS cluster is configured with CloudWatch logs encryption disabled, potentially exposing sensitive information. By enforcing encryption on CloudWatch logs, you can enhance the security of your data and maintain compliance with regulatory requirements. Ensure that you enable encryption for CloudWatch logs to protect your ECS cluster from unauthorized access and safeguard your critical information. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' and json.rule = terminationProtected exists and terminationProtected is false```,"EMR Cluster Termination Protection Enforcement This policy alerts you when an AWS Elastic MapReduce (EMR) cluster is configured without termination protection, which could potentially expose your cluster to accidental terminations or unauthorized changes. By enabling termination protection, you can safeguard your EMR clusters against unintended shutdowns and ensure the continuity of your data processing tasks, thereby enhancing the overall security and reliability of your cloud environment. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case ""access"" and roles[?any( role_id contains ""crn:v1:bluemix:public:iam::::role:Administrator"" )] exists and resources[?any( attributes[?any( name equal ignore case ""serviceName"" and value equal ignore case ""secrets-manager"" and operator is member of (""stringEquals"", ""stringMatch""))] exists and attributes[?any( name is member of (""region"",""resource"",""resourceGroupId"",""resourceType"",""serviceInstance""))] does not exist )] exists and subjects[?any( attributes[?any( name contains ""iam_id"" and value contains ""IBMid"")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```","IBM Cloud user with IAM policies provide administrative privileges for Secrets Manager service This policy identifies IBM Cloud users with administrator role permission for the Secrets Manager service. Users with admin access will be able to perform all platform tasks for Secrets Manager, including the creation, modification, and deletion of Secrets Manager service instances, as well as the assignment of access policies to other users. On Secret Manager, there is a chance that sensitive data might be exposed in the underlying service if a user with administrative rights is compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the reported user whose access you want to edit.\n4. Go to the 'Access' tab and, under the 'Access policies' section, click on the three dots at the right corner of the row for the policy that has Administrator permission on the 'Secrets Manager' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 445 or fromPort == 445) or (toPort > 445 and fromPort < 445)))] exists)```,"AWS Security Group allows all ingress traffic on CIFS port (445) This policy identifies AWS Security groups that allow all traffic on port 445 used by Common Internet File System (CIFS). Common Internet File System (CIFS) is a network file-sharing protocol that allows systems to share files over a network. Unrestricted CIFS access can expose your data to unauthorized users, leading to potential security risks. It is recommended to restrict CIFS port 445 access to only trusted networks to prevent unauthorized access and data breaches. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict the traffic on the security group to a known IP/CIDR range, perform the following actions:\n\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. On the left-hand panel, click on 'Security Groups' under the 'Security' section \n4. Select the 'Security Group' that is reported.\n5. Click on 'Edit Inbound Rules'.\n6. In the 'Edit inbound rules' window, remove or restrict the CIDR to a trusted IP range on the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 445 (or a range containing 445)\n7. Click 'Save rules' to save.\n\nNote: Before making any changes, please check the impact on your applications/services.." ```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-mysql-deployment-info' AND json.rule = deployment.platform_options.disk_encryption_key_crn is empty```,"IBM Cloud MySQL Database disk encryption is not enabled with customer managed keys This policy identifies IBM Cloud MySQL Databases with default disk encryption. Using customer managed keys gives you significantly more control, since the keys are managed by the customer. It is recommended to use customer managed keys for disk encryption, which provides customer control over the lifecycle of the keys. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. 
Mitigation of this issue can be done as follows: MySQL database disk encryption can be enabled with Customer managed keys only at the time of\ncreation.\n\nPlease use the below link to grant the MySQL service authorization to the KMS service if it is not authorized already:\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-key-protect&interface=ui#granting-service-auth\n\nPlease use the below link to provision a KMS instance with a key to use for encryption if one is not provisioned:\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-getting-started-tutorial#create-keys\n\nPlease follow the below steps to create a new MySQL deployment from a backup of the vulnerable MySQL deployment:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list'; from the list of resources select the MySQL database reported in the alert.\n3. In the left navigation pane, navigate to 'Backups and restore'; under the 'Available Backups' section click on 'Create backup' to get the latest backup of the database.\n4. Under the 'Available Backups' tab, click on the three dots on the right corner of the row containing the latest backup and click on 'Restore backup'.\n5. On the 'Create a new Database for MySQL from backup' page, select all the configuration as per your requirements.\n6. Under the 'Encryption' section, under 'KMS Instance' select a KMS instance and a key from the instance to use for encryption.\n7. Click on 'Restore backup'.\n\nPlease follow the below steps to delete the reported database deployment:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list'.\n3. Select your deployment. Next, using the stacked three-dot menu icon, choose Delete from the drop-down list." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = '(managedBy does not exist or managedBy is empty) and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of (""EncryptionAtRestWithCustomerKey"", ""EncryptionAtRestWithPlatformAndCustomerKeys"")'```","Azure disk is unattached and is encrypted with the default encryption key instead of ADE/CMK This policy identifies the disks which are unattached and are encrypted with default encryption instead of ADE/CMK. Azure encrypts disks by default with Server-Side Encryption (SSE) using platform-managed keys [SSE with PMK]. It is recommended to use either SSE with Azure Disk Encryption [SSE with PMK+ADE] or a Customer Managed Key [SSE with CMK], which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: If data stored in the disk is no longer useful, refer to Azure documentation to delete unattached data disks at:\nAPI: https://docs.microsoft.com/en-us/rest/api/compute/disks/delete\nCLI: https://docs.microsoft.com/en-us/cli/azure/disk?view=azure-cli-latest#az-disk-delete\n\nIf data stored in the disk is important, to enable SSE with Azure Disk Encryption [SSE with PMK+ADE] the disk needs to be attached to a VM.\nFollow https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-prerequisites based on the VM the data disk is assigned to. 
Once encryption is done, detach the disk from the VM using the Azure portal / CLI.\n\nTo enable SSE with Customer Managed Key [SSE with CMK],\nFollow https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#enable-on-an-existing-disk." ```config from cloud.resource where api.name = 'ibm-vpc-network-vpn-gateway' AND json.rule = status equal ignore case available as X; config from cloud.resource where api.name = 'ibm-vpc-network-vpn-ipsec-policy' AND json.rule = pfs equals disabled as Y; filter '$.X.connections[*].id contains $.Y.connections[*].id'; show X;```,"IBM Cloud VPN Connections for VPC has an IPsec policy that has Perfect Forward Secrecy (PFS) disabled This policy identifies IBM Cloud VPN Gateways with connections whose IPsec policy has Perfect Forward Secrecy disabled. Perfect Forward Secrecy is an encryption system that changes the keys used to encrypt and decrypt information frequently and automatically. This ensures that derived session keys are not compromised if one of the private keys is compromised in the future. It is recommended to enable Perfect Forward Secrecy. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'VPNs'\n3. Select 'Site-to-site gateways' and select the gateway reported in the alert.\n4. On the Gateway 'Overview' page, under 'VPN connections', note down the 'IPsec policy' name for each connection\n5. From the left navigation pane select 'VPNs'; under 'Site-to-site gateways' select 'IPsec policies'.\n6. Select the required region, and perform the below steps for all the IPsec policies noted down above.\n7. For each policy, click on the 'ellipsis' menu icon on the right and select 'Edit'.\n8. On the 'Edit IPsec policy' page, slide the 'Perfect Forward Secrecy' feature to enabled.\n9. Click on 'Save'." ```config from cloud.resource where api.name = 'azure-recovery-service-backup-protected-item' AND json.rule = properties.workloadType equal ignore case VM as X; config from cloud.resource where api.name = 'azure-vm-list' AND json.rule = powerState contains running as Y; filter 'not $.Y.id equal ignore case $.X.properties.virtualMachineId'; show Y;```,"Azure Virtual Machine not protected with Azure Backup This policy identifies Azure Virtual Machines that are not protected by Azure Backup. Without Azure Backup, VMs are at risk of data loss due to accidental deletion, corruption, or ransomware attacks. Unprotected VMs may also not comply with organizational data retention policies and regulatory requirements. As a best practice, it is recommended to configure Azure Backup for all VMs to ensure data protection and enable recovery options in case of unexpected failures or incidents. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Virtual machines'.\n2. Select 'Virtual machines'.\n3. Select the reported Virtual machine.\n4. Under 'Backup + disaster recovery' select 'Backup'.\n5. In the 'Backup' pane, select a 'Recovery Services vault'. If no vault exists, click 'Create new' to make a new vault.\n6. 
Choose the appropriate 'Policy sub type'. It's recommended to select 'Enhanced'.\n7. Next, select or create a 'Backup Policy' that defines when backups will run and how long they will be kept.\n8. From the 'Disks' dropdown, check all the disks you want to back up. Also, check the 'Include future disks' box to ensure new disks are automatically included.\n9. Click 'Enable Backup'." "```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and ($.X.filterPattern does not contain ""userIdentity.type!="" or $.X.filterPattern does not contain ""userIdentity.type !="") and ($.X.filterPattern contains ""userIdentity.type ="" or $.X.filterPattern contains ""userIdentity.type="") and ($.X.filterPattern contains ""userIdentity.invokedBy NOT EXISTS"") and ($.X.filterPattern contains ""eventType!="" or $.X.filterPattern contains ""eventType !="") and ($.X.filterPattern contains root or $.X.filterPattern contains Root) and ($.X.filterPattern contains AwsServiceEvent) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```","AWS Log metric filter and alarm does not exist for usage of root account This policy identifies the AWS regions that do not have a log metric filter and alarm for usage of a root account. Monitoring for root account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce its use. Failure to monitor root account logins may result in a lack of visibility into unauthorized use or attempts to access the root account, posing potential security risks to your AWS environment. It is recommended that a metric filter and alarm be established for detecting usage of the root account. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail that is multi-region enabled, logs all management events in your account, and is not set with the specific log metric filter and alarm. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi-region enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. On the 'Define Pattern' page, add the 'Filter pattern' value as\n{ $.userIdentity.type = ""Root"" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != ""AwsServiceEvent"" }\nand click on 'NEXT'.\n6. On the 'Assign Metric' page, choose the Filter Name and Metric Details parameters according to your requirement and click on 'Next'.\n7. On the 'Review and Create' page, review the details and click 'Create Metric Filter'.\n8. To create an alarm based on a log group-metric filter, refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html." 
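For reference, the metric filter and alarm described in the mitigation above can also be created programmatically. Below is a minimal boto3 sketch, assuming a hypothetical CloudTrail log group name and an existing SNS topic ARN (both placeholders, not values from this dataset); the filter pattern mirrors the one given in the steps, while the namespace and metric names are arbitrary choices.

```python
import boto3

# Hypothetical placeholders: replace with your CloudTrail log group and SNS topic.
LOG_GROUP = "CloudTrail/DefaultLogGroup"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Metric filter matching console/API activity performed directly by the root user.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="RootAccountUsage",
    filterPattern='{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS '
                  '&& $.eventType != "AwsServiceEvent" }',
    metricTransformations=[{
        "metricName": "RootAccountUsageCount",
        "metricNamespace": "CISBenchmark",
        "metricValue": "1",
    }],
)

# Alarm that notifies the SNS topic whenever the metric records any root activity.
cloudwatch.put_metric_alarm(
    AlarmName="RootAccountUsageAlarm",
    MetricName="RootAccountUsageCount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
    TreatMissingData="notBreaching",
)
```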
"```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-containers-artifacts-kubernetes-cluster' AND json.rule = lifecycleState equal ignore case ACTIVE and endpointConfig exists and (endpointConfig.nsgIds does not exist or endpointConfig.nsgIds equal ignore case ""null"" or endpointConfig.nsgIds is empty)```","OCI Kubernetes Engine Cluster endpoint is not configured with Network Security Groups This policy identifies Kubernetes Engine Clusters endpoint that are not configured with Network Security Groups. Network security groups give fine-grained control of resources and help in restricting network access to your cluster node pools. It is recommended to restrict access to the Cluster node pools by configuring network security groups. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Developer Services -> Kubernetes Clusters (OKE)\n3. Click on the reported Kubernetes Clusters\n4. Click on 'Edit'\n5. On 'Edit cluster' page, Select the restrictive Network Security Group by selecting 'Use network security groups to control traffic' option under 'Kubernetes API server endpoint' section.\nNOTE: Before you update cluster endpoint with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirement.\n6. Click on 'Save' button." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-block-storage-volume' as X; config from cloud.resource where api.name = 'oci-block-storage-volume-backup' as Y; filter 'not($.X.id equals $.Y.volumeId)'; show X;```,"OCI Block Storage Block Volume is not restorable This policy identifies the OCI Block Storage Volumes that are not restorable. It is recommended to have backups on each block volume, that the block volume can be restored during data loss events. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Block Volume Backups from the Resources pane\n5. Click on Create Block Volume Backup (To create the back up)." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Sql/servers/firewallRules/write"" as X; count(X) less than 1```","Azure Activity log alert for Create or update SQL server firewall rule does not exist This policy identifies the Azure accounts in which activity log alert for Create or update SQL server firewall rule does not exist. Creating an activity log alert for Create or update SQL server firewall rule gives insight into SQL server firewall rule access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. 
Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. On the 'Create an alert rule' page, choose the Scope as your Subscription and, under the CONDITION section, choose 'Create/Update server firewall rule (Microsoft.Sql/servers/firewallRules)'; other fields can be set based on your custom settings.\n6. Click on Create." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_disconnections')] does not exist or settings.databaseFlags[?(@.name=='log_disconnections')].value equals off)""```","GCP PostgreSQL instance database flag log_disconnections is disabled This policy identifies PostgreSQL type SQL instances for which the log_disconnections database flag is disabled. Enabling the log_disconnections setting will create log entries at the end of each session, which can be useful in troubleshooting issues and determining any unusual activity across a time period. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the PostgreSQL instance ID for which you want to enable the database flag from the list\n4. Click 'Edit'\nNOTE: If the instance is stopped, you need to START the instance first to edit the configurations, then click on EDIT.\n5. Go to the 'Flags' section under 'Configuration options'\n6. To set a flag that has not been set on the instance before, click 'Add item', choose the flag 'log_disconnections' from the drop-down menu and set the value as 'on'.\n7. If 'log_disconnections' is already set to 'off', from the drop-down menu set the value as 'on'\n8. Click on 'Save'." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = lifecycleState equal ignore case ACTIVE and capabilities.canUseConsolePassword is true and isMfaActivated is false```,"Copy of OCI MFA is disabled for IAM users This policy identifies Identity and Access Management (IAM) users for whom Multi Factor Authentication (MFA) is disabled. As a best practice, enable MFA to add an extra layer of protection for increased security of your OCI user’s identity and complete the sign-in process. This is applicable to oci cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from the Services menu\n3. Select Users from the Identity menu.\n4. Click on each non-compliant user.\n5. Click on Enable Multi-Factor Authentication.\n\nNote: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = 'status.recording is true and status.lastStatus equals SUCCESS and recordingGroup.allSupported is true' as X; count(X) less than 1```,"AWS Config Recording is disabled AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. 
AWS config uses configuration recorder to detect changes in your resource configurations and capture these changes as configuration items. It continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. This policy generates alerts when AWS Config recorder is not enabled. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Management Console\n2. Select the specific region from the top down, for which the alert is generated\n3. Navigate to service 'Config' from the 'Services' dropdown.\nIf AWS Config set up exists,\na. Go to Settings\nb. Click on 'Turn On' button under 'Recording is Off' section,\nc. provide required information for bucket and role with proper permission\nIf AWS Config set up doesn't exist\na. Click on 'Get Started'\nb. For Step 1, Tick the check box for 'Record all resources supported in this region' under section 'Resource types to record'\nc. Under section 'Amazon S3 bucket', select bucket with permission to Config services\nd. Under section 'AWS Config role', select a role with permission to Config services\ne. Click on 'Next'\nf. For Step 2, Select required rule and click on 'Next' otherwise click on 'Skip'\ng. For Step 3, Review the created 'Settings' and click on 'Confirm'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(445,445) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```","GCP Firewall rule allows all traffic on Microsoft-DS port (445) This policy identifies GCP Firewall rules which allow all inbound traffic on Microsoft-DS port (445). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the Microsoft-DS port (445) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = 'deleteAutoSnapshot is true'```,"Alibaba Cloud data disk is configured with delete automatic snapshots feature This policy identifies data disks that are configured with delete automatic snapshots feature. Disabling the delete automatic snapshots while releasing disk feature prevents the irreversible data loss from accidental or malicious operations. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. 
Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'\n4. Select the reported disk\n5. Select More and click on Modify Disk Property\n6. On Modify Disk Property popup window, Uncheck 'Delete Automatic Snapshots While Releasing Disk' checkbox\n7. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-code-build-project' AND json.rule = not(logsConfig.cloudWatchLogs.status equal ignore case enabled or logsConfig.s3Logs.status equal ignore case enabled)```,"AWS CodeBuild project not configured with logging configuration This policy identifies AWS CodeBuild project environments without a logging configuration. AWS CodeBuild is a fully managed service for building, testing, and deploying code. Logging is a crucial security feature that allows for future forensic work in the event of a security incident. Correlating abnormalities in CodeBuild projects with threat detections helps boost confidence in their accuracy. It is recommended to enable logging configuration on CodeBuild projects for monitoring and troubleshooting purposes. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console. Navigate to the CodeBuild service\n2. In the left navigation pane, select 'Build Projects' under 'Build'\n3. Go to your AWS CodeBuild project\n4. Select the 'Project details' tab, and under the 'Logs' section, select 'Edit'\n5. Under the 'Edit Logs' page, based on the requirement, select either 'CloudWatch logs' or 'S3 logs'\n6. For CloudWatch logging, provide a log group name\n7. For S3 logging, provide the bucket name\n8. Click on 'Update logs'.." ```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-app-engine-application' AND json.rule = servingStatus equals SERVING and (iap does not exist or iap.enabled does not exist or iap.enabled is false)```,"GCP App Engine Identity-Aware Proxy is disabled This policy identifies GCP App Engine applications for which Identity-Aware Proxy(IAP) is disabled. IAP is used to enforce access control policies for applications and resources. It works with signed headers or the App Engine standard environment Users API to secure your app. It is recommended to enable Identity-Aware Proxy for securing the App engine. Reference: https://cloud.google.com/iap/docs/concepts-overview This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: To enabled IAP for a GCP project follow the below steps provided,\n\nLink: https://cloud.google.com/iap/docs/app-engine-quickstart#enabling_iap." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sqs-get-queue-attributes' AND json.rule = 'attributes.KmsMasterKeyId exists and attributes.KmsMasterKeyId contains alias/aws/sqs'```,"AWS SQS queue encryption using default KMS key instead of CMK This policy identifies SQS queues which are encrypted with default KMS keys and not with Customer Master Keys(CMKs). It is a best practice to use customer managed Master Keys to encrypt your SQS queue messages. It gives you full control over the encrypted messages data. This is applicable to aws cloud and is considered a informational severity issue. 
Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to Simple Queue Service (SQS) dashboard\n4. Choose the reported Simple Queue Service (SQS)\n5. Click on 'Queue Actions' and Choose 'Configure Queue' from the dropdown \n6. On 'Configure' popup, Under 'Server-Side Encryption (SSE) Settings' section; Choose an 'AWS KMS Customer Master Key (CMK)' from the drop-down list or copy existing key ARN instead of (Default) alias/aws/sqs key.\n7. Click on 'Save Changes'." ```config from cloud.resource where api.name = 'aws-ec2-elastic-address' and resource.status = Deleted AND json.rule = domain exists```,"Moses Policy Test 3 Test 3 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = ""configurations.value[?(@.name=='connection_throttling')].properties.value equals OFF or configurations.value[?(@.name=='connection_throttling')].properties.value equals off""```","Azure PostgreSQL database server with connection throttling parameter is disabled This policy identifies PostgreSQL database servers for which server parameter is not set for connection throttling. Enabling connection_throttling helps the PostgreSQL Database to Set the verbosity of logged messages which in turn generates query and error logs with respect to concurrent connections, that could lead to a successful Denial of Service (DoS) attack by exhausting connection resources. A system can also fail or be degraded by an overload of legitimate users. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. From the list of parameters find 'connection_throttling' and set it to on\n6. Click on 'Save' button from top menu to save the change.." ```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or minimumPasswordLength < 14 or minimumPasswordLength does not exist'```,"AWS IAM password policy does not have a minimum of 14 characters Checks to ensure that IAM password policy requires minimum of 14 characters. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. In the 'Minimum password length' field, put 14 or more (As per preference).\n4. Click on 'Apply password policy'." 
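The password length requirement described in the mitigation above can also be applied with a single API call. A minimal boto3 sketch is shown below; the additional complexity flags are illustrative hardening choices, not part of this policy's requirement.

```python
import boto3

iam = boto3.client("iam")

# Enforce a minimum password length of 14 characters account-wide.
# The extra Require* flags are optional, illustrative hardening settings.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
)
```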
"```config from cloud.resource where api.name = 'azure-virtual-desktop-session-host' AND json.rule = session-hosts[*] is not empty and session-hosts[*].properties.resourceId exists as X; config from cloud.resource where api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case ""PowerState/running"" as Y; filter '$.X.session-hosts[*].properties.resourceId equal ignore case $.Y.id and ($.Y.identity does not exist or $.Y.identity.type equal ignore case None)'; show Y;```","Azure Virtual Desktop session host is not configured with managed identity This policy identifies Virtual Desktop session hosts that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in a code increases the threat surface in case of exploitation and also managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to have the managed identity to your Virtual Desktop session hosts. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Virtual machines dashboard\n3. Click on the reported Virtual machine\n4. Under Setting section, Click on 'Identity'\n5. Configure either 'System assigned' or 'User assigned' managed identity based on your requirement.\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 22 or fromPort == 22) or (toPort > 22 and fromPort < 22)))] exists)```,"AWS Security Group allows all traffic on SSH port (22) This policy identifies Security groups that allow all traffic on SSH port 22. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Group reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 22 (or range containing 22)." 
"```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Security/securitySolutions/delete"" as X; count(X) less than 1```","Azure Activity log alert for Delete security solution does not exist This policy identifies the Azure accounts in which activity log alert for Delete security solution does not exist. Creating an activity log alert for Delete security solution gives insight into changes to the active security solutions and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Security Solutions (Microsoft.Security/securitySolutions)' and Other fields you can set based on your custom settings.\n6. Click on Create." ```config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-cloudtrail-get-trail-status' as Y; filter '$.X.name equals $.Y.trail and $.Y.status.isLogging is false'; show X;```,"AWS CloudTrail logging is disabled This policy identifies the CloudTrails in which logging is disabled. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. It is a compliance and security best practice to turn on logging for CloudTrail across different regions to get a complete audit trail of activities across various services. NOTE: This policy will be triggered only when you have CloudTrail configured in your AWS account and logging is disabled. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudTrail dashboard\n3. Click on 'Trails' (Left panel)\n4. Click on reported CloudTrail\n5. Enable 'Logging' by hovering logging button to 'ON'\nOR\nIf CLoudTrail is not required you can delete by clicking on the delete icon below the logging hover button.." ```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-get-account-summary' AND json.rule='not AccountAccessKeysPresent equals 0'```,"AWS Access key enabled on root account This policy identifies root accounts for which access keys are enabled. Access keys are used to sign API requests to AWS. Root accounts have complete access to all your AWS services. If the access key for a root account is compromised, an unauthorized users will have complete access to your AWS account. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console as the root user.\n2. Click root account name and on the top right select 'Security Credentials' from the dropdown.\n3. For each key in 'Access Keys', click on ""X"" to delete the keys.." 
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case openshift and state equal ignore case normal and features.pullSecretApplied is false```,"IBM Cloud OpenShift cluster has Image pull secrets disabled This policy identifies IBM Cloud OpenShift Clusters with image pull secrets disabled. If Image pull secrets feature Is disabled, it stores registry credentials to connect to container registry. It is recommended to enable image pull secrets feature, which will store an image pull secret for pulling images rather than using credentials. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To enable image pull secrets feature on a OpenShift cluster, refer \nfollowing URLs:\nhttps://cloud.ibm.com/docs/openshift?topic=openshift-registry#imagePullSecret_migrate_api_key\nhttps://cloud.ibm.com/docs/openshift?topic=openshift-registry#update-pull-secret." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'state equals RUNNABLE and databaseVersion contains SQLSERVER and (settings.databaseFlags[*].name does not contain ""external scripts enabled"" or settings.databaseFlags[?any(name contains ""external scripts enabled"" and value contains on)] exists)'```","GCP SQL server instance database flag external scripts enabled is not set to off This policy identifies GCP SQL server instances for which database flag 'external scripts enabled' is not set to off. Feature 'external scripts enabled' enables the execution of scripts with certain remote language extensions. When Advanced Analytics Services is installed, setup can optionally set this property to true. As the External Scripts Enabled feature allows scripts external to SQL such as files located in an R library to be executed, which could adversely affect the security of the system. It is recommended to set external scripts enabled database flag for Cloud SQL SQL Server instance to off. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', click on 'ADD FLAG' in 'New database flag' section, choose the flag 'external scripts enabled' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Flags and parameters', choose the flag 'external scripts enabled' and set the value as 'off'\n6. Click on DONE\n7. Click on SAVE \n8. If 'Changes requires restart' pop-up appears, click on 'SAVE AND RESTART'." 
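The 'external scripts enabled' flag change above can also be made through the Cloud SQL Admin API. Below is a minimal sketch using the google-api-python-client library with hypothetical project and instance names; note that patching databaseFlags replaces the whole flag list, so include any flags you want to keep, and the change may require an instance restart.

```python
from googleapiclient import discovery

# Hypothetical placeholders; replace with the reported project and instance.
PROJECT = "my-project"
INSTANCE = "my-sqlserver-instance"

# Uses Application Default Credentials for authentication.
sqladmin = discovery.build("sqladmin", "v1beta4")

# Patch only the database flags; other settings are left untouched.
body = {
    "settings": {
        "databaseFlags": [
            {"name": "external scripts enabled", "value": "off"},
        ]
    }
}

operation = sqladmin.instances().patch(
    project=PROJECT, instance=INSTANCE, body=body
).execute()
print(operation.get("status"))
```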
"```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(23,23) or destinationPortRanges[*] contains _Port.inRange(23,23) ))] exists```","Azure Network Security Group allows all traffic on Telnet (TCP Port 23) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Telnet (TCP Port 23). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict MySQL solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal\n2. Select 'All services'\n3. Select 'Network security groups', under Networking\n4. Select the Network security group you need to modify\n5. Select 'Inbound security rules' under Settings\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals Containers and properties.pricingTier does not equal Standard)] exists```,"Azure Microsoft Defender for Cloud set to Off for Containers This policy identifies Azure Microsoft Defender for Cloud which has defender setting for Containers set to Off. Enabling Azure Defender provides advanced security capabilities like providing threat intelligence, anomaly detection, and behavior analytics in the Azure Microsoft Defender for Cloud. It is highly recommended to enable Azure Defender for Containers. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Containers' Select 'On' under Plan.\n8. Select 'Save'." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case ""Running"" AND kind contains ""functionapp"" AND kind does not contain ""workflowapp"" AND kind does not equal ""app"" AND (identity.type does not exist or identity.principalId is empty)```","Azure Function App doesn't have a Managed Service Identity This policy identifies Azure Function App which doesn't have a Managed Service Identity. 
Managed service identity in Function App makes the app more secure by eliminating secrets from the app, such as credentials in the connection strings. When registering with Azure Active Directory in the app service, the app will connect to other Azure services securely without the need of username and passwords. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'Identity'\n5. Configure either 'System-assigned' or 'User-assigned' managed identity based on your requirement.\n6. Click on 'Save'." "```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = (metadataOptions.httpEndpoint does not exist) or (metadataOptions.httpEndpoint equals ""enabled"" and metadataOptions.httpTokens equals ""optional"") as X; config from cloud.resource where api.name = 'aws-describe-auto-scaling-groups' as Y; filter ' $.X.launchConfigurationName equal ignore case $.Y.launchConfigurationName'; show X;```","AWS Auto Scaling group launch configuration not configured with Instance Metadata Service v2 (IMDSv2) This policy identifies the autoscaling group launch configuration where IMDSv2 is set to optional. A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. With IMDSv2, every request is now protected by session authentication. Version 2 of the IMDS adds new protections that weren't available in IMDSv1 to further safeguard your EC2 instances created by the autoscaling group. It is recommended to use only IMDSv2 for all your EC2 instances. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You cannot modify a launch configuration after you create it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration with IMDSv2 enabled.\n\nTo update the Auto Scaling group to use the new launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Copy launch configuration'. This sets up a new launch configuration with the same options as the original, but with 'Copy' added to the name.\n4. On the 'Create launch configuration' page, expand 'Advanced details' under 'Additional Configuration - optional'.\n5. Under the 'Advanced details', go to the 'Metadata version' section.\n6. Select 'V2 only (token required)' option.\n7. When you have finished, click on the 'Create launch configuration' button at the bottom of the page.\n8. On the navigation pane, under Auto Scaling, choose Auto Scaling Groups.\n9. Select the check box next to the Auto Scaling group.\n10. A split pane opens up at the bottom part of the page, showing information about the group that's selected.\n11. On the Details tab, click on the 'Edit' button adjacent to the 'Launch configuration' option.\n12. Under the 'Launch configuration' dropdown, select the newly created launch configuration.\n13. 
When you have finished changing your launch configuration, click on the 'Update' button at the bottom of the page.\n\nAfter you change the launch configuration for an Auto Scaling group, any new instances are launched with the new configuration options. Existing instances are not affected. To update existing instances,\n\n1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Refer 'Configure instance metadata options for existing instances' section from the following URL: \nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-existing-instances.html\n\nTo delete the reported Auto Scaling group launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Delete launch configuration'.\n4. Click on the 'Delete' button to delete the autoscaling group launch configuration.\n\nNOTE: Ensure adequate precautions before you enforce the use of IMDSv2, as applications or agents that use IMDSv1 for instance metadata access will break.." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.permissionGrantPoliciesAssigned[*] contains microsoft-user-default-legacy```,"gvCopy of Azure AD Users can consent to apps accessing company data on their behalf is enabled This policy identifies Azure Active Directory which have 'Users can consent to apps accessing company data on their behalf' configuration enabled. User profiles contain private information which could be shared with others without requiring any further consent from the user if this configuration is enabled. It is recommended not to allow users to use their identity outside of the cloud environment. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Active Directory'\n3. Select 'Users' under 'Manage'\n4. Go to 'User settings'\n5. Click on 'Manage how end users launch and view their applications' if not selected\n6. Under 'Enterprise applications' select 'No' for 'Users can consent to apps accessing company data on their behalf'\n7. Click on 'Save'." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = name does not start with ""gke-"" and status equals RUNNING and (networkInterfaces[*].accessConfigs exists or networkInterfaces.ipv6AccessConfigs exists)```","GCP VM instance with the external IP address This policy identifies GCP VM instances that are assigned a public IP. Using a public IP with a GCP VM exposes it directly to the internet, increasing the risk of unauthorized access and attacks. This makes the VM vulnerable to threats such as brute force attempts, DDoS attacks, and other malicious activities. To mitigate these risks, it is safer to use private IPs and secure access methods like VPNs or load balancers. It is recommended to avoid assigning public IPs to VM instances. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. 
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to 'Compute Engine' and then 'VM instances'\n3. Click on the reported VM instance\n4. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue\n5. Once the VM has been stopped, click on the 'EDIT' button\n6. Under 'Network interfaces', expand the network interface with the public external IP assigned\n7. Select 'IPv4 (single-stack)' under IP stack type\n8. Select 'None' under 'External IPv4 address'\n9. Click on 'Save'\n10. Click on 'START/RESUME' from the top menu." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = processing is false and vpcoptions.vpcid does not exist```,"AWS Elasticsearch domain publicly accessible This policy identifies Elasticsearch domains which are publicly accessible. Enabling VPC access for Elasticsearch domains provides flexibility and control over cluster access, with an extra layer of security compared to Elasticsearch domains that use public endpoints. It also keeps all traffic between your VPC and Elasticsearch domains within the AWS network instead of going over the public Internet. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: The VPC for an AWS Elasticsearch domain can be set only at the time of domain creation. To resolve this alert, create a new domain with a VPC, then migrate all required Elasticsearch domain data from the reported Elasticsearch domain to this newly created domain and delete the reported Elasticsearch domain.\n\nTo set up the new ES domain with a VPC, refer to the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html\nTo create the Elasticsearch domain within a VPC, in Network configuration choose VPC access instead of Public access.\n\nTo delete the reported ES domain, refer to the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-deleting.html." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-snapshots' AND json.rule = 'snapshot.state equals completed and createVolumePermissions[*].userId size != 0 and _AWSCloudAccount.isRedLockMonitored($.createVolumePermissions[*].userId) is false'```,"AWS EBS Snapshot with access for unmonitored cloud accounts This policy identifies EBS Snapshots with access for unmonitored cloud accounts, i.e. EBS Snapshots which have either read or write permission opened up for Cloud Accounts which are NOT part of the Cloud Accounts monitored by Prisma Cloud. These accounts with read / write privileges should be reviewed and confirmed to be valid accounts of your organisation (or authorised by your organisation) that are not active under Prisma Cloud monitoring. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console.\n2. In the console, select the specific region, for which the alert is generated, from the region drop-down on the top right corner.\n3. Access the EC2 service and navigate to 'Snapshots' under 'Elastic Block Store' in the left-hand menu.\n4. Select the identified 'EBS Snapshot' and select the 'Permissions' tab.\n5. 
Review and delete the AWS Accounts which should not have read access.." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-oss-bucket-info' AND json.rule = 'cannedACL equals PublicRead or cannedACL equals PublicReadWrite'```,"Alibaba Cloud OSS bucket accessible to public This policy identifies Object Storage Service (OSS) buckets which are publicly accessible. Alibaba Cloud OSS allows customers to store and retrieve any type of content from anywhere on the web. Often, customers have legitimate reasons to expose the OSS bucket to the public, for example, to host website content. However, these buckets often contain highly sensitive enterprise data which if left open to the public may result in sensitive data leaks. This is applicable to alibaba_cloud cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Object Storage Service\n3. In the left-side navigation pane, click on the reported bucket\n4. In the 'Basic Settings' tab, In the 'Access Control List (ACL)' Section, Click on 'Configure'\n5. For 'Bucket ACL' field, Choose 'Private' option\n6. Click on 'Save'." ```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-instance' AND json.rule = status equals Running and instanceChargeType equals PostPaid and deletionProtection is false```,"Alibaba Cloud ECS instance release protection is disabled This policy identifies ECS instances for which release protection is disabled. Enabling release protection for these ECS instances prevents irreversible data loss resulting from accidental or malicious operations. Note: This attribute applies to Pay-As-You-Go instances only. Release protection can only restrict the manual release operation and does not apply for release operation done by Alibaba Cloud. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. Select the reported ECS instance, select More -> Instance Settings -> Change Release Protection Setting -> Release Protection (Toggle to enable)\n5. Click on 'OK'." ```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action contains iam:CreatePolicyVersion or Action contains iam:SetDefaultPolicyVersion or Action contains iam:PassRole or Action contains iam:CreateAccessKey or Action contains iam:CreateLoginProfile or Action contains iam:UpdateLoginProfile or Action contains iam:AttachUserPolicy or Action contains iam:AttachGroupPolicy or Action contains iam:AttachRolePolicy or Action contains iam:PutUserPolicy or Action contains iam:PutGroupPolicy or Action contains iam:PutRolePolicy or Action contains iam:AddUserToGroup or Action contains iam:UpdateAssumeRolePolicy or Action contains iam:*))] exists```,"aws-test-policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." 
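The IAM query directly above looks for attached customer-managed policies that grant iam: actions commonly associated with privilege escalation. Below is a minimal, read-only boto3 sketch that enumerates the same condition outside of Prisma Cloud; the action list is copied from the query and the traversal logic is an assumption of how one might reproduce it.

```python
import json
from urllib.parse import unquote

import boto3

# iam: actions commonly associated with privilege escalation, mirroring the query.
RISKY_ACTIONS = {
    "iam:CreatePolicyVersion", "iam:SetDefaultPolicyVersion", "iam:PassRole",
    "iam:CreateAccessKey", "iam:CreateLoginProfile", "iam:UpdateLoginProfile",
    "iam:AttachUserPolicy", "iam:AttachGroupPolicy", "iam:AttachRolePolicy",
    "iam:PutUserPolicy", "iam:PutGroupPolicy", "iam:PutRolePolicy",
    "iam:AddUserToGroup", "iam:UpdateAssumeRolePolicy", "iam:*",
}

iam = boto3.client("iam")

# Walk every attached customer-managed policy and report risky Allow statements.
for page in iam.get_paginator("list_policies").paginate(Scope="Local", OnlyAttached=True):
    for policy in page["Policies"]:
        document = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )["PolicyVersion"]["Document"]
        if isinstance(document, str):
            # Fall back to decoding if the SDK returns the raw URL-encoded JSON.
            document = json.loads(unquote(document))
        statements = document.get("Statement", [])
        if isinstance(statements, dict):
            statements = [statements]
        for stmt in statements:
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            flagged = RISKY_ACTIONS.intersection(actions)
            if flagged:
                print(policy["Arn"], "allows", sorted(flagged))
```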
"```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-block-storage-volume' AND json.rule = kmsKeyId is member of (""null"")```","OCI Block Storage Block Volumes are not encrypted with a Customer Managed Key (CMK) This policy identifies the OCI Block Storage Volumes that are not encrypted with a Customer Managed Key (CMK). It is recommended that Block Storage Volumes should be encrypted with a Customer Managed Key (CMK), using Customer Managed Key (CMK), provides an additional level of security on your data by allowing you to manage your own encryption key lifecycle management for the Block Storage Volume. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Assign next to Encryption Key: Oracle managed key.\n5. Select a Vault from the appropriate compartment\n6. Select a Master Encryption Key\n7. Click Assign." "```config from cloud.resource where api.name = 'gcloud-essential-contacts-organization-contact' AND json.rule = notificationCategorySubscriptions[] contains ""ALL"" or (notificationCategorySubscriptions[] contains ""LEGAL"" and notificationCategorySubscriptions[] contains ""SECURITY"" and notificationCategorySubscriptions[] contains ""SUSPENSION"" and notificationCategorySubscriptions[] contains ""TECHNICAL"" and notificationCategorySubscriptions[] contains ""TECHNICAL_INCIDENTS"") as X; count(X) less than 1```","GCP Organization not configured with essential contacts This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." "```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = ""location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/securityRules/delete"" as X; count(X) less than 1```","chao test change saved search This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createidpgroupmapping and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deleteidpgroupmapping and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateidpgroupmapping) and actions.actions[*].topicId exists' as X; count(X) less than 1```,"OCI Event Rule and Notification does not exist for Identity Provider Group (IdP) group mapping changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Identity Provider Group Mappings (IdP) changes. 
Monitoring and alerting on changes to IdP group mapping will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity Provider Group Mappings (IdP). NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event Rule at the root compartment level. 2. This policy will not trigger an alert as long as at least one such Event Rule and Notification exists, whether OCI has a single compartment or multiple compartments. This is applicable to oci cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name drop-down and selecting Idp Group Mapping – Create, Idp Group Mapping – Delete and Idp Group Mapping – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule." ```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-nsg' AND json.rule = securityRules[?any( direction equals INGRESS and (isStateless does not exist or isStateless is false) )] exists```,"OCI Network Security Groups (NSG) has stateful security rules This policy identifies the OCI Network Security Groups (NSG) security rules that have stateful ingress rules configured. It is recommended that Network Security Groups (NSG) security rules be configured with stateless ingress rules to slow the impact of a denial-of-service (DoS) attack. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Select the security rule from the Security Rules pane where Stateless is set to No and Direction is set to Ingress\n5. Click on Edit\n6. Select the checkbox STATELESS\n7. Click on Save Changes." ```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```,"build information This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A." ```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```,"dnd_test_create_hyperion_policy_attack_path_policy_as_child_policies_ss_finding_1 Description-49e9b494-9bab-4e02-ad26-c6ac7731d570 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A."
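The stateful-ingress check in the OCI NSG policy above can be reproduced with the OCI Python SDK. The following is a minimal sketch, assuming the `oci` package, a configured ~/.oci/config profile, and a placeholder NSG OCID; it is illustrative rather than the policy's own implementation.

```python
# Sketch: list the security rules of one Network Security Group and report stateful
# ingress rules (direction INGRESS with is_stateless missing or False), mirroring
# the json.rule in the policy above.
import oci

config = oci.config.from_file()  # default profile from ~/.oci/config
network = oci.core.VirtualNetworkClient(config)

NSG_OCID = "ocid1.networksecuritygroup.oc1..example"  # placeholder OCID

rules = network.list_network_security_group_security_rules(NSG_OCID).data
for rule in rules:
    # is_stateless may be None; treat that as stateful, as the policy does
    if rule.direction == "INGRESS" and not rule.is_stateless:
        print(f"Stateful ingress rule found: {rule.id} (protocol {rule.protocol})")
```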
```config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' AND json.rule = default is true and shared is false and state equal ignore case available as X; config from cloud.resource where api.name = 'aws-ec2-describe-network-interfaces' AND json.rule = status equal ignore case in-use as Y; filter '$.X.vpcId equals $.Y.vpcId'; show X;```,"AWS Default VPC is being used This policy identifies AWS Default VPCs that are being used. AWS creates a default VPC automatically upon the creation of your AWS account with a default security group and network access control list (NACL). Using the AWS default VPC can lead to limited customization and security concerns due to shared resources and potential misconfigurations, hindering scalability and optimal resource management. As a best practice, using a custom VPC with specific security and network configuration provides greater flexibility and control over your architecture. This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is recommended to remove association with the default VPC and create a new custom VPC configuration based on your security and networking requirements, and associate the resource back to a newly created custom VPC.\n\nTo create a new VPC, follow below URL:\nhttps://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html\n\nTo remove the default VPC, follow below URL:\nhttps://docs.aws.amazon.com/vpc/latest/userguide/delete-vpc.html\n\nNOTE: Before any modification, identify and analyze the potential impact of the change on your environment.." "```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = ""databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_min_duration_statement')] does not exist or settings.databaseFlags[?(@.name=='log_min_duration_statement')].value does not equal -1)""```","GCP PostgreSQL instance database flag log_min_duration_statement is not set to -1 This policy identifies PostgreSQL database instances in which database flag log_min_duration_statement is not set to -1. The log_min_duration_statement flag defines the minimum amount of execution time of a statement in milliseconds where the total duration of the statement is logged. Logged SQL statements may include sensitive information that should not be recorded in logs, so it is recommended to set the log_min_duration_statement flag value to -1 so that logging of execution statements is disabled. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, you need to START the instance first to edit the configurations, then click on EDIT.\n5. If the flag has not been set on the instance, \nunder 'Configuration options', click on 'Add item' in the 'Flags' section, choose the flag 'log_min_duration_statement' from the drop-down menu and set the value as '-1'\nOR\nIf the flag has been set to a value other than -1, under 'Configuration options', in the 'Flags' section, choose the flag 'log_min_duration_statement' and set the value as '-1'\n6. Click Save."
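The default-VPC-in-use join in the AWS policy above can be reproduced with boto3. The following is a minimal sketch, assuming default AWS credentials and region; it does not cover the query's `shared is false` condition and is illustrative only.

```python
# Sketch: find default VPCs that still have in-use network interfaces attached,
# mirroring the X/Y join on vpcId in the query above.
import boto3

ec2 = boto3.client("ec2")

default_vpcs = ec2.describe_vpcs(
    Filters=[{"Name": "isDefault", "Values": ["true"]}]
)["Vpcs"]

for vpc in default_vpcs:
    if vpc.get("State") != "available":
        continue
    enis = ec2.describe_network_interfaces(
        Filters=[
            {"Name": "vpc-id", "Values": [vpc["VpcId"]]},
            {"Name": "status", "Values": ["in-use"]},
        ]
    )["NetworkInterfaces"]
    if enis:
        print(f"Default VPC {vpc['VpcId']} is in use ({len(enis)} in-use network interfaces)")
```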
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dns-managed-zone' AND json.rule = 'dnssecConfig.defaultKeySpecs[*].keyType contains zoneSigning and dnssecConfig.defaultKeySpecs[*].algorithm contains rsasha1'```,"GCP Cloud DNS zones using RSASHA1 algorithm for DNSSEC zone-signing This policy identifies the GCP Cloud DNS zones which are using the RSASHA1 algorithm for DNSSEC zone-signing. DNSSEC is a feature of the Domain Name System that authenticates responses to domain name lookups and also prevents attackers from manipulating or poisoning the responses to DNS requests. The algorithm used for zone-signing keys should therefore be a recommended one and should not be weak. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Currently, DNSSEC zone-signing can be updated using the command line interface only.\n1. If you need to change the settings for a managed zone where it has been enabled, you have to turn DNSSEC off and then re-enable it with different settings. To turn off DNSSEC, run the following command (ZONE_NAME is the name of the reported managed zone):\ngcloud dns managed-zones update ZONE_NAME --dnssec-state off\n2. To update zone-signing for a reported managed DNS Zone, run the following command, replacing the placeholders with the desired algorithm and key-length values:\ngcloud dns managed-zones update ZONE_NAME --dnssec-state on --ksk-algorithm KSK_ALGORITHM --ksk-key-length KSK_KEY_LENGTH --zsk-algorithm ZSK_ALGORITHM --zsk-key-length ZSK_KEY_LENGTH --denial-of-existence DENIAL_OF_EXISTENCE." ```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is true) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration exists) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode contains CSE) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode does not contain Custom)' ; show X;```,"AWS EMR cluster is not configured with CSE CMK for data at rest encryption (Amazon S3 with EMRFS) This policy identifies EMR clusters which are not configured with Client Side Encryption with Customer Master Keys (CSE CMK) for data at rest encryption of Amazon S3 with EMRFS. As a best practice, use Customer Master Keys (CMK) to encrypt the data in your EMR cluster and ensure full control over your data. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'EMR' dashboard from the 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In the 'Name' box, provide a name for the new EMR security configuration.\n7. For encryption at rest, click the checkbox for 'Enable at-rest encryption for EMRFS data in Amazon S3'.\n8. From the dropdown 'Default encryption mode' select 'CSE-Custom'. Follow below link for configuration steps.\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n9. Click on the 'Create' button\n10. On the left menu of the EMR dashboard, click 'Clusters'\n11. 
Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu\n12. In the Cloning popup, choose 'Yes' and click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'\n14. From the 'Security configuration' drop-down, select the name of the security configuration created in steps 4 to 8, then click 'Create Cluster'\n15. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of the EMR dashboard, click 'Clusters'; from the list of clusters, select the source cluster that was alerted\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.."
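The EMR remediation above can also be scripted with boto3. The following is a minimal sketch, assuming default AWS credentials and region; it creates a security configuration that uses client-side encryption with a customer-managed KMS key (CSE-KMS) rather than the 'CSE-Custom' mode named in the console steps, and the configuration name and KMS key ARN are placeholders. Attach the resulting configuration when cloning or creating the cluster, as described in the steps above.

```python
# Sketch: create an EMR security configuration enabling at-rest encryption for
# EMRFS data in S3 using client-side encryption with a customer-managed KMS key.
import json
import boto3

emr = boto3.client("emr")

security_configuration = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": False,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                "EncryptionMode": "CSE-KMS",
                # Placeholder ARN: replace with your customer-managed KMS key
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
            }
        },
    }
}

emr.create_security_configuration(
    Name="emrfs-cse-cmk-example",  # placeholder name
    SecurityConfiguration=json.dumps(security_configuration),
)
```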