```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = "(((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and publicAccessBlockConfiguration.ignorePublicAcls is false) or (policyStatus.isPublic is true and publicAccessBlockConfiguration.restrictPublicBuckets is false)) and websiteConfiguration does not exist) and ((policy.Statement[*].Condition.Bool.aws:SecureTransport does not exist) or ((policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action contains s3: or policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action[*] contains s3:) and (policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE)))"```
AWS S3 bucket not configured with secure data transport policy
This policy identifies S3 buckets which are not configured with a secure data transport policy. AWS S3 buckets should enforce encryption of data over the network using Secure Sockets Layer (SSL). It is recommended to add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:*) from anybody who browses (Principal: *) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Sign into the AWS console
2. Navigate to Amazon S3 Dashboard
3. Click on 'Buckets' (Left Panel)
4. Choose the reported S3 bucket
5. On the 'Permissions' tab, click on 'Bucket Policy'
6. Add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:*) from anybody who browses (Principal: *) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). Below is a sample policy:
{
  "Sid": "ForceSSLOnlyAccess",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::bucket_name/*",
  "Condition": {
    "Bool": {
      "aws:SecureTransport": "false"
    }
  }
}
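For programmatic remediation, the same deny statement can be attached with boto3. This is a minimal sketch, assuming a hypothetical bucket name my-bucket and that no bucket policy exists yet (an existing policy should be fetched and merged instead):
```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical bucket name

# Deny every S3 action for any principal when the request is not made over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ForceSSLOnlyAccess",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```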
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any((Condition.ForAnyValue:IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0 or Condition.ForAnyValue:IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action contains *)] exists```
AWS IAM policy is overly permissive to all traffic via condition clause
This policy identifies IAM policies that are overly permissive to all traffic via a condition clause. If an IAM policy statement has a condition containing 0.0.0.0/0 or ::/0, it allows all traffic to the resources attached to that IAM policy. It is highly recommended to have least-privileged IAM policies to protect against data leakage and unauthorized access. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows:
1. Log in to the AWS Console
2. Navigate to the IAM dashboard
3. Click on 'Policies' in the left hand panel
4. Search for the policy for which the alert is generated and click on it
5. Under the Permissions tab, click on Edit policy
6. Under the Visual editor, click to expand and perform the following:
a. Click to expand 'Request conditions'
b. Under 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add a condition with restrictive IP ranges.
7. Click on Review policy and Save changes.
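The fix can also be applied via the API by publishing a new default policy version with a restricted source-IP condition. A minimal sketch, assuming a hypothetical policy ARN and an illustrative statement and CIDR (note that an IAM policy can hold at most five versions, so an old version may need to be deleted first):
```python
import json

import boto3

iam = boto3.client("iam")
policy_arn = "arn:aws:iam::123456789012:policy/example-policy"  # hypothetical

# Illustrative statement: scoped to a specific corporate CIDR
# instead of 0.0.0.0/0.
document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

# Publishing with SetAsDefault=True makes the restricted version active.
iam.create_policy_version(
    PolicyArn=policy_arn,
    PolicyDocument=json.dumps(document),
    SetAsDefault=True,
)
```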
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dns-managed-zone' AND json.rule = '(dnssecConfig.state does not exist or dnssecConfig.state equals off) and visibility equals public'```
GCP Cloud DNS has DNSSEC disabled
This policy identifies GCP Cloud DNS managed zones which have DNSSEC disabled. Domain Name System Security Extensions (DNSSEC) adds security to the Domain Name System (DNS) protocol by enabling DNS responses to be validated. Attackers can hijack the process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records. As a result, it prevents attackers from issuing fake DNS responses that may misdirect browsers to fake websites. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Log in to GCP portal
2. Go to Network services
3. Choose Cloud DNS
4. Click on the reported Cloud DNS / Zone name
5. Under the 'DNSSEC' column, choose 'On' from the drop-down
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (logPublishingOptions does not exist or logPublishingOptions.INDEX_SLOW_LOGS.enabled is false or logPublishingOptions.INDEX_SLOW_LOGS.cloudWatchLogsLogGroupArn is empty)'```
AWS Elasticsearch domain has Index slow logs set to disabled
This policy identifies Elasticsearch domains for which Index slow logs is disabled in your AWS account. Enabling support for publishing index slow logs to AWS CloudWatch Logs gives you full insight into the performance of indexing operations performed on your Elasticsearch clusters. This will help you identify performance issues caused by specific queries or by changes in cluster usage, so that you can optimize your index configuration to address the problem. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Sign into the AWS console
2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated
3. Navigate to the Elasticsearch Service Dashboard
4. Choose the reported Elasticsearch domain
5. Select the 'Logs' tab
6. In the 'Set up Index slow logs' section,
 a. Click on 'Setup'
 b. In the 'Select CloudWatch Logs log group' setting, create/use an existing CloudWatch Logs log group as per your requirement
 c. In 'Specify CloudWatch access policy', create a new/select an existing policy as per your requirement
 d. Click on 'Enable'

The Index slow logs setting 'Status' should now change to 'Enabled'.
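The same change can be made through the API. A minimal boto3 sketch, assuming a hypothetical domain name and an already-created log group whose resource policy permits the Elasticsearch service to write to it:
```python
import boto3

es = boto3.client("es")

# Publish index slow logs to an existing CloudWatch Logs log group.
# Domain name and log group ARN below are placeholders.
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    LogPublishingOptions={
        "INDEX_SLOW_LOGS": {
            "CloudWatchLogsLogGroupArn": (
                "arn:aws:logs:us-east-1:123456789012:log-group:es-slow-logs"
            ),
            "Enabled": True,
        }
    },
)
```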
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals username_password and (expiration_date does not exist or (_DateTime.ageInDays(expiration_date) > -1))'```
IBM Cloud Secrets Manager has expired user credentials
This policy identifies IBM Cloud Secrets Manager user credentials which are expired. User credentials should be rotated to ensure that data cannot be accessed with an old secret which might have been lost, cracked, or stolen. It is recommended that all user credentials are set with an expiration date and that expired secrets are regularly rotated. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
If the IBM Cloud Secrets Manager user credentials secret is expired, the secret needs to be deleted.
Please use the below URL as reference:
https://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-delete-secrets&interface=ui#delete-secret-ui

If the IBM Cloud Secrets Manager user credentials secret is about to expire, the secret has to be rotated.
Please use the below URL as reference:
https://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-manual-rotation&interface=ui#manual-rotate-user-credentials-ui

Please make sure to set an expiration date for each secret. Please follow the below steps to set an expiration date:
1. Log in to the IBM Cloud Console
2. Click on the Menu Icon and navigate to 'Resource list'; from the list of resources, select the secret manager instance in which the reported secret resides, under the security section
3. Select the secret
4. Under the 'Expiration date' section, provide the expiration date as required
5. Click on 'Update'
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and firewallRules[?any(startIpAddress equals "0.0.0.0" and endIpAddress equals "255.255.255.255")] exists```
Azure SQL Servers Firewall rule allows access to all IPv4 addresses
This policy identifies Azure SQL Servers which have a firewall rule that allows access to all IPv4 addresses. Having a firewall rule with the start IP being 0.0.0.0 and the end IP being 255.255.255.255 would allow access to the SQL server from any host on the internet. It is highly recommended not to use this type of firewall rule on any SQL server. This is applicable to azure cloud and is considered a critical severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Log in to Azure Portal
2. Navigate to the 'SQL servers' dashboard
3. Click on the reported SQL server
4. Click on 'Networking' under Security
5. In the 'Public access' tab, under Firewall rules, delete the rule which has 'Start IP' as 0.0.0.0 and 'End IP' as 255.255.255.255
6. Click on 'Save'
```config from cloud.resource where api.name = 'gcp-compute-disk-list' AND json.rule = status equals READY and name does not start with "gke-" and diskEncryptionKey.sha256 does not exist```
GCP VM disks not encrypted with Customer-Supplied Encryption Keys (CSEK)
This policy identifies VM disks which are not encrypted with Customer-Supplied Encryption Keys (CSEK). If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. It is recommended to use VM disks encrypted with CSEK for business-critical VM instances. Limitation: This policy might give false negatives in case VM disks are created with the name prefix 'gke-'. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows:
Currently, the encryption of an existing disk cannot be updated. So to fix this alert, create a new VM disk with Encryption set to Customer supplied, migrate all required data from the reported VM disk to the newly created disk, and delete the reported VM disk.

1. Login to GCP Portal
2. Go to Compute Engine
3. Go to Disks
4. Click on Create a disk
5. Specify other disk parameters as you desire
6. Set Encryption to Customer-supplied key
7. Provide the Key in the box
8. Select Wrapped key
9. Click on Create
```config from cloud.resource where api.name = 'oci-database-autonomous-database' AND json.rule = lifecycleState equal ignore case AVAILABLE and dataSafeStatus does not equal ignore case REGISTERED```
OCI Autonomous Database not registered in Data Safe
This policy identifies Oracle Autonomous Databases that are not registered in Oracle Data Safe. Oracle Data Safe is a fully-integrated cloud service that focuses on the security of your data, providing comprehensive features for protecting sensitive and regulated information in Oracle databases. Through the Security Center, you can access functionalities such as user and security assessments, data discovery, data masking, activity auditing, and alerts. As best practice, it is recommended to register the Autonomous Database in Data Safe. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To register the OCI Autonomous Database with Data Safe, refer to the following documentation:
https://docs.oracle.com/en/cloud/paas/data-safe/admds/register-autonomous-database.html#GUID-19A85842-A81C-4F40-A1EE-13C40EA845F0
or
https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.43.2/oci_cli_docs/cmdref/db/autonomous-database/data-safe/register.html
```config from cloud.resource where api.name = 'aws-iam-list-groups' as X; config from cloud.resource where api.name = 'aws-iam-list-users' as Y; filter ' not ($.Y.groupList[*] intersects $.X.groupName)'; show X;```
AWS IAM group not in use
This policy identifies AWS IAM groups that are not actively in use. An AWS IAM group is a collection of IAM users managed together, allowing for unified permission assignment. These groups, if not assigned any users, pose a potential security risk if left unmanaged and can inadvertently grant unauthorized access to AWS services and resources. It is recommended to review and remove any unused IAM groups to prevent attaching unauthorized IAM users. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To delete an IAM user group (console)

1. Sign in to the AWS Management Console
2. Navigate to the 'Services' menu and, within the 'Security, Identity, & Compliance' category, choose the 'IAM' service to open the IAM console
3. In the IAM console's navigation pane, select 'User groups' located under the 'Access management' section
4. In the list of user groups, select the check box next to the name of the reported user group to delete. You can use the search box to filter the list of user group names.
5. Choose 'Delete' to delete the group
6. In the confirmation box, if you want to delete the user group, type 'delete' and choose 'Delete'
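A minimal boto3 sketch of the same cleanup, assuming a hypothetical group name; it deletes the group only when no users are attached (inline policies would also have to be removed before IAM accepts the delete):
```python
import boto3

iam = boto3.client("iam")
group_name = "unused-group"  # hypothetical group name

# Only delete the group if it truly has no members.
users = iam.get_group(GroupName=group_name)["Users"]
if not users:
    # Detach managed policies first; IAM refuses to delete otherwise.
    for policy in iam.list_attached_group_policies(GroupName=group_name)["AttachedPolicies"]:
        iam.detach_group_policy(GroupName=group_name, PolicyArn=policy["PolicyArn"])
    iam.delete_group(GroupName=group_name)
```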
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = '($.acl[*].email exists and $.acl[*].email contains logging) and ($.acl[*].entity contains allUsers or $.acl[*].entity contains allAuthenticatedUsers)'```
GCP Storage Buckets with publicly accessible GCP logs
Checks to ensure that Stackdriver logs on Storage Buckets are not publicly accessible. Giving public access to Stackdriver logs will enable anyone with a web connection to retrieve sensitive information that is critical to business. Stackdriver Logging enables you to store, search, investigate, monitor, and alert on log information/events from Google Cloud Platform. The permission needs to be set only for authorized users. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To set an ACL, please refer to the URL given below. Make sure that no ACL is set to allow 'allUsers' or 'allAuthenticatedUsers' for the reported bucket.
https://cloud.google.com/storage/docs/access-control/create-manage-lists#set-an-acl
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.systemConfigurationsMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with "ASC Default"))'```
Azure Microsoft Defender for Cloud security configurations monitoring is set to disabled
This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have security configurations monitoring set to disabled. Security configurations monitoring enables the daily analysis of operating system configurations. The rules for hardening the operating system, such as firewall rules and password and audit policies, are reviewed. Recommendations are made for setting the right level of security controls. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Log in to the Azure portal
2. Go to 'Microsoft Defender for Cloud'
3. Select 'Environment Settings'
4. Choose the reported subscription
5. Click on 'Security policy' under the 'Policy settings' section
6. Click on 'SecurityCenterBuiltIn'
7. Select the 'Parameters' tab
8. Set 'Vulnerabilities in security configuration on your machines should be remediated' to 'AuditIfNotExists'
9. If no other changes are required, click on 'Review + save'
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = clientToken is not empty AND monitoring.state contains "running"```
Venu Test This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-fsx-file-system' AND json.rule = FileSystemType equals "OPENZFS" and Lifecycle equals "AVAILABLE" and (OpenZFSConfiguration.CopyTagsToBackups is false or OpenZFSConfiguration.CopyTagsToVolumes is false )```
AWS FSx for OpenZFS file systems not configured to copy tags to backups or volumes
This policy identifies AWS FSx for OpenZFS file systems that are not configured to copy tags to backups or volumes. AWS FSx for OpenZFS is a managed service for deploying and scaling OpenZFS file systems on AWS. Tags make resource identification and management easier, ensuring consistent security policies across file systems. Without copying tags to backups and volumes in AWS FSx for OpenZFS, enforcing consistent access control and tracking sensitive data in these resources becomes challenging. It is recommended to configure an FSx for OpenZFS file system to copy tags to backups and volumes. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To configure an AWS FSx for OpenZFS file system to copy tags to backups and volumes, perform the following actions:

1. Sign in to your AWS account and open the Amazon FSx console.
2. In the left navigation pane, choose 'File systems', and then choose the FSx for OpenZFS file system that is reported.
3. For 'Actions', choose 'Update tags preferences'. The Update tags preferences dialog box displays.
4. For 'Copy tags to backups', select 'Enabled' to copy tags from the file system to any backup that's taken.
5. For 'Copy tags to volumes', select 'Enabled' to copy tags from the file system to any volume that you create.
6. Choose Update to update the file system with your changes.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(53,53) or destinationPortRanges[*] contains _Port.inRange(53,53) ))] exists```
Azure Network Security Group allows all traffic on DNS (UDP Port 53)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on DNS UDP port 53. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict DNS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.

1. Log in to the Azure Portal.
2. Select 'All services'.
3. Select 'Network security groups', under NETWORKING.
4. Select the Network security group you need to modify.
5. Select 'Inbound security rules' under Settings.
6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.
7. 'Save' your changes.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-iam-identity-account-setting' AND json.rule = mfa equal ignore case "NONE"```
IBM Cloud Multi-Factor Authentication (MFA) not enabled at the account level
This policy identifies IBM Cloud accounts where Multi-Factor Authentication (MFA) is not enabled at the account level. MFA adds an extra layer of protection on top of your user name and password and helps protect accounts from stolen, phished, or weak password exploits. Enabling IBM MFA at the account level is the recommended approach to protect users. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
Follow the below mentioned URL to enable IBM MFA:

https://cloud.ibm.com/docs/account?topic=account-enablemfa#enabling
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-oracledatabase-bmvm-dbsystem' AND json.rule = 'lifecycleState equals AVAILABLE and nsgIds contains null'```
OCI Database system is not configured with Network Security Groups
This policy identifies Oracle Cloud Infrastructure (OCI) Database Systems that are not configured with Network Security Groups (NSGs). Network Security Groups provide granular security controls at the instance level, allowing for more precise management of inbound and outbound traffic to database systems. It is recommended to configure database systems with NSGs to enhance their security, thereby mitigating the risk of unauthorized access and potential data breaches. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To manage Network Security Groups for a DB System, follow the below URL:
https://docs.oracle.com/en-us/iaas/base-database/doc/manage-network-security-groups-db-system.html

NOTE: Before you update DB Systems with a Network Security Group, make sure you have a restrictive Network Security Group already created, with only specific traffic ports allowed based on requirements.
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule='loggingConfiguration.targetBucket equals null or loggingConfiguration.targetPrefix equals null'```
Bobby Copy of AWS Access logging not enabled on S3 buckets
Checks for S3 buckets without access logging turned on. Access logging allows customers to view a complete audit trail on sensitive workloads such as S3 buckets. It is recommended that access logging is turned on for all S3 buckets to meet audit and compliance requirements. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
1. Login to the AWS Console and navigate to the 'S3' service.
2. Click on the S3 bucket that was reported.
3. Click on the 'Properties' tab.
4. Under the 'Server access logging' section, select the 'Enable logging' option.
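Equivalently via the API, a minimal boto3 sketch, assuming hypothetical source and target bucket names (the target bucket must already grant the S3 log delivery service permission to write to it):
```python
import boto3

s3 = boto3.client("s3")

# Turn on server access logging for the reported bucket.
# Bucket names and prefix below are placeholders.
s3.put_bucket_logging(
    Bucket="my-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "logs/my-source-bucket/",
        }
    },
)
```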
```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-compute-instances-list' AND json.rule = (networkInterfaces[*].accessConfigs[*].type exists and networkInterfaces[*].accessConfigs[*].type contains "ONE_TO_ONE_NAT") and (labels.goog-composer-environment does not exist and tags.items[*] does not contain "dataflow") and (metadata.items[*].key does not equal "nat" and metadata.items[*].value does not equal "TRUE") and (name does not contain "paloALTO")```
CNA customer FASDFDSAF This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = state_description equal ignore case active and secret_type is member of (private_cert, public_cert) and rotation.auto_rotate is false```
IBM Cloud Secrets Manager certificate not configured with automatic rotation
This policy identifies IBM Cloud Secrets Manager certificates that are not configured with automatic rotation. IBM Cloud Secrets Manager allows you to manage various types of certificates, including those from imported third-party certificate authorities, public certificates, and private certificates, providing a centralised platform for secure certificate storage and management. Securely storing and timely rotating certificates before expiration is crucial for maintaining a high security posture and avoiding any service disruptions. It is recommended to set IBM Cloud Secrets Manager certificates with auto-rotation. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To set a rotation policy for a certificate, follow the below steps:

1. Log in to the IBM Cloud Console
2. Click on the menu icon and navigate to 'Resource list'; from the list of resources, select the secret manager instance in which the reported secret resides, under the security section
3. Select the secret
4. Under the 'Rotation' tab, enable 'Automatic secret rotation'
5. Set 'Rotation Interval' according to the requirements
6. Click on 'Update'

Note: Imported certificates cannot be set with an automatic rotation policy; they have to be re-imported before expiration.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled and (properties.ipAllowlist does not exist or properties.ipAllowlist is empty)```
Azure Machine learning workspace configured with overly permissive network access
This policy identifies Machine learning workspaces configured with overly permissive network access. Overly permissive public network access allows access to the resource through the internet using a public IP address. It is recommended to restrict access to your workspace and endpoint to specific public internet IP address ranges, so that the workspace is accessible only to restricted entities. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To restrict internet IP ranges on your existing Machine learning workspace, follow the below URL:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link?view=azureml-api-2&tabs=azure-portal#enable-public-access-only-from-internet-ip-ranges-preview
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(21,21) or destinationPortRanges[*] contains _Port.inRange(21,21) ))] exists```
Azure Network Security Group allows all traffic on FTP (TCP Port 21)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on FTP (TCP Port 21). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict FTP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.

1. Log in to the Azure Portal.
2. Select 'All services'.
3. Select 'Network security groups', under NETWORKING.
4. Select the Network security group you need to modify.
5. Select 'Inbound security rules' under Settings.
6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.
7. 'Save' your changes.
```config from cloud.resource where api.name = 'aws-waf-classic-web-acl-resource' AND json.rule = resources.apiGateway[*] exists or resources.applicationLoadBalancer[*] exists```
AWS WAF Classic (Regional) in use
This policy identifies AWS WAF Classic resources which are in use. As a best practice, create AWS WAFv2 resources and configure them accordingly to protect against application-layer attacks. The block criteria in the WAFv2 web access control list (web ACL) have more capabilities than WAF Classic to filter out malicious traffic. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To migrate a web ACL from AWS WAF Classic to AWS WAF, follow the below URL:
https://docs.aws.amazon.com/waf/latest/developerguide/waf-migrating-procedure.html
```config from cloud.resource where api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals "0.0.0.0/0" and direction equals "inbound" and ( protocol equals "all" or ( protocol equals "tcp" and (( port_max greater than 22 and port_min less than 22 ) or ( port_max equals 22 and port_min equals 22 )))))] exists as X; config from cloud.resource where api.name = 'ibm-vpc' as Y; filter ' $.X.id equals $.Y.default_security_group.id '; show X;```
IBM Cloud Default Security Group allows all traffic on SSH port (22)
This policy identifies IBM Cloud Default Security Groups that allow all traffic on SSH port 22. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. A VPC comes with a default security group whose initial configuration allows access from all members that are attached to this security group. If you do not specify a security group when you launch a Virtual Server, the Virtual Server is automatically assigned to this default security group. As a result, the Virtual Server will be at risk of uncontrolled connectivity. It is recommended that the Default Security Group allows only the network ports, protocols, and services with validated business needs that are running on each system. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:
1. Log in to the IBM Cloud Console
2. Click on the 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'
3. Select the 'Security Groups' reported in the alert
4. Go to 'Inbound rules' under the 'Rules' tab
5. Click on the three dots on the right corner of the row containing the rule that has 'Source type' as 'Any' and 'Value' as 22 (or a range containing 22)
6. Click on 'Delete'
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = state contains running and metadataOptions.httpEndpoint equals enabled and metadataOptions.httpTokens does not contain required```
AWS EC2 instance not configured with Instance Metadata Service v2 (IMDSv2)
This policy identifies AWS EC2 instances that are not configured with Instance Metadata Service v2 (IMDSv2). With IMDSv2, every request is protected by session authentication. IMDSv2 protects against misconfigured-open website application firewalls, misconfigured-open reverse proxies, unpatched SSRF vulnerabilities, and misconfigured-open layer-3 firewalls and network address translation. It is recommended to use only IMDSv2 for all your EC2 instances. For more details: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Log in to the AWS Console
2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.
3. Refer to the 'Configure instance metadata options for existing instances' section in the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html

NOTE: Take precautions before you enforce the use of IMDSv2, as applications or agents that use IMDSv1 for instance metadata access will break.
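Enforcing IMDSv2 on a running instance is a one-call change. A minimal boto3 sketch, assuming a hypothetical instance ID:
```python
import boto3

ec2 = boto3.client("ec2")

# Require session tokens (IMDSv2) while keeping the metadata endpoint enabled.
# The instance ID is a placeholder.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",
    HttpTokens="required",
    HttpEndpoint="enabled",
)
```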
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains CreateCustomerGateway and $.X.filterPattern contains DeleteCustomerGateway and $.X.filterPattern contains AttachInternetGateway and $.X.filterPattern contains CreateInternetGateway and $.X.filterPattern contains DeleteInternetGateway and $.X.filterPattern contains DetachInternetGateway) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for Network gateways changes
This policy identifies the AWS regions which do not have a log metric filter and alarm for Network gateways changes. Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path. It is recommended that a metric filter and alarm be established for changes to network gateways. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled, logging all management events in your account, and not set with the specific log metric filter and alarm. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Sign in to AWS Console
2. Navigate to the CloudWatch dashboard
3. Click on 'Log groups' in the 'Logs' section (Left panel)
4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi-region enabled with all management events captured) and click the 'Create Metric Filter' button.
5. In the 'Define Logs Metric Filter' page, add the 'Filter pattern' value as
{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }
and click on 'Assign Metric'
6. In the 'Create Metric Filter and Assign a Metric' page, choose the Filter Name and Metric Details parameters according to your requirement and click on 'Create Filter'
7. Click on 'Create Alarm':
 - In Step 1, specify metric details and condition details as required and click on 'Next'
 - In Step 2, select an SNS topic either by creating a new topic or using an existing SNS topic/ARN and click on 'Next'
 - In Step 3, select a name and description for the alarm and click on 'Next'
 - In Step 4, preview the data entered and click on 'Create Alarm'
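The filter and alarm can also be created programmatically. A minimal boto3 sketch, assuming hypothetical names for the CloudTrail log group, metric, and SNS topic ARN:
```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# All identifiers below are placeholders.
log_group = "CloudTrail/DefaultLogGroup"
metric_name = "NetworkGatewayChanges"
namespace = "CISBenchmark"
sns_topic_arn = "arn:aws:sns:us-east-1:123456789012:security-alarms"

pattern = (
    "{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || "
    "($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || "
    "($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }"
)

# Emit a metric data point for every matching CloudTrail event.
logs.put_metric_filter(
    logGroupName=log_group,
    filterName="network-gateway-changes",
    filterPattern=pattern,
    metricTransformations=[{
        "metricName": metric_name,
        "metricNamespace": namespace,
        "metricValue": "1",
    }],
)

# Alarm as soon as one change event is observed in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="network-gateway-changes-alarm",
    MetricName=metric_name,
    Namespace=namespace,
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[sns_topic_arn],
)
```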
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS equals * or Principal equals *) and Condition does not exist)] exists```
AWS SNS topic is exposed to unauthorized access
This policy identifies AWS SNS topics that are exposed to unauthorized access. Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. To protect these messages from attackers and unauthorized access, permissions should be given only to authorized users. For more details: https://docs.aws.amazon.com/sns/latest/dg/sns-security-best-practices.html#ensure-topics-not-publicly-accessible This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows:
1. Log in to the AWS Console
2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.
3. Navigate to the 'Simple Notification Service' dashboard
4. Go to 'Topics' from the left panel
5. Select the reported SNS topic
6. Click on the 'Edit' button from the top options bar
7. On the edit page, go to the 'Access Policy - optional' section
8. Add a restrictive 'Condition' statement to the JSON editor to specify who can access the topic, or make the 'Principal' restrictive so that only limited resources are allowed
9. Click on 'Save changes'
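As an API-level sketch of step 8, the snippet below rewrites the topic policy with boto3 so that the wildcard principal is constrained by a source-account condition. The topic ARN and account ID are hypothetical, and the statement is illustrative rather than a prescribed policy:
```python
import json

import boto3

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-east-1:123456789012:my-topic"  # hypothetical

# Allow publish/subscribe only for principals in the owning account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["sns:Publish", "sns:Subscribe"],
        "Resource": topic_arn,
        "Condition": {"StringEquals": {"aws:SourceOwner": "123456789012"}},
    }],
}

sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="Policy",
    AttributeValue=json.dumps(policy),
)
```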
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dataproc-clusters-list' AND json.rule = config.encryptionConfig.gcePdKmsKeyName does not exist and config.encryptionConfig.kmsKey does not exist```
GCP Dataproc Cluster not configured with Customer-Managed Encryption Key (CMEK)
This policy identifies Dataproc Clusters that are not configured with CMEK. Dataproc cluster and job data are stored on the persistent disks associated with the Compute Engine VMs in the cluster, as well as in a Cloud Storage staging bucket. As a security best practice, using CMEK to encrypt this data on the persistent disks and the bucket is advisable and provides more control to the user. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows:
Currently, it is not possible to update the encryption key for a GCP Dataproc Cluster. It is recommended to create a new cluster with an appropriate CMEK and migrate all workloads from the old cluster to the new cluster.

To configure the encryption key for a GCP Dataproc Cluster at creation time, please refer to the URL given below:
https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/customer-managed-encryption#use_cmek_with_cluster_data
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cognito-identity-pool' AND json.rule = allowUnauthenticatedIdentities is true```
Copy of AWS Cognito identity pool allows unauthenticated guest access
This policy identifies AWS Cognito identity pools that allow unauthenticated guest access. Unauthenticated guest access in AWS Cognito identity pools allows unauthenticated users to assume a role in your AWS account. These unauthenticated users will be granted the permissions of the assumed role, which may have more privileges than intended. This could lead to unauthorized access or data leakage. It is recommended to disable unauthenticated guest access for Cognito identity pools. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
To deactivate guest access in an identity pool:
1. Log in to the AWS console
2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner
3. Navigate to the Amazon Cognito dashboard
4. Under the 'Identity pools' section, select the reported identity pool
5. In the 'User access' tab, under the 'Guest access' section,
6. Click on the 'Deactivate' button to deactivate the configured guest access

NOTE: Before you deactivate unauthenticated guest access, you must have at least one authenticated access method configured in your identity pool.
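A minimal boto3 sketch of the same change, assuming a hypothetical identity pool ID; the current pool configuration is fetched first because update_identity_pool replaces the whole configuration:
```python
import boto3

cognito = boto3.client("cognito-identity")
pool_id = "us-east-1:11111111-2222-3333-4444-555555555555"  # hypothetical

# Fetch the existing configuration, flip the guest-access flag, write it back.
pool = cognito.describe_identity_pool(IdentityPoolId=pool_id)
pool.pop("ResponseMetadata", None)
pool["AllowUnauthenticatedIdentities"] = False
cognito.update_identity_pool(**pool)
```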
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ( $.X.filter contains "resource.type =" or $.X.filter contains "resource.type=" ) and ( $.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=" ) and $.X.filter contains "gce_route" and ( $.X.filter contains "protoPayload.methodName:" or $.X.filter contains "protoPayload.methodName :" ) and ( $.X.filter does not contain "protoPayload.methodName!:" and $.X.filter does not contain "protoPayload.methodName !:" ) and $.X.filter contains "compute.routes.delete" and $.X.filter contains "compute.routes.insert"'; show X; count(X) less than 1```
bobby remediation 1 This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: ddddd.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and engine equals postgres and engineVersion is member of ('13.2','13.1','12.6','12.5','12.4','12.3','12.2','11.11','11.10','11.9','11.8','11.7','11.6','11.5','11.4','11.3','11.2','11.1','10.16','10.15','10.14','10.13','10.12','10.11','10.10','10.9','10.7','10.6','10.5','10.4','10.3','10.1','9.6.21','9.6.20','9.6.19','9.6.18','9.6.17','9.6.16','9.6.15','9.6.14','9.6.12','9.6.11','9.6.10','9.6.9','9.6.8','9.6.6','9.6.5','9.6.3','9.6.2','9.6.1','9.5','9.4','9.3')```
AWS RDS PostgreSQL exposed to local file read vulnerability
This policy identifies AWS RDS PostgreSQL instances which are exposed to a local file read vulnerability. AWS RDS PostgreSQL installed with the vulnerable 'log_fdw' extension is exposed to a local file read vulnerability, due to which an attacker could gain access to local system files of the database instance within their account, including a file which contained credentials specific to PostgreSQL. It is highly recommended to upgrade AWS RDS PostgreSQL to the latest version. For more information, see https://aws.amazon.com/security/security-bulletins/AWS-2022-004/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
Amazon has deprecated the affected versions of RDS for PostgreSQL and customers can no longer create new instances with the affected versions.

To upgrade to the latest version of Amazon RDS for PostgreSQL, please follow the below URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING and $.X.status.state does not contain TERMINATED and $.X.status.state does not contain TERMINATED_WITH_ERRORS) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is true) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.LocalDiskEncryptionConfiguration does not exist)' ; show X;```
AWS EMR cluster is not enabled with local disk encryption
This policy identifies AWS EMR clusters that are not enabled with local disk encryption. Applications use the local file system on each cluster instance for intermediate data throughout workloads, and data can be spilled to disk when it overflows memory. With local disk encryption in place, data at rest can be protected. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows:
1. Login to the AWS Console.
2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.
3. Navigate to the 'EMR' dashboard from the 'Services' dropdown.
4. Go to 'Security configurations', click 'Create'.
5. On the Create security configuration window:
6. In the 'Name' box, provide a name for the new EMR security configuration.
7. Under 'Local disk encryption', check the box 'Enable at-rest encryption for local disks'.
8. Select the appropriate key provider type from the 'Key provider type' dropdown list.
9. Click on the 'Create' button.
10. On the left menu of the EMR dashboard, click 'Clusters'.
11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.
12. In the Cloning popup, choose 'Yes' and click 'Clone'.
13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.
14. From the 'Security configuration' dropdown, select the name of the security configuration created in steps 4 to 9, and click 'Create Cluster'.
15. Once the new cluster is set up, verify it is working and terminate the source cluster.
16. On the left menu of the EMR dashboard, click 'Clusters', and from the list of clusters select the source cluster which is alerted.
17. Click on the 'Terminate' button from the top menu.
18. On the 'Terminate clusters' pop-up, click 'Terminate'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-users' AND json.rule = groupList is empty```
AWS IAM user is not a member of any IAM group
This policy identifies AWS IAM users that are not members of any IAM group. It is generally a best practice to assign IAM users to at least one IAM group. If IAM users are not in a group, permission management and auditing become complicated, increasing the risk of privilege mismanagement and security oversights. It also leads to higher operational overhead and potential non-compliance with security best practices. It is recommended to ensure all IAM users are part of at least one IAM group, according to your business requirements, to simplify permission management, enforce consistent security policies, and reduce the risk of privilege mismanagement. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
To add a user to an IAM user group (console)

1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/
2. In the navigation pane, choose 'Users' under the 'Access management' section and then choose the name of the user that is reported
3. Choose the 'Groups' tab and then choose 'Add user to groups'
4. Select the check box next to the groups under 'Group Name' according to your requirements
5. Choose 'Add user to group(s)'
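Via the API this is a single call. A minimal boto3 sketch, assuming hypothetical user and group names:
```python
import boto3

iam = boto3.client("iam")

# Attach the reported user to an existing group; names are placeholders.
iam.add_user_to_group(GroupName="developers", UserName="reported-user")
```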
```config from cloud.resource where cloud.type = 'alibaba_cloud' and api.name = 'alibaba-cloud-oss-bucket-info' AND json.rule = bucket.logging.targetBucket does not exist```
Alibaba Cloud OSS bucket logging not enabled
This policy identifies Alibaba Cloud Object Storage Service (OSS) buckets that do not have logging enabled. Enabling logging for OSS buckets helps capture access and operation events, which are critical for security monitoring, troubleshooting, and auditing. Without logging, you lack visibility into who accesses and interacts with your bucket, potentially missing unauthorized access or suspicious behaviour. As a security best practice, it is recommended to enable logging for OSS buckets. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Log in to Alibaba Cloud Portal
2. Navigate to Object Storage Service
3. In the bucket-list pane, click on the reported OSS bucket
4. Under Log, click Configure
5. Configure bucket logging
6. Click the Enabled checkbox
7. Select the Target Bucket from the list
8. Enter a Target Prefix
9. Click Save
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = versioningConfiguration.status equals Enabled and (versioningConfiguration.mfaDeleteEnabled does not exist or versioningConfiguration.mfaDeleteEnabled is false) AND (bucketLifecycleConfiguration does not exist or bucketLifecycleConfiguration.rules[*].status equals Disabled)```
AWS S3 bucket is not configured with MFA Delete
This policy identifies S3 buckets which do not have Multi-Factor Authentication (MFA) enabled to delete S3 object versions. Enabling MFA Delete on a versioned bucket adds another layer of protection: in order to permanently delete an object version, or to suspend or reactivate versioning on the bucket, a valid code from the account's MFA device is required. Note: MFA Delete only works for CLI or API interaction, not in the AWS Management Console. Also, you cannot make versioned DELETE actions with MFA using IAM user credentials; you must use your root AWS account. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows:
Using the console you can enable versioning on the bucket, but you cannot enable MFA Delete.
You can do it only with the AWS CLI:
aws s3api put-bucket-versioning --bucket <BUCKET_NAME> --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "<MFA_SERIAL_NUMBER> <MFA_CODE>"

NOTE: The bucket owner, the AWS account that created the bucket (root account), and all authorized IAM users can enable versioning, but only the bucket owner (root account) can enable MFA Delete. Successful execution will enable S3 bucket versioning and MFA Delete on the bucket.
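The equivalent boto3 call is sketched below; the bucket name, MFA device serial, and token code are hypothetical, and the call must be made with root-account credentials, mirroring the CLI note above:
```python
import boto3

s3 = boto3.client("s3")

# The MFA argument is the device serial number and the current
# token code, separated by a space. Values are placeholders.
s3.put_bucket_versioning(
    Bucket="my-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```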
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case kubernetes and state equal ignore case "normal" and serviceEndpoints.publicServiceEndpointEnabled is true```
IBM Cloud Kubernetes clusters are accessible by using public endpoint
This policy identifies IBM Cloud Kubernetes clusters which have the public service endpoint enabled. If a cluster has the public service endpoint enabled, the cluster will be accessible on an Internet-routable IP address. It is recommended that the public service endpoint is disabled and the private service endpoint is used instead for better security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
1. Log in to the IBM Cloud Console
2. Click on the 'Menu Icon' and navigate to 'Kubernetes' and then 'Clusters'
3. Select the cluster reported in the alert
4. Under the 'Overview' tab, in the 'Networking' section, click the 'Disable' radio button for the public service endpoint
5. In the next screen, click 'Disable' to confirm
6. In the next screen, click Refresh to initiate an API server refresh
7. Click on 'Save'
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = 'authTokens[?any(lifecycleState equals ACTIVE and (_DateTime.ageInDays(timeCreated) > 90))] exists'```
OCI users Auth Tokens have aged more than 90 days without being rotated
This policy identifies all of your IAM User Auth Tokens which have not been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect OCI Auth Token access, whether direct or via SDKs or the OCI CLI. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows:
1. Log in to the OCI Console
2. Select Identity & Security from the Services menu
3. Select Users from the Identity menu
4. Click on an individual user under the Name heading
5. Click on Auth Tokens in the lower left-hand corner of the page
6. Delete any auth token with a date of 90 days or older under the Created column of the Auth Tokens
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-user' AND json.rule = userType equals Guest as X; config from cloud.resource where api.name = 'azure-role-assignment' AND json.rule = properties.principalType contains User and properties.roleDefinition.properties.roleName is member of ("Owner") as Y; filter '$.X.id equals $.Y.properties.principalId'; show X;```
Custom Azure Guest User with owner permissions
This policy identifies Azure Guest users with owner permissions to the subscription. Removing external users with owner permissions to your subscriptions prevents unmonitored and unwanted access to your subscription. It is recommended to remove guest users' owner permissions from the subscription. Refer to the below link for more details: https://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
To restrict Azure Guest user access, follow the below URL:
https://learn.microsoft.com/en-us/azure/active-directory/enterprise-users/users-restrict-guest-permissions
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = 'access_key_1_active is true and access_key_2_active is true'```
AWS IAM user has two active Access Keys
This policy identifies IAM users who have two active Access Keys. Each IAM user can have up to two Access Keys; having two Keys instead of one increases the chances of accidental exposure, so it needs to be ensured that unused Access Keys are deleted. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows:
1. Sign in to the AWS Console and navigate to the 'IAM' service.
2. Click on Users in the navigation pane.
3. For the identified IAM user which has two active Access Keys, take appropriate action based on the policies of your company.
4. Create another IAM user with the specific objective performed by the second Access Key.
5. Delete one of the unused Access Keys.
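To find which of the two keys is stale before deleting one, the last-used timestamps can be compared. A minimal boto3 sketch, assuming a hypothetical user name:
```python
import boto3

iam = boto3.client("iam")
user = "reported-user"  # hypothetical user name

# List both keys with their last-used time so the stale one can be removed.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
    print(key["AccessKeyId"], key["Status"],
          last_used["AccessKeyLastUsed"].get("LastUsedDate"))

# After confirming which key is unused (placeholder key ID):
# iam.delete_access_key(UserName=user, AccessKeyId="AKIA...")
```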
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = iamConfiguration.uniformBucketLevelAccess.enabled contains false```
Copy of a Copy Maybe GCP cloud storage bucket with uniform bucket-level access disabled
This policy identifies GCP storage buckets for which uniform bucket-level access is disabled. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either. It is recommended that uniform bucket-level access is enabled on Cloud Storage buckets. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows:
1. Log in to GCP Console
2. Navigate to 'Storage'
3. Click on 'Browser' to get the list of storage buckets
4. Search for the alerted bucket and click on the bucket name
5. From the top menu, go to the 'PERMISSION' tab
6. Under the 'Access control' section, click on 'SWITCH TO UNIFORM'
7. On the pop-up window, select 'Uniform'
8. Click on 'Save'
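A minimal sketch of the same switch with the google-cloud-storage client library, assuming a hypothetical bucket name and default application credentials:
```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")  # hypothetical bucket name

# Enable uniform bucket-level access; object ACLs stop being evaluated.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()
```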
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'nodePools[*].config.serviceAccount contains default'```
GCP Kubernetes Engine Cluster Nodes have default Service account for Project access This policy identifies Kubernetes Engine Cluster Nodes which have the default Service account for Project access. By default, Kubernetes Engine nodes are given the Compute Engine default service account. This account has broad access and more permissions than are required to run your Kubernetes Engine cluster. You should create and use a least-privileged service account to run your Kubernetes Engine cluster instead of using the Compute Engine default service account. If you are not creating a separate service account for your nodes, you should limit the scopes of the node service account to reduce the possibility of privilege escalation in an attack. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: A GCP Kubernetes Cluster's Service account can be chosen only at cluster creation time. So to fix this alert, create a new cluster with a least-privileged Service account and then migrate all required cluster node data from the reported cluster to the new cluster.\nTo create the cluster with a new Service account which has the privileges you need, perform the following steps:\n1. Login to GCP Portal\n2. Click on 'CREATE CLUSTER'\n3. Choose required name/value for cluster fields\n4. Click on 'More'\n5. Choose a 'Service account' which has the least privilege under the Project access section, instead of the default 'Compute Engine default service account'\nNOTE: The Compute Engine default service account has devstorage.read_only, logging.write, monitoring, service.management.readonly, servicecontrol, and trace.append privileges/scopes.\nYou can configure a service account with more restrictive privileges and assign the same.\n6. Click on 'Create'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.registryId))] exists```
AWS ECR private repository with cross-account access This policy identifies AWS ECR private repositories that are configured with cross-account access. An ECR repository is a storage location within Amazon Elastic Container Registry (ECR) where Docker container images are stored and managed. Granting cross-account access to an ECR repository risks unauthorized access and data exposure, requiring strict policy controls and monitoring. It is recommended to implement strict access controls and allow only trusted entities to access an ECR repository to mitigate security risks. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict access in the AWS ECR private repository policy, perform the following actions:\n \n1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'ECR' dashboard from the 'Services' dropdown\n4. In the navigation pane, choose 'Repositories'\n5. On the Repositories page, select the repository for which the alert is generated\n6. From the repository image list view, in the navigation pane, choose 'Permissions' from the 'Actions' dropdown, and Edit.\n7. On the Edit permissions page, click on 'Edit policy JSON' to modify the JSON so that the Principal is restrictive:\n7a. Remove the statements that grant access to actions to other AWS accounts\n or\n 7b. Remove the permitted actions from the statements\n8. After modifications, click on 'Save'.
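A hedged boto3 sketch of step 7a, assuming a hypothetical repository name; it drops statements whose principals do not mention the current account, which is a simplified check — review the resulting policy before applying it:
```python
import json

import boto3

ecr = boto3.client("ecr")
repo = "example-repo"  # hypothetical repository name
account_id = boto3.client("sts").get_caller_identity()["Account"]

policy = json.loads(ecr.get_repository_policy(repositoryName=repo)["policyText"])
# Keep only statements whose Principal references this account (simplified check).
policy["Statement"] = [
    s for s in policy["Statement"]
    if account_id in json.dumps(s.get("Principal", {}))
]
ecr.set_repository_policy(repositoryName=repo, policyText=json.dumps(policy))
```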
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals KeyVaults and properties.pricingTier does not equal Standard)] exists```
Azure Microsoft Defender for Cloud is set to Off for Key Vault This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) subscriptions for which the Defender setting for Key Vault is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Key Vault. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Key Vault' Select 'On' under Plan.\n8. Select 'Save'.
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains "resource.type =" or $.X.filter contains "resource.type=") and ($.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=") and $.X.filter contains "gce_firewall_rule" and ($.X.filter contains "jsonPayload.event_subtype=" or $.X.filter contains "jsonPayload.event_subtype =") and ($.X.filter does not contain "jsonPayload.event_subtype!=" and $.X.filter does not contain "jsonPayload.event_subtype !=") and $.X.filter contains "compute.firewalls.patch" and $.X.filter contains "compute.firewalls.insert"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for VPC Network Firewall rule changes This policy identifies the GCP accounts which do not have a log metric filter and alert for VPC Network Firewall rule changes. Monitoring Create or Update firewall rule events gives insight into network access changes and may reduce the time it takes to detect suspicious activity. It is recommended to create a metric filter and alarm to detect VPC Network Firewall rule changes. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="gce_firewall_rule" AND jsonPayload.event_subtype="compute.firewalls.patch" OR jsonPayload.event_subtype="compute.firewalls.insert"\n6. Click on 'CREATE METRIC'.\n7. Under the 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (vertical 3 dots) on the right side of the metric\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then click on 'NEXT'\n12. Click on 'CREATE POLICY'.
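For step 5, a sketch of creating the log-based metric with the google-cloud-logging client; the metric name is hypothetical, and the alerting policy from steps 7-12 still needs to be created (for example, in the Monitoring console):
```python
from google.cloud import logging

client = logging.Client()
# Log-based metric matching firewall rule inserts and patches (the step 5 filter).
metric = client.metric(
    "firewall-rule-changes",  # hypothetical metric name
    filter_=(
        'resource.type="gce_firewall_rule" AND '
        '(jsonPayload.event_subtype="compute.firewalls.patch" OR '
        'jsonPayload.event_subtype="compute.firewalls.insert")'
    ),
    description="Counts VPC firewall rule creations and updates",
)
metric.create()
```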
```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = ipPermissions[*] is empty or ipPermissionsEgress[*] is empty as Y; filter '$.X.securityGroups[*] contains $.Y.groupId'; show X;```
AWS Elastic Load Balancer v2 (ELBv2) load balancer with invalid security groups This policy identifies Elastic Load Balancer v2 (ELBv2) load balancers that do not have security groups with a valid inbound or outbound rule. A security group with no inbound/outbound rules will deny all incoming/outgoing requests. ELBv2 security groups should have at least one inbound and one outbound rule; an ELBv2 with no inbound/outbound permissions will deny all traffic to/from any resources configured behind it, making the load balancer useless. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on each security group; it will open the Security Group properties in a new tab in your browser.\n6. To check the inbound rules, click on 'Inbound Rules'\n7. If there are no rules, click on 'Edit rules' and add an inbound rule according to your ELBv2 functional requirement.\n8. To check the outbound rules, click on 'Outbound Rules'\n9. If there are no rules, click on 'Edit rules' and add an outbound rule according to your ELBv2 functional requirement.\n10. Click on 'Save'.
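A minimal boto3 sketch of steps 7 and 9, assuming a hypothetical security group ID and purely illustrative CIDRs and ports — substitute whatever your load balancer actually needs:
```python
import boto3

ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"  # hypothetical security group ID

# Allow inbound HTTPS from a known CIDR; adjust to your load balancer's needs.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "known clients"}],
    }],
)
# Allow outbound traffic to the backend targets' port.
ec2.authorize_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "backend targets"}],
    }],
)
```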
```config from cloud.resource where cloud.account = 'AWS Account' AND api.name = 'aws-ec2-describe-instances' AND json.rule = instanceId exists```
nsk_config_ec2 This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and (backupRetentionPeriod does not exist or backupRetentionPeriod less than 7)```
AWS RDS retention policy less than 7 days RDS retention policies for backups are an important part of your DR/BCP strategy. Recovering data from catastrophic failures, malicious attacks, or corruption often requires a several-day window of good backup material to draw on. As such, the best practice is to ensure your RDS clusters retain at least 7 days of backups, if not more (up to a maximum of 35). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Configure your RDS backup retention policy to at least 7 days.\n\n1. Go to the AWS console RDS dashboard.\n2. In the navigation pane, choose Instances.\n3. Select the database instance you wish to configure.\n4. Click on 'Modify'.\n5. Scroll down to Additional Configuration and set the retention period to at least 7 days under 'Backup retention period'.\n6. Click Continue.\n7. Under 'Scheduling of modifications' choose 'When to apply modifications'\n8. On the confirmation page, review the changes and click on 'Modify DB Instance' to save your changes.
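The same change as a boto3 sketch, with a hypothetical DB instance identifier; `ApplyImmediately=False` defers the change to the next maintenance window, matching step 7:
```python
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",  # hypothetical instance identifier
    BackupRetentionPeriod=7,            # valid range is 0-35; 0 disables backups
    ApplyImmediately=False,             # apply during the next maintenance window
)
```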
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-target-https-proxies' AND json.rule = 'sslPolicy does not exist or sslPolicy is empty'```
GCP Load balancer HTTPS target proxy configured with default SSL policy instead of custom SSL policy This policy identifies Load balancer HTTPS target proxies which are configured with the default SSL policy instead of a custom SSL policy. It is a best practice to use a custom SSL policy to access load balancers, as it gives you closer control over SSL/TLS versions and ciphers. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'advanced menu' hyperlink to view target proxies\n5. Click on 'Target proxies' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'URL map'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. For 'SSL policy', choose any custom SSL policy other than 'GCP default'\n11. Click on 'Done'\n12. Click on 'Update'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-automation-account' AND json.rule = properties.publicNetworkAccess does not exist or properties.publicNetworkAccess is true```
Azure Automation account configured with overly permissive network access This policy identifies Automation accounts configured with overly permissive network access. It is recommended to configure the Automation account with private endpoints so that the Automation account is accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Automation Account dashboard \n3. Click on the reported Automation account\n4. Under the 'Account Settings' menu, click on 'Networking'\n5. In 'Public access' tab, select 'Disable' for 'Public network access' \n6. In 'Private access' tab, Create a private endpoint with required parameters \n7. Click on 'Apply'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and httpsTrigger exists and httpsTrigger.securityLevel does not equal SECURE_ALWAYS```
GCP Cloud Function HTTP trigger is not secured This policy identifies GCP Cloud Functions for which the HTTP trigger is not secured. When you configure HTTP functions to be triggered only with HTTPS, user requests will be redirected to use the HTTPS protocol, which is more secure. It is recommended to set 'Require HTTPS' when configuring HTTP triggers while deploying your function. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service (Left Panel)\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Under section 'Trigger', click on 'EDIT'\n6. Select the checkbox against the field 'Require HTTPS'\n7. Click on 'SAVE'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals Udp and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists```
Azure Network Security Group having Inbound rule overly permissive to all traffic on UDP protocol This policy identifies Azure Network Security Groups (NSGs) which are overly permissive to all traffic on UDP protocol. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic from known sources, allowing only authorized protocols and ports. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = (status equals RUNNING and name does not start with "gke-") and metadata.items[?any(key contains "serial-port-logging-enable" and value equals "true")] exists```
GCP VM instance serial port output logging is enabled This policy identifies GCP VM instances that have serial port output logging enabled. The serial console feature in the VM instance does not support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. When the serial port output logging feature is enabled, the serial port output is retained even after an instance is stopped or deleted. It is recommended to disable serial port access and serial port output logging for all VM instances to avoid leakage of potentially sensitive data. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable serial port output logging on an existing GCP VM instance, follow the URL below:\nhttps://cloud.google.com/compute/docs/troubleshooting/viewing-serial-port-output#enable-stackdriver.
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is true) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration exists) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode contains SSE) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode does not contain KMS)' ; show X;```
AWS EMR cluster is not configured with SSE KMS for data at rest encryption (Amazon S3 with EMRFS) This policy identifies EMR clusters which are not configured with Server-Side Encryption (SSE-KMS) for data at rest encryption of Amazon S3 with EMRFS. As a best practice, use SSE-KMS for server-side encryption to encrypt the data in your EMR cluster and ensure full control over your data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'EMR' dashboard from the 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In the 'Name' box, provide a name for the new EMR security configuration\n7. For encryption at rest, click the checkbox for 'Enable at-rest encryption for EMRFS data in Amazon S3'\n8. From the dropdown 'Default encryption mode' select 'SSE-KMS'. Follow the link below for configuration steps.\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n9. Click on the 'Create' button.\n10. On the left menu of the EMR dashboard click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. In the Cloning popup, choose 'Yes' and click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop-down select the name of the security configuration created in steps 4 to 8, and click 'Create Cluster'\n15. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of the EMR dashboard click 'Clusters', and from the list of clusters select the source cluster which is alerted\n17. Click on the 'Terminate' button from the top menu\n18. On the 'Terminate clusters' pop-up, click 'Terminate'.
```config from cloud.resource where api.name = 'aws-appsync-graphql-api' AND json.rule = wafWebAclArn is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = (webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.wafWebAclArn'; show X;```
AWS AppSync attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AppSync APIs attached to a WAFv2 WebACL that is not configured with AWS Managed Rules (AMR) for the Log4j Vulnerability. As per the guidelines given by AWS, a WAFv2 WebACL attached to AppSync should be configured with the AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from the Log4j Vulnerability (CVE-2021-44228). For more information, please refer to the URL below: https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the AppSync console\n3. Click on the reported AppSync\n4. Choose 'Settings' in the navigation pane\n5. In the Web application firewall section, note down the associated AWS WAF web ACL\n6. Go to the noted WAF web ACL in AWS WAF & Shield Service\n7. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n8. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n9. Click on 'Add rules'.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-rds-instance' AND json.rule = 'Items[*].securityIPList contains 0.0.0.0/0 or Items[*].securityIPList contains 127.0.0.1'```
Alibaba Cloud ApsaraDB RDS allowlist group is not restrictive This policy identifies ApsaraDB for Relational Database Service (RDS) allowlist groups which are not restrictive. The value 0.0.0.0/0 indicates that all devices can access the RDS instance, while the default value 127.0.0.1 means that no devices can access the RDS instance. As a best practice, it is recommended that you periodically check and adjust your allowlists to maintain RDS security. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to ApsaraDB for RDS\n3. In the left-side navigation pane, click on 'Instances' \n4. Choose the reported instance, click on 'Manage'\n5. In the left-side navigation pane, click on 'Data Security'\n6. In the 'Data Security' section, click 'Edit' on the allowlist setting which has IP address 127.0.0.1 or 0.0.0.0/0 and update a restrictive IP address in the box as per your requirement. \n7. Click on 'Ok'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals SqlServerVirtualMachines and properties.pricingTier does not equal Standard)] exists```
Azure Microsoft Defender for Cloud is set to Off for SQL servers on machines This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) subscriptions for which the Defender setting for SQL servers on machines is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for SQL servers on machines. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'SQL servers on machines' Select 'On' under Plan.\n8. Select 'Save'.
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' as X; config from cloud.resource where api.name = 'aws-ec2-describe-volumes' as Y; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Z; filter "$.X.blockDeviceMappings[*].ebs.volumeId == $.Y.volumeId and $.Y.encrypted contains true and $.Y.kmsKeyId equals $.Z.key.keyArn and $.Z.keyMetadata.keyManager contains AWS and $.X.tags[?(@.key=='Name')].value does not contain CSR"; show Y; ```
Morgan_Stanley_custom_policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.endpointProtectionMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with "ASC Default"))'```
Azure Microsoft Defender for Cloud endpoint protection monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have endpoint protection monitoring set to disabled. Enabling endpoint protection will make sure that any issues or shortcomings in endpoint protection for all Microsoft Windows virtual machines are identified so that they can, in turn, be removed. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Monitor missing Endpoint Protection in Azure Security Center' to 'AuditIfNotExists'\n9. If no other changes required then Click on 'Review + save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudtrail-describe-trails' AND json.rule = 'cloudWatchLogsRoleArn equals null or cloudWatchLogsRoleArn does not exist'```
AWS CloudTrail trail logs are not integrated with CloudWatch Logs This policy identifies AWS CloudTrail trails whose logs are not integrated with CloudWatch Logs. Integrating CloudTrail trail logs with CloudWatch Logs enables real-time as well as historical activity logging, which further improves monitoring and alarm capability. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Admin Console and access the CloudTrail service.\n2. Click on Trails in the left-hand menu.\n3. Click on the identified CloudTrail and navigate to the 'CloudWatch Logs' section.\n4. Click on the 'Configure' tab\n5. Provide a log group name in the field 'New or existing log group'\n6. Click on 'Continue'\n7. On the next page, from the 'IAM role' dropdown select an IAM role with the required access or select 'Create a new IAM role'\n8. Click on 'Allow'.
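A boto3 sketch of the same integration, with hypothetical trail, log group, and role names; the log group ARN must end in `:*`, and the role must allow CloudTrail to write to the group:
```python
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.update_trail(
    Name="example-trail",  # hypothetical trail name
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/logs:*"
    ),
    CloudWatchLogsRoleArn=(
        "arn:aws:iam::111122223333:role/CloudTrail_CloudWatchLogs_Role"
    ),
)
```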
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = secrets[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is false```
Azure Key Vault secret has no expiration date (Non-RBAC Key vault) This policy identifies Azure Key Vault secrets that do not have an expiry date for the Non-RBAC Key vaults. As a best practice, set an expiration date for each secret and rotate the secret regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].name' | xargs -I {} az keyvault set-policy --name {} --certificate-permissions list listissuers --key-permissions list --secret-permissions list --spn <prismacloud_app_id> This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Key vaults'\n3. Select the Key vault instance where the secrets are stored\n4. Select 'Secrets', and select the secret that you need to modify\n5. Select the current version\n6. Set the expiration date\n7. 'Save' your changes.
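A short sketch using the azure-keyvault-secrets Python client, assuming hypothetical vault and secret names and a 90-day expiry chosen purely for illustration:
```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # hypothetical vault
    credential=DefaultAzureCredential(),
)
# Set an expiration date on the latest version of the secret.
expires = datetime.now(timezone.utc) + timedelta(days=90)
client.update_secret_properties("example-secret", expires_on=expires)
```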
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireUppercaseCharacters does not exist or requireUppercaseCharacters is false'```
Alibaba Cloud RAM password policy does not have an uppercase character This policy identifies Alibaba Cloud accounts that do not have an uppercase character in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Upper-Case Letter'\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains "resource.type =" or $.X.filter contains "resource.type=") and ($.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=") and $.X.filter contains "gce_network" and ($.X.filter contains "jsonPayload.event_subtype=" or $.X.filter contains "jsonPayload.event_subtype =") and ($.X.filter does not contain "jsonPayload.event_subtype!=" and $.X.filter does not contain "jsonPayload.event_subtype !=") and $.X.filter contains "compute.networks.insert" and $.X.filter contains "compute.networks.patch" and $.X.filter contains "compute.networks.delete" and $.X.filter contains "compute.networks.removePeering" and $.X.filter contains "compute.networks.addPeering"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for VPC network changes This policy identifies the GCP accounts which do not have a log metric filter and alert for VPC network changes. Monitoring network insertion, patching, deletion, removePeering and addPeering activities will help identify changes that could impact VPC traffic flow. It is recommended to create a metric filter and alarm to detect activities related to the insertion, patching, deletion, removePeering and addPeering of VPC networks. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="gce_network" AND jsonPayload.event_subtype="compute.networks.insert" OR jsonPayload.event_subtype="compute.networks.patch" OR jsonPayload.event_subtype="compute.networks.delete" OR jsonPayload.event_subtype="compute.networks.removePeering" OR jsonPayload.event_subtype="compute.networks.addPeering"\n6. Click on 'CREATE METRIC'.\n7. Under the 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (vertical 3 dots) on the right side of the metric\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then click on 'NEXT'\n12. Click on 'CREATE POLICY'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = KmsMasterKeyId exists and KmsMasterKeyId equal ignore case "alias/aws/sns"```
AWS SNS Topic not encrypted by Customer Managed Key (CMK) This policy identifies AWS SNS Topics that are not encrypted by a Customer Managed Key (CMK). AWS SNS Topics are used to send notifications to subscribers and might contain sensitive information. SNS Topics are encrypted by default with an AWS managed key, but users can specify a CMK to get enhanced security, control over the encryption key, and compliance with any regulatory requirements. As a security best practice, using a CMK to encrypt your SNS Topics is advisable, as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the Amazon SNS Dashboard\n4. Click on 'Topics'\n5. Click on the reported Topic\n6. Click on the 'Edit' button from the console top menu to access the topic configuration settings.\n7. Under 'Encryption – optional', ensure that the 'Enable encryption' option is selected.\n8. Select an 'AWS KMS key' other than the default '(Default) alias/aws/sns' key, based on your business requirement.\n9. Choose 'Save changes' to apply the configuration changes.
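A boto3 sketch of steps 7-8, assuming a hypothetical topic ARN and CMK alias:
```python
import boto3

sns = boto3.client("sns")
sns.set_topic_attributes(
    TopicArn="arn:aws:sns:us-east-1:111122223333:example-topic",  # hypothetical
    AttributeName="KmsMasterKeyId",
    AttributeValue="alias/example-cmk",  # a customer managed key, not alias/aws/sns
)
```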
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.ipRangeFilter is empty```
Azure Cosmos DB IP range filter not configured This policy identifies Azure Cosmos DB accounts with no IP range filter configured. Access to Azure Cosmos DB should not be open to all networks. It is recommended to add a defined set of IPs / IP ranges which can access Azure Cosmos DB from the Internet. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to Azure Cosmos DB\n3. Select the reported Cosmos DB resource \n4. Click on 'Firewall and virtual networks' under 'Settings'\n5. Click on the 'Selected networks' radio button\n6. Under 'Firewall' add IP ranges\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule = 'disabled is false and direction equals INGRESS and allowed[*] exists and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and targetTags[*] does not exist and targetServiceAccounts[*] does not exist'```
GCP Firewall rule allows inbound traffic from anywhere with no specific target set This policy identifies GCP Firewall rules which allow inbound traffic from anywhere with no target filtering. The default target is all instances in the network. The use of target tags or target service accounts allows the rule to apply to select instances. Not using any firewall rule filtering may allow a bad actor to brute force their way into the system and potentially get access to the entire network. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the instructions below to restrict the default target parameter (all instances in the network):\n\n1. Login to GCP Console.\n2. Go to VPC Network.\n3. Go to the Firewall rules.\n4. Click on each Firewall rule reported.\n5. Click Edit.\n6. Change the Targets field from 'All instances in the network' to 'Specified target tags' or 'Specified service account'.\n7. Type the target tag/target service account into the Target tags/Target service account field respectively.\n8. Review Source IP ranges and change to specific IP ranges if traffic is not required to be allowed from anywhere.\n9. Click Save.\n\nReference:\nhttps://cloud.google.com/vpc/docs/add-remove-network-tags.
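A sketch of step 6 using the google-cloud-compute client; the project, rule, and tag names are hypothetical:
```python
from google.cloud import compute_v1

client = compute_v1.FirewallsClient()
# Narrow the rule to instances carrying a specific network tag.
operation = client.patch(
    project="example-project",    # hypothetical project ID
    firewall="example-rule",      # hypothetical firewall rule name
    firewall_resource=compute_v1.Firewall(target_tags=["web-frontend"]),
)
operation.result()  # wait for the patch to complete
```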
```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and ((access_key_1_last_used_date != N/A and _DateTime.ageInDays(access_key_1_last_used_date) > 90) or (access_key_1_last_used_date == N/A and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 90))) or (access_key_2_active is true and ((access_key_2_last_used_date != N/A and _DateTime.ageInDays(access_key_2_last_used_date) > 90) or (access_key_2_last_used_date == N/A and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 90)))'```
Informational - AWS access keys not used for more than 90 days This policy identifies IAM users whose access keys have not been used in more than 90 days. Access keys allow users programmatic access to resources. If an access key has not been used in the past 90 days, that access key needs to be deleted, even if it is still marked active. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To delete the reported AWS User access key, follow the URL below:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/delete-access-key/.
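A boto3 sketch that reproduces roughly the same check from the credential report; note that report generation is asynchronous, so real code should poll until the report is ready:
```python
import csv
import io
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
iam.generate_credential_report()  # async; poll get_credential_report in real code
report = iam.get_credential_report()["Content"].decode("utf-8")

for row in csv.DictReader(io.StringIO(report)):
    for n in ("1", "2"):
        if row[f"access_key_{n}_active"] != "true":
            continue
        last_used = row[f"access_key_{n}_last_used_date"]
        ref = last_used if last_used != "N/A" else row[f"access_key_{n}_last_rotated"]
        if ref == "N/A":
            continue
        age = datetime.now(timezone.utc) - datetime.fromisoformat(ref)
        if age.days > 90:
            print(f"{row['user']}: access key {n} stale for {age.days} days")
```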
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-secretsmanager-secret' AND json.rule = replication.userManaged.replicas[*].customerManagedEncryption.kmsKeyName does not exist and replication.automatic.customerManagedEncryption.kmsKeyName does not exist```
GCP Secrets Manager secret not encrypted with CMEK This policy identifies GCP Secret Manager secrets that are not encrypted with a Customer-Managed Encryption Key (CMEK). GCP Secret Manager securely stores and manages access to API keys, passwords, certificates, and other sensitive information. Using CMEK for secrets gives you complete control over the encryption keys protecting your sensitive data, ensuring that only authorized users with access to these keys can decrypt and access the information. Without CMEK, data is encrypted with Google-managed keys, which may not provide the level of control required for handling sensitive data in regulated industries. It is recommended to encrypt Secret Manager secrets with a Customer-Managed Encryption Key (CMEK) for enhanced data control and compliance. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Google Cloud Management Console. Navigate to the 'Secrets Manager' page\n2. Under 'Secrets', click on the reported secret\n3. Select 'EDIT SECRET' on the top navigation bar\n4. Under the 'Edit secret' page, under 'Encryption', select the 'Customer-managed encryption key (CMEK)' radio button and Select a CMEK key for each location\n5. Click on 'UPDATE SECRET'..
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and ($.X.filterPattern contains "eventSource=" or $.X.filterPattern contains "eventSource =") and ($.X.filterPattern does not contain "eventSource!=" and $.X.filterPattern does not contain "eventSource !=") and $.X.filterPattern contains s3.amazonaws.com and $.X.filterPattern contains PutBucketAcl and $.X.filterPattern contains PutBucketPolicy and $.X.filterPattern contains PutBucketCors and $.X.filterPattern contains PutBucketLifecycle and $.X.filterPattern contains PutBucketReplication and $.X.filterPattern contains DeleteBucketPolicy and $.X.filterPattern contains DeleteBucketCors and $.X.filterPattern contains DeleteBucketLifecycle and $.X.filterPattern contains DeleteBucketReplication) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for S3 bucket policy changes This policy identifies the AWS regions which do not have a log metric filter and alarm for S3 bucket policy changes. Monitoring changes to S3 bucket policies may reduce the time to detect and correct permissive policies on sensitive S3 buckets. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled that logs all management events in your account and is not set with the specific log metric filter and alarm. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi-region enabled with all management events captured) and click the 'Create Metric Filter' button.\n5. On the 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\nand click on 'Assign Metric'\n6. On the 'Create Metric Filter and Assign a Metric' page, choose the Filter Name and Metric Details parameters according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or using an existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'.
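A boto3 sketch of steps 4-7, with hypothetical log group, metric namespace, and SNS topic names:
```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

pattern = (
    "{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || "
    "($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || "
    "($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || "
    "($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || "
    "($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }"
)
logs.put_metric_filter(
    logGroupName="CloudTrail/logs",  # hypothetical CloudTrail log group
    filterName="S3BucketPolicyChanges",
    filterPattern=pattern,
    metricTransformations=[{
        "metricName": "S3BucketPolicyChanges",
        "metricNamespace": "CISBenchmark",  # hypothetical namespace
        "metricValue": "1",
    }],
)
cloudwatch.put_metric_alarm(
    AlarmName="S3BucketPolicyChanges",
    Namespace="CISBenchmark",
    MetricName="S3BucketPolicyChanges",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # hypothetical
)
```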
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'automaticFailover equals disabled or automaticFailover does not exist'```
AWS ElastiCache Redis cluster with Multi-AZ Automatic Failover feature set to disabled This policy identifies ElastiCache Redis clusters which have the Multi-AZ Automatic Failover feature set to disabled. It is recommended to enable the Multi-AZ Automatic Failover feature for your Redis Cache cluster, which will improve primary node reachability by providing a read replica in case of network connectivity loss or loss of availability in the primary's availability zone for read/write operations. Note: Redis cluster Multi-AZ with automatic failover does not support T1 and T2 cache node types and is only available if the cluster has at least one read replica. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the ElastiCache Dashboard\n4. Click on Redis\n5. Select the reported Redis cluster\n6. Click on the 'Modify' button\n7. In the 'Modify Cluster' dialog box,\na. Set 'Multi-AZ' to 'Yes'\nb. Select the 'Apply Immediately' checkbox to apply the configuration changes immediately. If Apply Immediately is not selected, the changes will be processed during the next maintenance window.\nc. Click on 'Modify'.
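A boto3 sketch of the same modification, assuming a hypothetical replication group that already has at least one read replica:
```python
import boto3

elasticache = boto3.client("elasticache")
elasticache.modify_replication_group(
    ReplicationGroupId="example-redis",  # hypothetical replication group ID
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
    ApplyImmediately=True,  # otherwise applied in the next maintenance window
)
```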
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = properties.status equals "Active" and (properties.disableLocalAuth does not exist or properties.disableLocalAuth is false)```
Azure Service bus namespace not configured with Azure Active Directory (Azure AD) authentication This policy identifies Service bus namespaces that are not configured with Azure Active Directory (Azure AD) authentication and are enabled with local authentication. Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there's no need to store tokens in your code and risk potential security vulnerabilities. It is recommended to configure the Service bus namespaces with Azure AD authentication so that all actions are strongly authenticated. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure Azure Active Directory (Azure AD) authentication and disable local authentication on an existing Service bus, follow the instructions at the URL below:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/disable-local-authentication.
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = "((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))))" as X; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Y; filter '$.X.bucketName equals $.Y.s3BucketName'; show X;```
AWS CloudTrail bucket is publicly accessible This policy identifies publicly accessible S3 buckets that store CloudTrail data. These buckets contain sensitive audit data, and only authorized users and applications should have access. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on 'Permissions'\n5. If the 'Access Control List' is set to 'Public', follow the steps below\na. Under 'Access Control List', click on 'Everyone' and uncheck all items\nb. Click on Save\n6. If the 'Bucket Policy' is set to public, follow the steps below\na. Under 'Bucket Policy', modify the policy to remove public access\nb. Click on Save\nc. If the 'Bucket Policy' is not required, delete the existing 'Bucket Policy'.\n\nNote: Make sure updating the 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.
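In addition to removing the public ACL and policy grants, Block Public Access can be turned on at the bucket level; a boto3 sketch with a hypothetical bucket name (confirm CloudTrail delivery still works afterwards):
```python
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-cloudtrail-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```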
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains DeleteGroupPolicy and $.X.filterPattern contains DeleteRolePolicy and $.X.filterPattern contains DeleteUserPolicy and $.X.filterPattern contains PutGroupPolicy and $.X.filterPattern contains PutRolePolicy and $.X.filterPattern contains PutUserPolicy and $.X.filterPattern contains CreatePolicy and $.X.filterPattern contains DeletePolicy and $.X.filterPattern contains CreatePolicyVersion and $.X.filterPattern contains DeletePolicyVersion and $.X.filterPattern contains AttachRolePolicy and $.X.filterPattern contains DetachRolePolicy and $.X.filterPattern contains AttachUserPolicy and $.X.filterPattern contains DetachUserPolicy and $.X.filterPattern contains AttachGroupPolicy and $.X.filterPattern contains DetachGroupPolicy) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for IAM policy changes This policy identifies the AWS regions which do not have a log metric filter and alarm for IAM policy changes. Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact. It is recommended that a metric filter and alarm be established for changes made to Identity and Access Management (IAM) policies. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled that logs all management events in your account and is not set with the specific log metric filter and alarm. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi-region enabled with all management events captured) and click the 'Create Metric Filter' button.\n5. On the 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy) }\nand click on 'Assign Metric'\n6. On the 'Create Metric Filter and Assign a Metric' page, choose the Filter Name and Metric Details parameters according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or using an existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(user does not contain appspot.gserviceaccount.com and user does not contain developer.gserviceaccount.com and user does not contain cloudservices.gserviceaccount.com and user does not contain system.gserviceaccount.com and user does not contain cloudbuild.gserviceaccount.com) and (roles contains roles/editor or roles contains roles/owner)'```
GCP IAM primitive roles are in use This policy identifies GCP IAM users assigned primitive roles. Primitive roles are roles that existed prior to Cloud IAM. Primitive roles (owner, editor) are built-in and provide broader access to resources, making them prone to attacks and privilege escalation. Predefined roles provide more granular controls than primitive roles and should therefore be used. Note: For a new GCP project, service accounts are assigned role/editor permissions. GCP recommends not to revoke the permissions on the SA account. Reference: https://cloud.google.com/iam/docs/service-accounts Limitation: This policy alerts for Service agents, which are Google-managed service accounts. Service Agents are by default assigned some roles by Google Cloud and these roles shouldn't be revoked. Reference: https://cloud.google.com/iam/docs/service-agents In case any specific service agent needs to be bypassed, this policy can be cloned and modified accordingly. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: Review the projects / resources that have Primitive roles assigned to them and replace them with equivalent Predefined roles.\nNote: This policy alerts for Service agents which are Google-managed service accounts. Service Agents are by default assigned some roles by Google Cloud and these roles shouldn't be revoked.\nReference: https://cloud.google.com/iam/docs/service-agents\nDo not revoke the roles that are granted to service agents. If you revoke these roles, some Google Cloud services will no longer work.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-credential-user-registration-details' AND json.rule = isMfaRegistered is false as X; config from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = accountEnabled is true as Y; filter '$.X.userDisplayName equals $.Y.displayName'; show X;```
Custom AlertRule Azure AD MFA is not enabled for the user This policy identifies Azure users for whom Azure AD MFA (Active Directory Multi-Factor Authentication) is not enabled. Azure AD MFA is a simple best practice that adds an extra layer of protection on top of your user name and password. MFA provides increased security for your Azure account settings and resources. Enabling Azure AD Multi-Factor Authentication using Conditional Access policies is the recommended approach to protect users. For more details: https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To enable per-user Azure AD Multi-Factor Authentication, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```
AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'.
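The same change can be made with the AWS SDK. A minimal boto3 sketch, with placeholder cluster name and region:

```python
# Minimal boto3 sketch of the remediation: enable private endpoint access and
# disable public access on the EKS cluster API endpoint. Names are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")
eks.update_cluster_config(
    name="my-cluster",
    resourcesVpcConfig={
        "endpointPrivateAccess": True,   # keep node/API traffic inside the VPC
        "endpointPublicAccess": False,   # remove internet exposure
    },
)
```

Verify that anything managing the cluster (CI runners, kubectl users) can reach the VPC before disabling public access, or management access will be cut off.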
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = publicNetworkAccess equal ignore case Enabled and networkAccessPolicy equal ignore case AllowAll and managedBy contains virtualMachines```
Azure VM disk configured with overly permissive network access This policy identifies Azure Virtual Machine disks that are configured with overly permissive network access. Enabling public network access provides overly permissive network access on Azure Virtual Machine disks, increasing the risk of unauthorized access and potential security breaches. Public network access exposes sensitive data to external threats, which attackers could exploit to compromise VM disks. Disabling public access and using Azure Private Link reduces exposure, ensuring only trusted networks have access and enhancing the security of your Azure environment by minimizing the risk of data leaks and breaches. As a security best practice, it is recommended to disable public network access for Azure Virtual Machine disks. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Disks'\n3. Click on the reported disk\n4. Under 'Settings', go to 'Networking'\n5. Ensure that Network access is NOT set to 'Enable public access from all networks'\n6. Click 'Save'.
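The remediation can also be scripted. A sketch assuming a recent azure-mgmt-compute SDK; the subscription ID, resource group, and disk name are placeholders:

```python
# Sketch: disable public network access on a managed disk.
# Assumes azure-identity and a recent azure-mgmt-compute are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DiskUpdate

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.disks.begin_update(
    "my-resource-group",   # placeholder
    "my-disk",             # placeholder
    DiskUpdate(
        network_access_policy="DenyAll",     # or "AllowPrivate" with a disk access resource
        public_network_access="Disabled",
    ),
).result()
```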
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "databases-for-postgresql" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resourceGroupId","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud Service ID with IAM policies provide administrative privileges for Databases for PostgreSQL service This policy identifies IBM Cloud Service IDs that have a policy with administrator role permission for the 'Databases for PostgreSQL' service. Such a Service ID has full platform control as an administrator for the 'Databases for PostgreSQL' service, including the ability to assign other users access policies and modify deployment passwords. If a Service ID with administrator privilege becomes compromised, it may result in a compromised database. As a security best practice, it is advised to grant least-privilege access, allowing only the rights necessary to complete a task, instead of excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', then click on 'Service IDs' in the left panel.\n3. Select the reported Service ID that you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section > click the three dots on the right corner of the row for the policy which has administrator permission on the 'Databases for PostgreSQL' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(137,137) or destinationPortRanges[*] contains _Port.inRange(137,137) ))] exists```
Azure Network Security Group allows all traffic on NetBIOS (UDP Port 137) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on NetBIOS (UDP Port 137). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict NetBIOS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
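Where remediation is scripted, the reported rule can be flipped to Deny with the Azure SDK. A sketch assuming azure-mgmt-network; the resource group, NSG, and rule names are placeholders, and the impact should be reviewed first as noted above:

```python
# Sketch: change a reported inbound NSG rule's access from Allow to Deny.
# Assumes azure-identity and azure-mgmt-network are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the offending rule (names are placeholders)
rule = client.security_rules.get("my-rg", "my-nsg", "allow-netbios-udp-137")

# Deny instead of Allow; alternatively, narrow source_address_prefix instead
rule.access = "Deny"

client.security_rules.begin_create_or_update(
    "my-rg", "my-nsg", "allow-netbios-udp-137", rule
).result()
```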
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled and (properties.networkAcls.defaultAction does not exist or properties.networkAcls.defaultAction equal ignore case Allow)```
Azure Cognitive Services account configured with public network access This policy identifies Azure Cognitive Services accounts configured with public network access. Overly permissive public network access allows access to the resource through the internet using a public IP address. It is recommended to restrict access to your Cognitive Services account and endpoint to specific public internet IP address ranges so that it is accessible only to restricted entities. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict internet IP ranges on your existing Cognitive Services account, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-virtual-networks?tabs=portal.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.allowCrossTenantReplication exists and properties.allowCrossTenantReplication is true```
Azure Storage account with cross tenant replication enabled This policy identifies Azure Storage accounts that are enabled with cross tenant replication. Azure Storage account cross tenant replication allows data to be replicated across multiple Azure tenants. Though this feature is beneficial for data availability, it also poses a significant security risk if not properly managed. Possible risks include unauthorized access to data, data leaks, and compliance violations. Disabling cross tenant replication reduces the risk of unauthorized data access and prevents the accidental sharing of sensitive information. As a best practice, it is recommended to disable cross tenant replication on your storage accounts. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage Account dashboard\n3. Click on the reported Storage Account\n4. Under 'Data management', select 'Object replication'\n5. Select 'Advanced settings'\n6. Uncheck 'Allow cross-tenant replication'\n7. Click on 'OK'.
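A sketch of the same change with the Azure SDK, assuming a recent azure-mgmt-storage (the allow_cross_tenant_replication property requires a version supporting the 2021-04-01 API or later); names are placeholders:

```python
# Sketch: disable cross-tenant replication on a storage account.
# Assumes azure-identity and a recent azure-mgmt-storage are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.update(
    "my-resource-group",    # placeholder
    "mystorageaccount",     # placeholder
    StorageAccountUpdateParameters(allow_cross_tenant_replication=False),
)
```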
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case access and roles[?any( role_id is member of (crn:v1:bluemix:public:iam::::role:Administrator,crn:v1:bluemix:public:iam::::role:Editor,crn:v1:bluemix:public:iam::::role:Viewer ) )] exists and resources[?any( attributes[?any( value equal ignore case support and operator is member of (stringEquals, stringMatch))] exists)] exists and subjects[?any( attributes[?any( value contains AccessGroupId)] exists )] exists as X; count(X) less than 1```
IBM Cloud Support Access Group to manage incidents has not been created This policy identifies IBM Cloud accounts with no access group to manage support incidents. Support cases are used to raise issues with IBM Cloud. Users with access to the IBM Cloud Support Center can create and/or manage support tickets based on their IAM role. Support Center access should be managed and assigned using Access Groups. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. In the IBM Cloud console, under the 'Manage' dropdown click on 'Access (IAM)', and then select Access Groups.\n2. Select 'Create Access Group'.\n3. Give the Access Group a descriptive name, for example, Support Center Viewers or Support Center Admins.\n4. Optionally, provide a brief description.\n5. Click 'Create'.\n6. Once the Access Group is created, click on the 'Access' tab.\n7. Click 'Assign Access'. Under the 'Service' section, search for 'Support Center' and select it.\n8. Under 'Resources', select All Resources.\n9. Select the Support Center role(s) higher than Viewer.\n10. Click 'Add'.\n11. Click 'Assign'.\n12. Click on the 'Users' tab.\n13. Click 'Add users'.\n14. Select users from the list and click 'Add to group'.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(25,25) or destinationPortRanges[*] contains _Port.inRange(25,25) ))] exists```
Azure Network Security Group allows all traffic on SMTP (TCP Port 25) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on SMTP (TCP Port 25). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SMTP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
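Since the remediation mirrors the NetBIOS rule above, here instead is a hypothetical audit sketch that flags NSG rules matching this policy's condition. It assumes azure-mgmt-network and, as a simplification, only does exact port matching ('25' or '*') rather than the full range matching the RQL performs:

```python
# Hypothetical audit sketch: flag NSG rules allowing inbound SMTP (25)
# from any source. Subscription ID is a placeholder; exact-match ports only.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

ANY_SOURCE = {"Internet", "*", "0.0.0.0/0", "::/0"}

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for nsg in client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        ports = [rule.destination_port_range] + (rule.destination_port_ranges or [])
        if (
            rule.access == "Allow"
            and rule.direction == "Inbound"
            and rule.source_address_prefix in ANY_SOURCE
            and rule.protocol in ("Tcp", "*")
            and any(p in ("25", "*") for p in ports if p)
        ):
            print(f"{nsg.name}: rule '{rule.name}' exposes SMTP (TCP 25)")
```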
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy ajtmu This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-storage-account-queue-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.id contains $.Y.properties.storageAccountId)'; show X;```
Azure Storage Logging is not Enabled for Queue Service for Read Write and Delete requests This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'oci-networking-subnet' as X; config from cloud.resource where api.name = 'oci-logging-logs' AND json.rule = lifecycleState equals ACTIVE and isEnabled is true and configuration.source.service contains flowlogs as Y; filter 'not ($.X.id contains $.Y.configuration.source.resource)'; show X;```
OCI VCN subnet flow logging is disabled This policy identifies Virtual Cloud Network (VCN) subnets that have flow logs disabled. Enabling VCN flow logs lets you monitor traffic flowing within your virtual network and detect anomalous traffic. Without flow logs turned on, it is not possible to get any visibility into network traffic. It is recommended to enable a VCN flow log on each of your VCN subnets. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a VCN flow log for the reported subnet, follow the below URL:\nhttps://docs.oracle.com/en-us/iaas/Content/Network/Tasks/vcn-flow-logs-enable.htm#vcn-flow-logs-enable.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and (engine does not contain aurora and engine does not contain sqlserver and engine does not contain docdb) and (multiAZ is false or multiAZ does not exist)```
AWS RDS instance with Multi-Availability Zone disabled This policy identifies RDS instances which have Multi-Availability Zone (Multi-AZ) disabled. When an RDS DB instance is enabled with Multi-AZ, RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone. These Multi-AZ deployments improve availability by failing over to the standby instance in case of network connectivity loss or loss of availability in the primary's Availability Zone, making them the best fit for production database workloads. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS console\n4. Choose Instances, and then select the reported DB instance\n5. Click on 'Modify'\n6. In 'Availability & durability' section for the 'Multi-AZ Deployment', select 'Create a standby instance'\n7. Click on 'Continue'\n8. Under 'Scheduling of modifications' choose 'When to apply modifications'\n9. On the confirmation page, review the changes and click on 'Modify DB Instance' to save your changes.
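Steps 5-9 can also be performed with the AWS SDK. A minimal boto3 sketch with a placeholder instance identifier:

```python
# Minimal boto3 sketch of the remediation: enable Multi-AZ on an RDS instance.
# The instance identifier is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    MultiAZ=True,
    ApplyImmediately=False,  # or True; False applies at the next maintenance window
)
```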
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'enableKubernetesAlpha is true'```
GCP Kubernetes Engine Clusters have Alpha cluster feature enabled This policy identifies GCP Kubernetes Engine Clusters which have the alpha cluster feature enabled. It is recommended not to use alpha clusters or alpha features for production workloads. Alpha clusters expire after 30 days, do not receive security updates, and are not covered by the Kubernetes Engine SLA. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The GCP Kubernetes Engine Cluster alpha feature cannot be disabled once the cluster is created. So to resolve this alert, create a new cluster with the alpha feature disabled, then migrate all required cluster data from the reported cluster to this newly created cluster and delete the reported Kubernetes engine cluster.\n\nTo create a new Kubernetes engine cluster with the alpha feature disabled, perform the following: \n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on CREATE CLUSTER button\n5. Set new cluster parameters as per your requirement and make sure 'Enable Kubernetes alpha features in this cluster' is unchecked.\n6. Click on Save\n\nTo delete the reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on the reported Kubernetes cluster\n5. Click on the DELETE button\n6. On 'Delete a cluster' popup dialog, click on DELETE to confirm the deletion of the cluster.
```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-compute-firewall-rules-list' AND json.rule = 'disabled is false and (name equals default-allow-ssh or name equals default-allow-icmp or name equals default-allow-internal or name equals default-allow-rdp) and (deleted is false) and (sourceRanges[*] contains 0.0.0.0/0 or sourceRanges[*] contains ::/0)'```
GCP Default Firewall rule is overly permissive (except http and https) This policy identifies overly permissive default Firewall rules (default-allow-ssh, default-allow-icmp, default-allow-internal, default-allow-rdp). The default Firewall rules apply to all instances by default in the absence of specific custom rules with higher priority. It is a safe practice not to keep these rules in the default Firewall. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to 'VPC network' under service 'NETWORKING'\n3. Click on section 'Firewall' on left panel\n4. To find the default rules, apply filter 'Name : default-'\n5. Select all the rules which start with 'default-' (except http, https) and click on the 'DELETE' icon\n6. On the pop-up window, click on 'DELETE'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = kind starts with app and (identity.type does not exist or (identity.type exists and identity.type does not contain SystemAssigned and identity.type does not contain UserAssigned))```
Azure App Service Web app doesn't have a Managed Service Identity This policy identifies Azure App Services that are not configured with a managed service identity. Managed Service Identity in App Service makes the app more secure by eliminating secrets from the app, such as credentials in the connection strings. When registered with Azure Active Directory, the app will connect to other Azure services securely without the need for usernames and passwords. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a managed service identity on your reported App Service, follow the below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = 'keyMetadata.origin contains EXTERNAL and keyMetadata.keyManager contains CUSTOMER and keyMetadata.enabled is true and (_DateTime.ageInDays($.keyMetadata.validTo) > -30)'```
AWS KMS customer managed external key expiring in 30 days or less This policy identifies KMS customer managed external keys which are expiring in 30 days or less. As a best practice, it is recommended to reimport the same key material and specify a new expiration date. If the key material expires, AWS KMS deletes the key material and the customer managed external key becomes unusable. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS) Dashboard\n4. Click on Customer managed keys (Left Panel)\n5. Click on the reported KMS Customer managed key\n6. Under 'Key material' section, delete the existing key material before you reimport the key material by clicking on 'Delete key material'\n7. Click on 'Upload key material'\n8. Under 'Encrypted key material and import token' section, reimport the same encrypted key material and import token\n9. Under 'Expiration option', select 'Key material expires' and choose a new expiration date in the 'Key material expires at' date box\n10. Click on 'Upload key material' button\nNOTE: Deleting key material makes all data encrypted under the customer master key (CMK) unrecoverable unless you later import the same key material into the CMK. The CMK is not affected by this operation.
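A hypothetical boto3 audit sketch that reproduces this check, listing customer managed external keys whose imported key material expires within 30 days:

```python
# Hypothetical audit sketch: find customer managed EXTERNAL keys whose
# imported key material expires within 30 days.
from datetime import datetime, timedelta, timezone
import boto3

kms = boto3.client("kms")
cutoff = datetime.now(timezone.utc) + timedelta(days=30)

for page in kms.get_paginator("list_keys").paginate():
    for key in page["Keys"]:
        meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        if (
            meta.get("Origin") == "EXTERNAL"
            and meta.get("KeyManager") == "CUSTOMER"
            and meta.get("Enabled")
            and meta.get("ValidTo")         # present only when material can expire
            and meta["ValidTo"] <= cutoff
        ):
            print(f"{meta['KeyId']} key material expires at {meta['ValidTo']}")
```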
```config from cloud.resource where api.name = 'aws-rds-db-cluster' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '($.X.storageEncrypted is true) and ($.X.kmsKeyId equals $.Y.key.keyArn) and ($.Y.keyMetadata.keyManager does not contain CUSTOMER)' ; show X;```
AWS RDS DB cluster is encrypted using default KMS key instead of CMK This policy identifies RDS DB (Relational Database Service Database) clusters which are encrypted using the default KMS key instead of a CMK (Customer Master Key). As a security best practice, a CMK should be used instead of the default KMS key for encryption to gain the ability to rotate the key according to your own policies, delete the key, and control access to the key via KMS policies and IAM policies. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: RDS DB clusters can be encrypted only while creating the database cluster. You can't convert an unencrypted DB cluster to an encrypted one. However, you can restore an unencrypted Aurora DB cluster snapshot to an encrypted Aurora DB cluster. To do this, specify a KMS encryption key when you restore from the unencrypted DB cluster snapshot.\n\nStep 1: To create a 'Snapshot' of the unencrypted DB cluster, see\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CreateSnapshotCluster.html\nNOTE: As you can't restore from a DB cluster snapshot to an existing DB cluster, a new DB cluster is created when you restore. Proceed once the Snapshot status is 'Available'.\n\nStep 2: Follow the below link to restore the Cluster from a DB Cluster Snapshot,\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_RestoreFromSnapshot.html\n\nOnce the DB cluster is restored and verified, follow the below steps to delete the reported DB cluster,\n1. Log in to the AWS Management Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'RDS' dashboard from 'Services' dropdown\n4. In the navigation pane, choose 'Databases'\n5. In the list of DB instances, choose a writer instance for the DB cluster\n6. Choose 'Actions', and then choose 'Delete'\nFYI:\n1. While deleting an RDS DB cluster, the customer has to disable 'Enable deletion protection', otherwise the instance cannot be deleted\n2. While deleting an RDS DB instance, AWS will ask the end user to take a final snapshot\n3. If an RDS DB cluster has a writer role instance, then the user has to delete the writer instance to delete the main cluster (the Delete option won't be enabled for the main RDS DB cluster).
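A hypothetical boto3 audit sketch mirroring this policy's join between RDS clusters and KMS key metadata:

```python
# Hypothetical audit sketch: report encrypted RDS DB clusters whose KMS key
# is AWS-managed rather than a customer managed key (CMK).
import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

for page in rds.get_paginator("describe_db_clusters").paginate():
    for cluster in page["DBClusters"]:
        if not cluster.get("StorageEncrypted"):
            continue  # unencrypted clusters are out of scope for this check
        meta = kms.describe_key(KeyId=cluster["KmsKeyId"])["KeyMetadata"]
        if meta["KeyManager"] != "CUSTOMER":
            print(
                f"{cluster['DBClusterIdentifier']} is encrypted with "
                f"AWS-managed key {meta['KeyId']}"
            )
```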
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = firewallRules.value[*].properties.startIpAddress equals "0.0.0.0" or firewallRules.value[*].properties.endIpAddress equals "0.0.0.0"```
EIP-CSE-IACOHP-AzurePostgreSQL-NetworkAccessibility-eca1500-51 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = hostedZone.config.privateZone is false as X; config from cloud.resource where api.name = 'aws-route53-query-logging-config' as Y; filter ' not ($.X.hostedZone.id equals $.Y.HostedZoneId) ' ; show X;```
AWS Route53 public Hosted Zone query logging is not enabled This policy identifies AWS Route53 public hosted zones for which DNS query logging is not enabled. Enabling DNS query logging for an AWS Route 53 hosted zone enhances DNS security and compliance by providing visibility into DNS queries. When enabled, Route 53 sends these log files to Amazon CloudWatch Logs. Disabling DNS query logging for AWS Route 53 limits visibility into DNS traffic, hampering anomaly detection, compliance efforts, and effective incident response. It is recommended to enable logging for all public hosted zones to enhance visibility and meet compliance requirements. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure logging for DNS queries for the Hosted zone, perform the following actions:\n\n1. Sign in to the AWS Management Console and open the Route 53 console\n2. In the navigation pane, choose 'Hosted zones'\n3. Choose the hosted zone that is reported\n4. In the Hosted zone details pane, choose 'Configure query logging'\n5. Choose an existing log group or create a new log group from the 'Log group' section drop-down\n6. Choose 'Permissions - optional' to see a table that shows whether the resource policy matches the CloudWatch log group, and whether Route 53 has permission to publish logs to CloudWatch\n7. Choose 'Create'.
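The same configuration can be created with the AWS SDK. A minimal boto3 sketch; the hosted zone ID and log group ARN are placeholders, and the log group must exist in us-east-1 with a resource policy allowing Route 53 to publish to it:

```python
# Minimal boto3 sketch of the remediation: attach a query logging config to a
# public hosted zone. Zone ID and log group ARN are placeholders; the log
# group must be in us-east-1 and grant Route 53 permission to publish.
import boto3

route53 = boto3.client("route53")
route53.create_query_logging_config(
    HostedZoneId="Z1D633PJN98FT9",  # placeholder hosted zone ID
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:111122223333:log-group:/aws/route53/example.com"
    ),
)
```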
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(53,53) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
GCP Firewall rule allows all traffic on DNS port (53) This policy identifies GCP Firewall rules which allow all inbound traffic on DNS port (53). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access on DNS port (53) be restricted to specific IP addresses. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the reported Firewall rule indeed needs to be restricted, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IPs\n7. Click on 'SAVE'.
```config from cloud.resource where api.name = 'aws-account-management-alternate-contact' group by account as X; filter ' AlternateContactType is not member of ("SECURITY") ' ;```
mnm test This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-acm-pca-certificate-authority' AND json.rule = Type equal ignore case ROOT and Status equal ignore case active```
AWS Private CA root certificate authority is enabled This policy identifies enabled AWS Private CA root certificate authorities. AWS Private CA enables creating a root CA to issue private certificates for securing internal resources like servers, applications, users, devices, and containers. The root CA should be disabled for daily tasks to minimize risk, as it should only issue certificates for intermediate CAs, allowing it to remain secure while intermediate CAs handle the issuance of end-entity certificates. It is recommended to disable the AWS Private CA root certificate authority to keep it secure. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the status of the Private CA root certificate authority:\n\n1. Sign in to your AWS account and open the AWS Private CA console\n2. On the 'Private certificate authorities' page, choose the reported private CA\n3. On the 'Actions' menu, choose 'Disable' to disable the private CA.
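A minimal boto3 sketch of the same action; the CA ARN is a placeholder:

```python
# Minimal boto3 sketch of the remediation: set the root CA to DISABLED so it
# cannot issue certificates until explicitly re-enabled. ARN is a placeholder.
import boto3

pca = boto3.client("acm-pca")
pca.update_certificate_authority(
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/"
        "11111111-2222-3333-4444-555555555555"
    ),
    Status="DISABLED",
)
```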
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'MFADevice is empty'```
Alibaba Cloud MFA is disabled for RAM user This policy identifies Resource Access Management (RAM) users for whom Multi Factor Authentication (MFA) is disabled. As a best practice, enable MFA to add an extra layer of protection for increased security of your Alibaba Cloud account settings and resources. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Select the reported user\n5. In the 'Authentication' tab, Click on 'Modify Logon Settings'\n6. Choose the 'Required' radio button for 'Enable MFA' \n7. Click on 'OK'\n8. Click on 'Close'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-group' AND json.rule = mail contains 42```
dnd_test_create_hyperion_policy_multi_cloud_child_policies_ss_finding_2 Description-4ee38fa0-9684-4c83-b917-035b88e2e243 This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = versioning equals Disabled```
OCI Object Storage Bucket has object Versioning disabled This policy identifies OCI Object Storage buckets that are not configured with Object Versioning. It is recommended that Object Storage buckets be configured with Object Versioning to minimize data loss due to inadvertent deletes by an authorized user or malicious deletes. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Next to Object Versioning, click Edit.\n5. In the dialog box, click 'Enable Versioning' (to enable).
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = not (pricings[?any(properties.extensions[?any(name equal ignore case AgentlessVmScanning AND isEnabled is true)] exists AND properties.pricingTier equal ignore case Standard )] exists)```
Azure Microsoft Defender for Cloud set to Off for Agentless scanning for machines This policy identifies Azure subscriptions in which Microsoft Defender for Cloud's Agentless scanning for machines is set to Off. Agentless scanning uses disk snapshots to detect installed software, vulnerabilities, and plain text secrets without needing agents on each machine. When disabled, your environment risks exposure to software vulnerabilities and unauthorized software, diminishing visibility into security issues. Enabling Agentless scanning improves security by identifying vulnerabilities and sensitive data with minimal performance impact, streamlining management and ensuring strong threat detection and compliance. As a security best practice, it is recommended to enable Agentless scanning for machines in Azure Microsoft Defender for Cloud. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Microsoft Defender for Cloud'\n3. Under 'Management', select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click on 'Settings & monitoring' at the top\n7. In the table, find 'Agentless scanning for machines' and select 'On' under Plan\n8. Click 'Continue' in the top left\n9. Click 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = serverAdmins does not exist or serverAdmins[*] size equals 0 or (serverAdmins[*].properties.administratorType exists and serverAdmins[*].properties.administratorType does not equal ActiveDirectory and serverAdmins[*].properties.login is not empty)```
Dikla test This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.