```config from cloud.resource where api.name = 'gcloud-compute-external-backend-service' AND json.rule = iap does not exist or iap.enabled equals "false"```
GCP Identity-Aware Proxy (IAP) not enabled for External HTTP(s) Load Balancer This policy identifies GCP External HTTP(S) Load Balancers for which Identity-Aware Proxy (IAP) is disabled. IAP is used to enforce access control policies for applications and resources. It works with signed headers or the App Engine standard environment Users API to secure connections to External HTTP(S) Load Balancers. It is recommended to enable Identity-Aware Proxy for securing External HTTP(S) Load Balancers. Reference: https://cloud.google.com/iap/docs/concepts-overview This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Follow the URL below to enable IAP on the external HTTP(S) load balancer:\n\nhttps://cloud.google.com/iap/docs/load-balancer-howto#enable-iap.
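For reference, a minimal sketch of enabling IAP on a backend service through the Compute Engine API using the google-api-python-client; the project, backend service, and OAuth client values are placeholders and must be replaced with real ones.

```python
# Enable IAP on an external backend service (placeholder names; IAP also requires an OAuth client).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses Application Default Credentials

operation = compute.backendServices().patch(
    project="my-project",                      # assumed project ID
    backendService="my-external-backend-svc",  # assumed backend service name
    body={
        "iap": {
            "enabled": True,
            "oauth2ClientId": "OAUTH_CLIENT_ID",          # assumed OAuth client ID
            "oauth2ClientSecret": "OAUTH_CLIENT_SECRET",  # assumed OAuth client secret
        }
    },
).execute()
print(operation["name"])  # returns a Compute Engine operation to poll
```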
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = "not ( diagnosticSettings.value[*].properties.logs[*].enabled any equal true and diagnosticSettings.value[*].properties.logs[*].enabled size greater than 0 )"```
Azure Key Vault audit logging is disabled This policy identifies Azure Key Vault instances for which audit logging is disabled. As a best practice, enable audit event logging for Key Vault instances to monitor how and when your key vaults are accessed, and by whom. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Select 'Key vaults'\n3. Select the key vault instance to modify\n4. Select 'Diagnostic settings' under 'Monitoring'\n5. Click on '+Add diagnostic setting'\n6. Specify a 'Diagnostic settings name',\n7. Under 'Category details' section, Under Log, select 'AuditEvent'\n8. Under section 'Destination details',\na. If you select 'Send to Log Analytics workspace', set the 'Subscription' and 'Log Analytics workspace'\nb. If you select 'Archive to storage account', set the 'Subscription', 'Storage account' and 'Retention (days)'\nc. If you select 'Stream to an event hub', set the 'Subscription', 'Event hub namespace', 'Event hub name' and 'Event hub policy name'\nd. If you select 'Send to partner solution', set the 'Subscription' and 'Destination'\n9. Click on 'Save'.
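As an illustration, a minimal sketch of adding an 'AuditEvent' diagnostic setting with the azure-mgmt-monitor SDK, assuming a Log Analytics workspace as the destination; all subscription, resource group, vault, and workspace names below are placeholders.

```python
# Send Key Vault AuditEvent logs to a Log Analytics workspace (placeholder IDs).
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # assumed subscription
vault_id = ("/subscriptions/" + subscription_id +
            "/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-vault")  # assumed
workspace_id = ("/subscriptions/" + subscription_id +
                "/resourceGroups/my-rg/providers/Microsoft.OperationalInsights"
                "/workspaces/my-workspace")  # assumed

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
client.diagnostic_settings.create_or_update(
    resource_uri=vault_id,
    name="keyvault-audit",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category": "AuditEvent", "enabled": True}],
    },
)
```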
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-aiplatform-endpoint' AND json.rule = encryptionSpec.kmsKeyName does not exist```
GCP Vertex AI Endpoint not encrypted with CMEK This policy identifies GCP Vertex AI Endpoints that are not encrypted with CMEK. Customer Managed Encryption Keys (CMEK) for a Vertex AI Endpoint provide control over the encryption of data at rest. Encrypting GCP Vertex AI Endpoints with CMEK enhances security by giving you full control over encryption keys. This ensures data protection, especially for sensitive models and predictions. CMEK allows key rotation and revocation, aligning with compliance requirements and offering better data privacy management. It is recommended to use CMEK for Vertex AI Endpoint encryption. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Vertex AI Endpoint encryption cannot be changed after creation. To make use of CMEK, a new Endpoint must be created.\n\nTo create a new Vertex AI Endpoint, please follow the steps below:\n1. Log in to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'DEPLOY AND USE', go to 'Online prediction'\n4. Select the 'ENDPOINTS' tab\n5. Click 'CREATE'\n6. Configure the endpoint name and access as required\n7. Click on 'ADVANCED OPTIONS', and then select 'Cloud KMS key'\n8. Select the appropriate 'Key type' and then select the required CMEK\n9. Click 'CONTINUE'\n10. Configure the Model settings as required, click 'CONTINUE'\n11. Configure the Model monitoring as required, click 'CONTINUE'\n12. Click 'CREATE'\n\nTo delete an existing Vertex AI Endpoint, please follow the steps below:\n\n1. Log in to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'DEPLOY AND USE', go to 'Online prediction'\n4. Select the 'ENDPOINTS' tab\n5. Click on the alerting endpoint\n6. Click on the 'View More' button (three dots) for any model from the list\n7. Click 'Undeploy model from endpoint'\n8. In the Undeploy model from endpoint dialog, click 'Undeploy'\n9. Repeat steps 6-8 for all models listed\n10. Go back to the 'Online prediction' page\n11. Select the alerting endpoint checkbox\n12. Click 'DELETE'.
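A minimal sketch of creating a CMEK-encrypted endpoint with the google-cloud-aiplatform SDK; the project, region, and key resource name are placeholders.

```python
# Create a Vertex AI Endpoint encrypted with a customer managed key (placeholder names).
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed project/region

endpoint = aiplatform.Endpoint.create(
    display_name="cmek-endpoint",
    encryption_spec_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/my-ring/cryptoKeys/my-key"  # assumed CMEK resource name
    ),
)
print(endpoint.resource_name)
```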
```config from cloud.resource where api.name = 'aws-glue-job' as X; config from cloud.resource where api.name = 'aws-glue-security-configuration' as Y; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Z; filter '$.X.SecurityConfiguration does not exist or ( $.X.SecurityConfiguration equals $.Y.name and ($.Y.encryptionConfiguration.s3Encryption[*].s3EncryptionMode does not equal "SSE-KMS" or ($.Y.encryptionConfiguration.s3Encryption[*].kmsKeyArn exists and $.Y.encryptionConfiguration.s3Encryption[*].kmsKeyArn equals $.Z.keyMetadata.arn)))' ; show X;```
AWS Glue Job not encrypted by Customer Managed Key (CMK) This policy identifies AWS Glue jobs that are encrypted using the default KMS key instead of CMK (Customer Managed Key) or using the CMK that is disabled. AWS Glue allows you to specify whether the data processed by the job should be encrypted when stored in data storage locations such as Amazon S3. To protect sensitive data from unauthorized access, users can specify CMK to get enhanced security, and control over the encryption key and also comply with any regulatory requirements. It is recommended to use a CMK to encrypt the AWS Glue job data as it provides complete control over the encrypted data. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To encrypt data processed by AWS Glue jobs, configure encryption settings within the security configuration of the Glue job. Security configurations cannot be edited from the console, so we need to create a new security configuration with the necessary settings and attach it to the existing Glue job.\n\nTo add a security configuration using the AWS Glue console,\n\n1. Sign in to the AWS Management Console: Go to the AWS Management Console at https://console.aws.amazon.com/.\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to AWS Glue: In the "Find Services" search box, type "Glue" and select "AWS Glue" from the search results.\n4. To add a security configuration using the AWS Glue console, choose 'Security Configurations' in the navigation pane.\n5. Choose 'Add security configuration'.\n6. on the Security configuration properties, Enter a unique security configuration name in the name text box.\n7. To Enable S3 encryption, select the checkbox under the 'Enable S3 encryption' section.\n8. Select the 'SSE-KMS' option in the 'Encryption mode' and choose an AWS KMS CMK key, or choose Enter a key ARN of the CMK and provide the ARN for the key that you are managing according to your business requirements.\n9. Click 'Create' to create a security configuration.\n\n\nTo add a security configuration to the existing glue job.\n\n1. Sign in to the AWS Management Console: Go to the AWS Management Console at https://console.aws.amazon.com/.\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to AWS Glue: In the "Find Services" search box, type "Glue" and select "AWS Glue" from the search results.\n4. Choose the ETL jobs in the navigation pane.\n5. select the reported job under the Your Jobs section.\n6. select the Job details tab.\n7. select the newly created security configuration from the dropdown in the 'Security configuration' section under the 'Advance properties' dropdown.\n8. Click 'Save'.\n\nTo enable the KMS CMK key, please refer to the below link.\nhttps://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html#enabling-keys-console.
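A minimal sketch of creating the SSE-KMS security configuration with boto3; the configuration name, region, and CMK ARN are placeholders, and the configuration still has to be attached to the job (for example through the console steps above).

```python
# Create a Glue security configuration that encrypts S3 output with a customer managed key.
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # assumed region

glue.create_security_configuration(
    Name="glue-sse-kms-config",  # assumed configuration name
    EncryptionConfiguration={
        "S3Encryption": [
            {
                "S3EncryptionMode": "SSE-KMS",
                "KmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # assumed CMK
            }
        ]
    },
)
```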
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-compute' AND json.rule = properties.properties.state equal ignore case running and (properties.computeType equal ignore case ComputeInstance or properties.computeType equal ignore case AmlCompute ) and properties.disableLocalAuth is false```
Azure Machine Learning compute instance with local authentication enabled This policy identifies Azure Machine Learning compute instances that are using local authentication. Disabling local authentication improves security by mandating the use of Microsoft Entra ID for authentication. Local authentication can lead to security risks and unauthorized access. Using Microsoft Entra ID ensures a more secure and compliant authentication process. As a security best practice, it is recommended to disable local authentication and use Microsoft Entra ID for authentication. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Disabling local authentication on an existing Azure Machine Learning compute instance without deleting and recreating it is not supported. The recommended approach to secure your compute instance is to set it up without local authentication from the beginning.\n\nTo create a new compute instance without local authentication:\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported compute instance is associated with\n4. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n5. A new tab will open for Azure ML Studio\n6. In the left panel, under 'Manage' section, click on the 'Compute'\n7. Click 'New' to create a new compute instance\n8. In the 'Security' tab, under the 'Enable SSH access' section, leave the option disabled to turn off local authentication\n9. Select 'Review + Create' to create the compute instance.
```config from cloud.resource where api.name = 'oci-networking-networkloadbalancer' AND json.rule = lifecycleState equal ignore case "ACTIVE" and backendSets.*.backends is empty OR backendSets.*.backends equals "[]"```
OCI Network Load Balancer not configured with backend set This policy identifies OCI Network Load Balancers that have no backend set configured. A backend set is a crucial component of a Network Load Balancer, comprising a load balancing policy, a health check policy, and a list of backend servers. Without a backend set, the Network Load Balancer lacks the necessary configuration to distribute incoming traffic and monitor the health of backend servers. As best practice, it is recommended to properly configure the backend set for the Network Load Balancer to function effectively, distribute incoming data, and maintain the reliability of backend services. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Network Load Balancers with backend sets, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingbackendsets_topic-Creating_Backend_Sets.htm#top.
```config from cloud.resource where api.name = 'aws-redshift-describe-clusters' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.encrypted is true and $.X.kmsKeyId equals $.Y.key.keyArn and $.Y.keyMetadata.keyManager contains AWS'; show X;```
AWS Redshift Cluster not encrypted using Customer Managed Key This policy identifies Redshift Clusters which are encrypted with default KMS keys rather than with customer managed keys. It is a best practice to use customer managed KMS keys to encrypt your Redshift database data. Customer-managed CMKs give you more flexibility, including the ability to create, rotate, disable, define access control for, and audit the encryption keys used to help protect your data. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption with a Customer Managed Key on your Redshift cluster, follow the steps mentioned in the URL below:\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html.
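A minimal sketch of switching an existing cluster to a customer managed key with boto3; the cluster identifier, region, and key ARN are placeholders, and changing the key triggers re-encryption, so check the operational impact first.

```python
# Re-encrypt a Redshift cluster with a customer managed KMS key (placeholder identifiers).
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # assumed region

redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",                    # assumed cluster
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # assumed CMK
)
```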
```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-storage-buckets-list' AND json.rule = logging does not exist```
GCP Storage Bucket does not have Access and Storage Logging enabled This policy identifies storage buckets that do not have Access and Storage Logging enabled. By enabling access and storage logs on target Storage buckets, it is possible to capture all events which may affect objects within target buckets. It is recommended that storage Access Logs and Storage logs are enabled for every Storage Bucket. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the steps mentioned in the below link to enable Access and Storage logs using GSUTIL or JSON API.\nReference : https://cloud.google.com/storage/docs/access-logs.
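Besides gsutil and the JSON API, a minimal sketch of enabling usage and storage logging with the google-cloud-storage client; the project and bucket names are placeholders, and the log bucket must already grant write access to the Cloud Storage logging service.

```python
# Enable access/storage logging on a bucket, writing logs to a separate log bucket.
from google.cloud import storage

client = storage.Client(project="my-project")    # assumed project
bucket = client.get_bucket("my-target-bucket")   # assumed monitored bucket

bucket.enable_logging("my-log-bucket", object_prefix="my-target-bucket")  # assumed log bucket
bucket.patch()  # persist the logging configuration
```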
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'status contains PENDING_VALIDATION'```
AWS Certificate Manager (ACM) contains certificate pending validation This policy identifies certificates in AWS Certificate Manager that are pending validation. When your Amazon ACM certificates are not validated within 72 hours after the request is made, those certificates become invalid and you will have to request new certificates, which could cause interruption to your applications or services. AWS Certificate Manager automatically renews certificates issued by the service that are in use with other AWS resources; however, it does not automatically renew certificates that are not currently in use or no longer associated with other AWS resources. So the renewal process, including validation, must be done manually before these certificates become invalid. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To validate Certificates: \n1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the Certificate Manager (ACM) service\n4. Choose the reported certificate\n5. Validate your certificate for your domain using either Email or DNS validation, depending upon your certificate validation method.\n\nOR\n\nIf the certificate is not required, you can delete that certificate. To delete invalid Certificates:\n1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the Certificate Manager (ACM) service\n4. Choose the reported certificate\n5. Under the 'Actions' drop-down click on 'Delete'\n\nNote: This alert will get auto-resolved, as the certificate becomes invalid in 72 hours. It is recommended to either delete or validate the certificate within that timeframe.
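A minimal sketch of listing certificates that are still pending validation with boto3 so they can be validated or cleaned up; the region is a placeholder and the delete call is commented out because deletion is irreversible.

```python
# List ACM certificates stuck in PENDING_VALIDATION; optionally delete unneeded ones.
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # assumed region

response = acm.list_certificates(CertificateStatuses=["PENDING_VALIDATION"])
for cert in response["CertificateSummaryList"]:
    print(cert["CertificateArn"], cert["DomainName"])
    # If the certificate is no longer required, it can be removed:
    # acm.delete_certificate(CertificateArn=cert["CertificateArn"])
```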
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'accessKeys[*] size > 0 and accessKeys[*].status any equal Active and loginProfile[*] is not empty'```
Alibaba Cloud RAM user has both console access and access keys This policy identifies Resource Access Management (RAM) users who have both console access and access keys. When a RAM user is created, the Administrator can assign either console access or access keys or both. As a best practice, it is recommended to assign console access to users and access keys for system / API applications, but not both to the same RAM user. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Click on reported user\n5. Based on the requirement and company policy, either delete the access keys or Remove Logon Settings for the reported RAM user..
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "secrets-manager" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","resourceType","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud Service ID with IAM policies provide administrative privileges for Secrets Manager service This policy identifies IBM Cloud Service IDs that have a policy with administrator role permission for the Secrets Manager service. A Service ID with admin access will be able to perform all platform tasks for Secrets Manager, including the creation, modification, and deletion of Secrets Manager service instances, as well as the assignment of access policies to other users. If a Service ID with administrative rights is compromised, sensitive data in the underlying Secrets Manager service might be exposed. As a security best practice, it is recommended to grant least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', then click on 'Service IDs' in the left panel.\n3. Select the reported Service ID that you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section > Click on the three dots on the right corner of the row for the policy which has administrator permission on the 'Secrets Manager' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Sql/servers/firewallRules/delete" as X; count(X) less than 1```
Azure Activity log alert for Delete SQL server firewall rule does not exist This policy identifies the Azure accounts in which activity log alert for Delete SQL server firewall rule does not exist. Creating an activity log alert for Delete SQL server firewall rule gives insight into SQL server firewall rule access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete server firewall rule (Microsoft.Sql/servers/firewallRules)' and Other fields you can set based on your custom settings.\n6. Click on Create.
```config from cloud.resource where api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( ( remote.cidr_block equals "0.0.0.0/0" or remote.name equals $.name ) and direction equals "inbound" )] exists as X; config from cloud.resource where api.name = 'ibm-vpc' as Y; filter ' $.X.id equals $.Y.default_security_group.id '; show X;```
IBM Cloud Default Security Group allow ingress rule from 0.0.0.0/0 This policy identifies IBM Cloud Default Security Groups which have ingress rules that allow traffic from 0.0.0.0/0. A VPC comes with a default security group whose initial configuration allows access from all members that are attached to this security group. If you do not specify a security group when you launch a Virtual Server, the Virtual Server is automatically assigned to this default security group. As a result, the Virtual Server is at risk of uncontrolled connectivity. It is recommended that the Default Security Group allow only the network ports, protocols, and services with validated business needs that are running on each system. This is applicable to ibm cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on the 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under the 'Rules' tab\n5. Click on the three dots on the right corner of a row containing a rule that has 'Source type' as 'Any' or 'Source' as the Security Group's name\n6. Click on 'Delete'.
```config from cloud.resource where cloud.account = 'Bikram-Personal-AWS Account' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = versioningConfiguration.status contains "Off" ```
bikram-test-policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.enablePurgeProtection is false```
Azure Key Vault Purge protection is not enabled This policy identifies Azure Key Vault which has Purge protection disabled. Enabling Azure Key Vault Purge protection feature prevents malicious deletion of a key vault which can lead to permanent data loss. It is recommended to enable Purge protection for Azure Key Vault which protects by enforcing a mandatory retention period for soft deleted key vaults. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Key vaults', and select the reported key vault from the list\n3. Under 'Settings' select 'Properties'\n4. For 'Purge protection' click on 'Enable Purge protection (enforce a mandatory retention period for deleted vaults and vault objects)'\n5. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter "($.Y.bucketName==$.X.s3BucketName) and ($.Y.acl.grants[*].grantee contains AllUsers or $.Y.acl.grants[*].permission contains FullControl) and ($.Y.policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Action contains s3:* or $.Y.policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Action contains s3:*)" ; show Y;```
AWS S3 Bucket Policy allows public access to CloudTrail logs This policy identifies S3 buckets storing CloudTrail logs whose bucket policy or access control list (ACL) does not prevent public access. CloudTrail logs a record of every API call made in your AWS account, and these log files are stored in an S3 bucket. It is recommended that the bucket policy or access control list (ACL) applied to the S3 bucket that stores CloudTrail logs prevents public access, because allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account's use or configuration. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to S3\n3. Choose the reported S3 bucket and click Properties\n4. In the Properties pane, click the Permissions tab.\n5. If the Edit bucket policy button is present, select it.\n6. Remove any statement having an Effect set to 'Allow' and a Principal set to '*'.\nNote: We recommend that you do not configure CloudTrail to write into an S3 bucket that resides in a different AWS account.
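A minimal sketch of blocking public access on the CloudTrail log bucket with boto3; the bucket name is a placeholder, and any overly broad bucket-policy statements still need manual review as described above.

```python
# Block public access on the CloudTrail log bucket and inspect its bucket policy.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-cloudtrail-logs-bucket",  # assumed bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Review the bucket policy for statements with Principal "*" and Effect "Allow":
policy = s3.get_bucket_policy(Bucket="my-cloudtrail-logs-bucket")
print(policy["Policy"])
```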
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```
dnd_test_add_remove_child_policy_hyperion_policy_ss_finding_1 Description-e12f27fd-c82b-4362-8105-60994fe17eec This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'oci-block-storage-boot-volume' AND json.rule = lifecycleState equal ignore case "AVAILABLE" AND kmsKeyId is member of ("null")```
OCI boot volume is not encrypted with Customer Managed Key (CMK) This policy identifies OCI boot volumes that are not encrypted with a Customer Managed Key (CMK). Encrypting boot volumes with a CMK enhances data security by providing an additional layer of protection. Effective management of encryption keys is crucial for safeguarding and accessing sensitive data. Customers should review boot volumes encrypted with Oracle service managed keys to determine if they prefer managing keys for specific volumes and implement their own key lifecycle management accordingly. As best practice, it is recommended to encrypt OCI boot volumes using a Customer Managed Key (CMK) to strengthen data security measures. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the OCI Console.\n2. Switch to the Region of the reported resource from the Region drop-down in top-right corner.\n3. Type the reported boot volume name into the Search box at the top of the Console.\n4. Click on the reported boot volume from the search results.\n5. Next to "Encryption Key", click on "Assign".\n6. Choose the Vault Compartment, Vault, Master Encryption Key Compartment and Master Encryption Key.\n7. Click Assign..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and firewallRules[?any(startIpAddress equals "0.0.0.0" and endIpAddress equals "0.0.0.0")] exists```
Copy of Azure SQL Server allow access to any Azure internal resources This policy identifies SQL Servers that are configured to allow access to any Azure internal resources. A firewall rule with both the start IP and end IP set to '0.0.0.0' represents access to the entire Azure internal network. When this setting is enabled, the SQL server will accept connections from all Azure resources, including resources from other subscriptions. It is recommended to use firewall rules or VNET rules to allow access only from specific network ranges or virtual networks. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the 'SQL servers' dashboard\n3. Click on the reported SQL server\n4. Click on 'Networking' under Security\n5. Unselect 'Allow Azure services and resources to access this server' under Exceptions if selected.\n6. Remove any firewall rule which allows access with 0.0.0.0 as both startIpAddress and endIpAddress, if any.\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = logging.clusterLogging[*].types[*] all empty or logging.clusterLogging[*].enabled is false```
AWS EKS control plane logging disabled Amazon EKS control plane logging provides audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account. These logs make it easy for you to secure and run your clusters. You can select the exact log types you need, and logs are sent as log streams to a group for each Amazon EKS cluster in CloudWatch. This policy generates an alert if control plane logging is disabled. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable control plane logs:\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Logging, choose 'Manage logging'\n5. For each individual log type, choose Enabled\n6. Click on 'Save changes'.
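A minimal sketch of turning on all control plane log types with boto3; the cluster name and region are placeholders.

```python
# Enable every EKS control plane log type; logs are delivered to CloudWatch Logs.
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # assumed region

eks.update_cluster_config(
    name="my-cluster",  # assumed cluster name
    logging={
        "clusterLogging": [
            {
                "types": ["api", "audit", "authenticator", "controllerManager", "scheduler"],
                "enabled": True,
            }
        ]
    },
)
```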
```config from cloud.resource where api.name = 'aws-lambda-list-functions' as X; config from cloud.resource where api.name = 'aws-iam-list-roles' as Y; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action equals "*" or Action contains :* or Action[*] contains :*) and (Resource equals "*" or Resource[*] anyStartWith "*") and Condition does not exist)] exists as Z; filter '$.X.role equals $.Y.role.arn and $.Y.attachedPolicies[*].policyName equals $.Z.policyName'; show Z;```
AWS IAM policy attached to AWS Lambda execution role is overly permissive This policy identifies Lambda Function execution roles that have an overly permissive IAM policy attached to them. A Lambda function with an overly permissive policy, if compromised, could lead to lateral movement in the account or privilege escalation. It is highly recommended to use a least privileged access policy to protect Lambda Functions from unauthorized access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Refer to the following URL to give fine-grained and restrictive permissions to the IAM Policy:\nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html#edit-managed-policy-console.
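A minimal sketch of replacing a wildcard managed policy with a scoped statement using boto3; the policy ARN, actions, and resource ARN are placeholders chosen purely for illustration.

```python
# Publish a new, scoped default version for the Lambda execution role's managed policy.
import json
import boto3

iam = boto3.client("iam")

scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],  # assumed actions the function needs
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/my-table",  # assumed resource
        }
    ],
}

iam.create_policy_version(
    PolicyArn="arn:aws:iam::111122223333:policy/lambda-exec-policy",  # assumed managed policy
    PolicyDocument=json.dumps(scoped_policy),
    SetAsDefault=True,
)
```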
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isNumericCharactersRequired isFalse'```
OCI IAM password policy for local (non-federated) users does not have a number This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a number in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 NUMERIC CHARACTER.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL..
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-rds-describe-db-instances' AND json.rule = 'publiclyAccessible is true'```
AWS RDS database instance is publicly accessible This policy identifies RDS database instances which are publicly accessible. DB instances should not be publicly accessible to protect the integrity of data. Public accessibility of DB instances can be modified by turning on or off the Public accessibility parameter. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the 'RDS' service.\n4. Select the RDS instance reported in the alert, Click on 'Modify' \n5. Under 'Network and Security', update the value of 'public accessibility' to 'No' and Click on 'Continue'\n6. Select required 'Scheduling of modifications' option and click on 'Modify DB Instance'.
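A minimal sketch of disabling public accessibility with boto3; the instance identifier and region are placeholders.

```python
# Turn off public accessibility for an RDS instance.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # assumed instance identifier
    PubliclyAccessible=False,
    ApplyImmediately=True,
)
```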
```config from cloud.resource where cloud.type = 'aws' and api.name= 'aws-rds-db-cluster-snapshots' AND json.rule = dbclusterSnapshotAttributes[?any( attributeName equals restore and attributeValues[*] contains "all" )] exists```
AWS RDS Cluster snapshot is accessible to public This policy identifies AWS RDS Cluster snapshots which are accessible to the public. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up and manage databases. If RDS Cluster snapshots are inadvertently shared publicly, any unauthorized user with AWS console access can gain access to the snapshots and to the sensitive data they contain. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'RDS' service\n4. Click on 'Snapshots'\n5. Under the 'Manual' tab select the reported RDS Cluster snapshot\n6. Click on 'Actions' and select 'Share snapshot'\n7. Under 'DB snapshot visibility' select 'Private'\n8. Click on 'Save'.
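A minimal sketch of removing public sharing from a cluster snapshot with boto3; the snapshot identifier and region are placeholders.

```python
# Remove the "all" (public) entry from the snapshot's restore attribute.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-cluster-snapshot",  # assumed snapshot identifier
    AttributeName="restore",
    ValuesToRemove=["all"],
)
```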
```config from cloud.resource where api.name = 'aws-ec2-ebs-encryption' AND cloud.region IN ( 'AWS Ohio' ) AND json.rule = ebsEncryptionByDefault is false```
Roman - AWS EBS volume region with encryption is disabled - Revised for This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-method' AND json.rule = authorizationType contains NONE```
AWS API gateway request authorisation is not set This policy identifies AWS API Gateways of protocol type REST for which the request authorisation is not set. The method request for API gateways takes the client input that is passed to the back end through the integration request. It is recommended to add authorization type to each of the method to add a layer of protection. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS management console\n2. Navigate to 'API Gateway' service\n3. Select the region for which the API gateway is reported.\n4. Find the alerted API by the API gateway ID which is the first part of reported resource and click on it\n5. Navigate to the reported method\n6. Click on the clickable link of 'Method Request'\n7. Under section 'Settings', click on the pencil symbol for 'Authorization' field\n8. From the dropdown, Select the type of Authorization as per the requirement \n9. Click on the tick symbol next to it to save the changes.
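A minimal sketch of setting an authorization type on a REST API method with boto3; the API ID, resource ID, method, and the choice of AWS_IAM are placeholders, and the API must be redeployed for the change to take effect.

```python
# Require IAM authorization on a previously open API Gateway method.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")  # assumed region

apigw.update_method(
    restApiId="a1b2c3d4e5",  # assumed REST API ID
    resourceId="abc123",     # assumed resource ID
    httpMethod="GET",        # assumed method
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "AWS_IAM"}
    ],
)
```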
```config from cloud.resource where api.name = 'azure-dns-recordsets' AND json.rule = type contains CNAME and properties.CNAMERecord.cname contains "azurewebsites.net" as X; config from cloud.resource where api.name = 'azure-app-service' as Y; filter 'not ($.Y.properties.hostNames contains $.X.properties.CNAMERecord.cname) '; show X;```
Azure DNS Zone having dangling DNS Record vulnerable to subdomain takeover associated with Web App Service This policy identifies DNS records within an Azure DNS zone that point to Azure Web App Services that no longer exist. A dangling DNS attack happens when a DNS record points to a cloud resource that has been deleted or is inactive, making the subdomain vulnerable to takeover. An attacker can exploit this by creating a new resource with the same name and taking control of the subdomain to serve malicious content. This allows attackers to host harmful content under your subdomain, which could lead to phishing attacks, data breaches, and damage to your reputation. The risk arises because the DNS record still references a non-existent resource, which unauthorized individuals can re-associate with their own resources. As a security best practice, it is recommended to routinely audit DNS zones and remove or update DNS records pointing to non-existing Web App Services. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal and search for 'DNS zones'\n2. Select 'DNS zones' from the search results\n3. Select the DNS zone associated with the reported DNS record\n4. On the left-hand menu, under 'DNS Management,' select 'Recordsets'\n5. Locate and select the reported DNS record\n6. Update or remove the DNS Record if no longer necessary.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING and (masterAuth.clientKey exists or masterAuth.clientCertificate exists)```
GCP Kubernetes Engine Cluster Client Certificate is not disabled This policy identifies Kubernetes Engine clusters that have enabled Client Certificate authentication. A client certificate is a base64-encoded public certificate used by clients to authenticate to the cluster endpoint. GKE manages authentication via gcloud using the OpenID Connect token method, setting up the Kubernetes configuration, getting an access token, and keeping it up to date. So it is recommended not to enable Client Certificate authentication, to avoid additional management overhead of key management and rotation. Note: For GKE Autopilot clusters, legacy authentication methods cannot be used. Basic authentication is deprecated and has been removed in GKE 1.19 and later. Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication#legacy-auth This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP Kubernetes Clusters Client Certificate can be disabled only at the time of the creation of clusters. So to fix this alert, create a new cluster with Client Certificate disabled and then migrate all required cluster data or containers from the reported cluster to this new cluster.\n\nTo create the cluster with Client Certificate disabled, perform the following steps:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Click on 'Clusters' (Left Panel)\n4. On page 'Kubernetes clusters', click on 'CREATE'\n5. Select the type of cluster by clicking on the 'CONFIGURE' button\n6. Select ‘Security’ tab (Left Panel)\n7. Under the 'Legacy security options' section, ensure 'Issue a client certificate' is not set\n8. Provide all required cluster data or containers from the reported cluster to this new cluster\n9. Click on 'CREATE' to create a new cluster\n10. Once the cluster is created, delete the alerted cluster to resolve the alert.
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-file-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;```
Azure Storage account diagnostic setting for file is disabled This policy identifies Azure Storage account files that have diagnostic logging disabled. By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account files. These logs provide valuable insights into the operations, performance, and security of the storage account files. As a best practice, it is recommended to enable diagnostic logs on all storage account files. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the file resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'.
```config from cloud.resource where cloud.accountgroup = 'Flowlog-sol' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains "sol-test" ```
Copy of Sol-test config policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = objectEventsEnabled is false```
OCI Object Storage bucket does not emit object events This policy identifies OCI Object Storage buckets for which object event emission is disabled. Monitoring and alerting on object events will help in identifying changes to bucket objects. It is recommended that buckets be enabled to emit object events. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Next to Emit Object Events, click Edit.\n5. In the dialog box, select EMIT OBJECT EVENTS (to enable).\n6. Click Save Changes.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals does not equal Microsoft.Network/publicIPAddresses/delete and properties.condition.allOf[?(@.field=='category')].['equals'] contains Administrative" as X; count(X) less than 1```
Azure Activity Log Alert does not exist for Delete Public IP Address rule This policy identifies the Azure accounts in which an activity log alert for the Delete Public IP Address rule does not exist. Creating an activity log alert for deleting public IP addresses gives insight into network address changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' as Y; filter '$.X.description.securityGroups[*] contains $.Y.groupId and $.Y.ipPermissions[*] is empty'; show X;```
AWS Elastic Load Balancer (ELB) has security group with no inbound rules This policy identifies Elastic Load Balancers (ELB) which have a security group with no inbound rules. A security group with no inbound rules denies all incoming requests, so an ELB whose security group has no inbound permissions cannot receive any traffic; in other words, the ELB is useless without inbound permissions. ELB security groups should therefore have at least one inbound rule. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on the security group, it will open Security Group properties in a new tab in your browser\n6. Click on the 'Inbound Rules'\n7. If there are no rules, click on 'Edit rules', add an inbound rule according to your ELB functional requirement\n8. Click on 'Save'.
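A minimal sketch of adding a scoped inbound rule to the load balancer's security group with boto3; the group ID, port, and CIDR are placeholders to adapt to the ELB's actual traffic.

```python
# Add a single scoped inbound rule to the ELB's security group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # assumed security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "known clients"}],  # assumed CIDR
        }
    ],
)
```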
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case "PowerState/running" and ['properties.securityProfile'].['securityType'] equal ignore case "TrustedLaunch" and ['properties.securityProfile'].['uefiSettings'].['vTpmEnabled'] is false```
Azure Virtual Machine vTPM feature is disabled This policy identifies Virtual Machines that have the Virtual Trusted Platform Module (vTPM) feature disabled. A Virtual Trusted Platform Module (vTPM) provides enhanced security to the guest operating system. It is recommended to enable the virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. NOTE: This assessment only applies to trusted launch enabled virtual machines. You can't enable trusted launch on existing virtual machines that were initially created without it. To know more, refer https://docs.microsoft.com/azure/virtual-machines/trusted-launch?WT.mc_id=Portal-Microsoft_Azure_Security This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to the Virtual machines dashboard\n3. Click on the reported Virtual machine\n4. Select 'Configuration' under 'Settings' from the left panel\nNOTE: Enabling vTPM will trigger an immediate SYSTEM REBOOT.\n5. On the 'Configuration' page, check 'vTPM' under the 'Security type' section\n6. Click 'Save'.
```config from cloud.resource where api.name = 'gcloud-compute-target-ssl-proxy' as X; config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' as Y; filter "$.X.sslPolicy does not exist or ($.Y.profile equals COMPATIBLE and $.Y.selfLink contains $.X.sslPolicy) or ( ($.Y.profile equals MODERN or $.Y.profile equals CUSTOM) and $.Y.minTlsVersion does not equal TLS_1_2 and $.Y.selfLink contains $.X.sslPolicy ) or ( $.Y.profile equals CUSTOM and ( $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_GCM_SHA256 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_GCM_SHA384 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_3DES_EDE_CBC_SHA ) and $.Y.selfLink contains $.X.sslPolicy ) "; show X;```
GCP Load Balancer SSL proxy permits SSL policies with weak cipher suites This policy identifies GCP SSL Load Balancers that permit SSL policies with weak cipher suites. GCP default SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the widest range of insecure cipher suites. To prevent usage of insecure features, SSL policies should use at least TLS 1.2 with the MODERN profile; or the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or a CUSTOM profile that does not support any of the following features: TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the target SSL Proxy Load Balancer does not have any SSL policy configured, updating the proxy with either a new or an existing secured SSL policy is recommended.\n\nThe 'GCP default' SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the broadest range of insecure cipher suites and is not modifiable. If this SSL policy is attached to the target SSL Proxy Load Balancer, updating the proxy with a more secured SSL policy is recommended.\n\nTo create a new SSL policy, refer to the following URL:\nhttps://cloud.google.com/load-balancing/docs/use-ssl-policies#creating_ssl_policies\n\nTo modify the existing insecure SSL policy attached to the Target SSL Proxy:\n1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'load balancing components view' hyperlink at bottom of page to view target proxies\n5. Go to 'TARGET PROXIES' tab and Click on the reported SSL target proxy\n6. Note the 'Backend service' name.\n7. Click on the hyperlink under 'In use by'\n8. Note the 'External IP address'\n9. Select Load Balancing (Left Panel) and click on the SSL load balancer with same name as previously noted 'Backend service' name.\n10. In frontend section, consider the rule where 'IP:Port' matches the previously noted 'External IP address'.\n11. Click on the 'SSL Policy' of the rule. This will take you to the alert causing SSL policy.\n12. Click on 'EDIT'\n13. Set 'Minimum TLS Version' to TLS 1.2 and set 'Profile' to Modern or Restricted.\n14. Alternatively, if you use the profile 'Custom', make sure that the following features are disabled:\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n15. Click on 'Save'.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.minimumPasswordLength less than 14'```
OCI IAM password policy for local (non-federated) users does not have minimum 14 characters This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a minimum of 14 characters in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Type the number in range 14-100 into the box below the text: MINIMUM PASSWORD LENGTH (IN CHARACTERS).\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL..
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(20,20) or destinationPortRanges[*] contains _Port.inRange(20,20) ))] exists```
Azure Network Security Group allows all traffic on FTP-Data (TCP Port 20) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on FTP-Data (TCP Port 20). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict FTP-Data solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ( $.X.filter contains "resource.type =" or $.X.filter contains "resource.type=" ) and ( $.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=" ) and $.X.filter contains "gce_route" and ( $.X.filter contains "protoPayload.methodName=" or $.X.filter contains "protoPayload.methodName =" ) and ( $.X.filter does not contain "protoPayload.methodName!=" and $.X.filter does not contain "protoPayload.methodName !=" ) and $.X.filter contains "beta.compute.routes.patch" and $.X.filter contains "beta.compute.routes.insert"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for VPC network route patch and insert This policy identifies GCP accounts which do not have a log metric filter and alert for VPC network route patch and insert events. Monitoring network routes patching and insertion activities will help in identifying VPC traffic flows through an expected path. It is recommended to create a metric filter and alarm to detect activities related to the patch and insertion of VPC network routes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="gce_route" AND protoPayload.methodName="beta.compute.routes.patch" OR protoPayload.methodName="beta.compute.routes.insert"\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equals "Microsoft.Storage"```
Azure Storage account Encryption Customer Managed Keys Disabled This policy identifies Azure Storage accounts that do not have encryption with Customer Managed Keys enabled. By default, all data at rest in an Azure Storage account is encrypted using Microsoft Managed Keys. It is recommended to use Customer Managed Keys to encrypt data in Azure Storage accounts for better control over Storage account data. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and click on the reported storage account\n3. Under the Settings menu, click on Encryption\n4. Select Customer Managed Keys\n- Choose 'Enter key URI' and enter the 'Key URI'\nOR\n- Choose 'Select from Key Vault', enter 'Key Vault' and 'Encryption Key'\n5. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and autoMinorVersionUpgrade is false and engine does not contain docdb and engine does not contain neptune```
AWS RDS minor upgrades not enabled When Amazon Relational Database Service (Amazon RDS) supports a new version of a database engine, you can upgrade your DB instances to the new version. There are two kinds of upgrades: major version upgrades and minor version upgrades. Minor upgrades help maintain a secure and stable RDS instance with minimal impact on the application. For this reason, we recommend that automatic minor version upgrades be enabled. Minor version upgrades only occur automatically if a minor upgrade replaces an unsafe version, such as a minor upgrade that contains bug fixes for a previous version. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Enable RDS auto minor version upgrades.\n\n1. Go to the AWS console RDS dashboard.\n2. In the navigation pane, choose Instances.\n3. Select the database instance you wish to configure.\n4. From the 'Instance actions' menu, select Modify.\n5. Under the Maintenance section, choose Yes for Auto minor version upgrade.\n6. Select Continue and then Modify DB Instance.
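The same change can be scripted with boto3; a minimal sketch, assuming a hypothetical DB instance identifier and region (ApplyImmediately=False defers the change to the next maintenance window):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Enable automatic minor version upgrades on the reported instance.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db-instance",  # hypothetical identifier
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,  # apply during the next maintenance window
)
```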
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-events-eventbus' AND json.rule = Policy does not exist```
AWS EventBridge event bus with no resource-based policy attached This policy identifies AWS EventBridge event buses with no resource-based policy attached. AWS EventBridge is a serverless event bus service that enables businesses to quickly and easily integrate applications, services, and data across multiple cloud environments. By default, an EventBridge custom event bus lacks a resource-based policy associated with it, which allows principals in the account to access the event bus.  It is recommended to attach a resource based policy to the event bus to limit access scope to fewer entities. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To attach a resource based policy to the event bus, please follow the below steps:\n\n1. Log into the AWS console and navigate to the EventBridge dashboard\n2. In the left navigation pane, choose 'Event buses'\n3. Select the event bus reported\n4. Under the 'Permissions' tab, click on 'Manage permissions'\n5. Add the resource based policy JSON with permissions to grant on the event bus\n6. Click on 'Update'..
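For automation, a resource-based policy can also be attached with boto3; a minimal sketch, assuming a hypothetical event bus name, account IDs, and region:

```python
import boto3
import json

events = boto3.client("events", region_name="us-east-1")

# Policy limiting PutEvents on the custom bus to a single, named account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowSpecificAccountPutEvents",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical account
        "Action": "events:PutEvents",
        "Resource": "arn:aws:events:us-east-1:444455556666:event-bus/example-bus",
    }],
}

# Attach the resource-based policy to the custom event bus.
events.put_permission(EventBusName="example-bus", Policy=json.dumps(policy))
```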
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case "/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace"```
bboiko test 03 - policy This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'azure-event-hub-namespace' AND json.rule = properties.disableLocalAuth is false as X; config from cloud.resource where api.name = 'azure-event-hub' AND json.rule = properties.status equal ignore case ACTIVE and authorizationRules[*] is empty as Y; filter '$.Y.id contains $.X.name'; show Y;```
Azure Event Hub Instance not defined with authorization rule This policy identifies Azure Event Hub Instances that are not defined with authorization rules. If the Azure Event Hub Instance authorization rule is not defined, there is a heightened risk of unauthorized access to the event hub data and resources. This could potentially lead to unauthorized data retrieval, tampering, or disruption of the event hub operations. Defining proper authorization rules helps mitigate these risks by controlling and restricting access to the event hub resources. As a best practice, it is recommended to define the least privilege security model access policies at Event Hub Instance. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Event Hubs'\n3. Select the Event Hubs Namespace from the list which has the reported Event Hub instance.\n4. Click on 'Event Hubs' under 'Entities' section\n5. Click on the reported Event Hub instance\n6. Select 'Shared access policies' under 'Settings' section\n7. Click on '+Add'\n8. Enter 'Policy name' and the required access\n9. Click on 'Create'.
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "containers-kubernetes" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","resourceType","serviceInstance","namespace"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud user with IAM policies provide administrative privileges for Kubernetes Service This policy identifies IBM Cloud users with an overly permissive Kubernetes Administrator role. When a user that has a policy with admin rights is compromised, the whole service is compromised. As a security best practice, it is recommended to grant least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the reported user whose access you want to edit.\n4. Go to the 'Access' tab and under the 'Access policies' section, click on the three dots on the right corner of a row for the policy that has Administrator permission on 'Kubernetes Service'.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = 'iam.bindings[*] size greater than 0 and iam.bindings[*].members[*] any equal allAuthenticatedUsers'```
GCP Storage buckets are publicly accessible to all authenticated users This policy identifies the buckets that are publicly accessible to all authenticated users. Enabling public access to Storage buckets allows anybody with an internet connection to access sensitive information that is critical to business. Access to a whole bucket is controlled by IAM. Access to individual objects within the bucket is controlled by its ACLs. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Storage (Left Panel)\n3. Click Browse\n4. Choose the identified Storage bucket whose ACL needs to be modified\n5. Click on SHOW INFO PANEL button\n6. Check all the ACL groups and make sure that none of them are set to 'allAuthenticatedUsers'.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc' AND json.rule = classic_access is true```
IBM Cloud Virtual Private Cloud (VPC) classic access is enabled This policy identifies IBM Virtual Private Clouds where access to classic resources is enabled. If classic access is enabled, IBM Cloud classic infrastructure and networks can be accessed and communicated with from the VPC. Classic access should be left disabled when the VPC is created. This is applicable to ibm cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Refer to https://cloud.ibm.com/docs/vpc?topic=vpc-deleting-vpc-resources&interface=ui to safely delete the affected VPC. Note- A VPC must be set up for classic access when it is created & it cannot be updated to add or remove classic access.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-vnet-list' AND json.rule = ['properties.provisioningState'] equals Succeeded and (['properties.ddosProtectionPlan'].['id'] does not exist or ['properties.enableDdosProtection'] is false)```
Azure Virtual network not protected by DDoS Protection Standard This policy identifies Virtual networks not protected by DDoS Protection Standard. Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns exhausting an application's resources, making the application unavailable to legitimate users. Azure DDoS Protection Standard provides enhanced DDoS mitigation features to defend against DDoS attacks. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Virtual networks dashboard \n3. Click on the reported Virtual network\n4. Under the 'Settings', click on 'DDoS protection'\nNOTE: Before enabling DDoS Protection, If already no DDoS protection plan exist you need to configure one DDoS protection plan for your organization by following below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection#create-a-ddos-protection-plan\n5. Select 'Enable' for 'DDoS Protection Standard' and choose 'DDoS protection plan' from dropdown or enter DDoS protection plan resource ID.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-cache-clusters' AND json.rule = engine contains redis and autoMinorVersionUpgrade is false```
AWS ElastiCache Redis cluster automatic version upgrade disabled This policy identifies the ElastiCache Redis clusters that do not have the auto minor version upgrade feature enabled. An ElastiCache Redis cluster is a fully managed in-memory data store used to cache frequently accessed data, reducing latency and improving application performance. Failure to enable automatic minor upgrades can leave your cache clusters vulnerable to security risks stemming from outdated software. It is recommended to enable automatic minor version upgrades on ElastiCache Redis clusters to receive timely patches and updates, reduce the risk of security vulnerabilities, and improve overall performance and stability. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console. Navigate to the ElastiCache Dashboard\n2. Click on 'Redis caches' under the 'Resources' section\n3. Select the reported Redis cluster\n4. Click on the 'Modify' button\n5. In the 'Modify' page, under the 'Maintenance' section\n6. Find the 'Auto upgrade minor versions' setting and click on 'Enable'\n7. Click on 'Preview changes'. Under 'Apply immediately', select 'Yes'\n8. Click on 'Modify'..
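The equivalent change can be made with boto3; a minimal sketch, assuming a hypothetical cluster ID and region:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Turn on automatic minor version upgrades for the reported Redis cluster.
elasticache.modify_cache_cluster(
    CacheClusterId="example-redis-cluster",  # hypothetical cluster ID
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=True,
)
```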
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-appsync-graphql-api' AND json.rule = logConfig.fieldLogLevel is not member of ('ERROR','ALL')```
AWS AppSync has field-level logging disabled This policy identifies an AWS AppSync GraphQL API not configured with field-level logging with either 'ERROR' or 'ALL'. AWS AppSync is a managed GraphQL service that simplifies the development of scalable APIs. Field-level logging in AWS AppSync lets you capture detailed logs for specific fields in your GraphQL API. Without enabling field-level logging, the security monitoring and debugging capabilities may be compromised, increasing the risk of undetected threats and vulnerabilities. It is recommended to enable field-level logging to ensure granular visibility into API requests, aiding in security, and compliance with regulatory requirements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To turn on field-level logging on an AWS AppSync GraphQL API,\n\n1. Sign in to the AWS Management Console.\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. In the navigation pane, choose 'AWS AppSync' under the 'Front-end Web & Mobile' section.\n4. On the APIs page, choose the name of a reported GraphQL API.\n5. On your API's homepage, in the navigation pane, choose Settings.\n6. Under Logging, Turn on Enable Logs.\n7. Under Field resolver log level, choose your preferred field-level logging level Error or All according to your business requirements.\n8. Under Create or use an existing role, choose New role to create a new AWS Identity and Access Management (IAM) that allows AWS AppSync to write logs to CloudWatch. Or, choose the Existing role to select the Amazon Resource Name (ARN) of an existing IAM role in your AWS account.\n9. Choose Save..
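A hedged boto3 sketch of the same change, assuming a hypothetical API id, API name, and an existing CloudWatch Logs role ARN (update_graphql_api also expects the API's current name, and any other settings the API relies on should be preserved in the call):

```python
import boto3

appsync = boto3.client("appsync", region_name="us-east-1")

# Enable field-level logging at ERROR (or ALL) on the reported GraphQL API.
appsync.update_graphql_api(
    apiId="example-api-id",    # hypothetical API id
    name="example-api",        # the API's current name
    logConfig={
        "fieldLogLevel": "ERROR",  # or "ALL" per business requirements
        "cloudWatchLogsRoleArn": "arn:aws:iam::111122223333:role/AppSyncLogsRole",  # hypothetical role
    },
)
```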
```config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = webACLId is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-global-web-acl-resource' AND json.rule =(webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.webACLId'; show X;```
AWS CloudFront attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS CloudFront attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, CloudFront attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer below URL, https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. Note down the associated AWS WAF web ACL\n6. Go to the noted WAF web ACL in AWS WAF & Shield Service\n7. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n8. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n9. Click on 'Add rules'.
```config from cloud.resource where cloud.type = 'aws' AND cloud.service = 'Amazon EC2' AND api.name = 'aws-ec2-describe-instances' AND json.rule = securityGroups[*].groupName equals "default" as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = groupName equals "default" as Y; filter '$.X.securityGroups[*].groupId equals $.Y.groupId';show Y;```
Naveed instance-with-default-security-group This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-run-services-list' AND json.rule = status.conditions[?any(type equals Ready and status equals True)] exists and status.conditions[?any(type equals RoutesReady and status equals True)] exists and iamPolicy.bindings[?any(role equals roles/run.invoker and members is member of (allUsers, allAuthenticatedUsers))] exists```
GCP Cloud Run service is publicly accessible This policy identifies GCP Cloud Run services that are publicly accessible. Granting Cloud Run Invoker permission to 'allUsers' or 'allAuthenticatedUsers' allows anyone to access the Cloud Run service over internet. Such access might not be desirable if sensitive data is stored at the location. As security best practice it is recommended to remove public access and assign the least privileges to the GCP Cloud Run service according to requirements. Note: For public API/website Cloud Run service will permit 'Cloud Run Invoker' to 'allUsers'. Refer to the following link for common use cases of authentication to the Cloud Run service. Link: https://cloud.google.com/run/docs/authenticating/overview This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', refer to the following URL:\nhttps://cloud.google.com/run/docs/securing/managing-access#remove-principals.
```config from cloud.resource where api.name= 'aws-cloudtrail-describe-trails' AND json.rule = 'isMultiRegionTrail is true and includeGlobalServiceEvents is true' as X; config from cloud.resource where api.name= 'aws-cloudtrail-get-trail-status' AND json.rule = 'status.isLogging equals true' as Y; config from cloud.resource where api.name= 'aws-cloudtrail-get-event-selectors' AND json.rule = eventSelectors[?any( dataResources[?any( type contains "AWS::S3::Object" and values contains "arn:aws:s3")] exists and readWriteType is member of ("All","Writeonly") and includeManagementEvents is true)] exists as Z; filter '($.X.trailARN equals $.Z.trailARN) and ($.X.name equals $.Y.trail)'; show X; count(X) less than 1```
AWS S3 Buckets with Object-level logging for write events not enabled This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_duration or settings.databaseFlags[?any(name contains log_duration and value contains off)] exists)"```
GCP PostgreSQL instance database flag log_duration is not set to on This policy identifies PostgreSQL database instances in which database flag log_duration is not set to on. Enabling the log_duration setting causes the duration of each completed statement to be logged. Monitoring the time taken to execute the queries can be crucial in identifying any resource-hogging queries and assessing the performance of the server. Further steps such as load balancing and the use of optimized queries can be taken to ensure the performance and stability of the server. It is recommended to set log_duration as on. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_duration' from the drop-down menu and set the value as 'on'\nOR\nIf the flag has been set to other than on, Under 'Customize your instance', In 'Flags' section choose the flag 'log_duration' and set the value as 'on'\n6. Click on 'DONE' and then 'SAVE'.
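The flag can also be set through the Cloud SQL Admin API via the Python client; a minimal sketch, assuming hypothetical project and instance names (note that patching settings.databaseFlags replaces the whole flag list, so any existing flags should be included in the body):

```python
from googleapiclient import discovery

PROJECT = "example-project"               # hypothetical project ID
INSTANCE = "example-postgres-instance"    # hypothetical instance name

sqladmin = discovery.build("sqladmin", "v1")

# Set log_duration=on; include any other existing flags here as well,
# since this list overwrites the instance's current database flags.
body = {"settings": {"databaseFlags": [{"name": "log_duration", "value": "on"}]}}
sqladmin.instances().patch(project=PROJECT, instance=INSTANCE, body=body).execute()
```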
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case kubernetes and state equal ignore case "normal" and features.keyProtectEnabled is false```
IBM Cloud Kubernetes secrets data is not encrypted with bring your own key This policy identifies IBM Cloud kubernetes clusters for which secrets data have encryption using key protect disabled. Kubernetes Secret data is encoded in the base64 format and stored as plain text in etcd. Etcd is a key-value store used as a backing store for Kubernetes cluster state and configuration data. Storing Secrets as plain text in etcd is risky, as they can be easily compromised by attackers and used to access systems. It is recommended that secrets data is encrypted for better security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to your IBM Cloud console.\n2. To view the list of services that are available on IBM Cloud, click 'Catalog'.\n3. From the 'All Categories' navigation pane, click the 'Security' category.\n4. From the list of services, click the Key Protect tile.\n5. Select a service plan, and click Create to provision an instance of Key Protect in the\naccount, region, and resource group where you are logged in.\n6. To view a list of your resources, go to 'Menu > Resource List'.\n7. From your IBM Cloud resource list, select your provisioned instance of Key Protect.\n8. To create a new key, click 'Add +' and select the 'Create a key' window. Specify the\nkey's name and key type.\n9. When you are finished filling out the key's details, click 'Add key' to confirm.\n10. From the Clusters console, select the cluster that you want to enable encryption for.\n11. From the 'Overview' tab, in the 'Integrations > Key management service' section, click\n'Enable'.\n12. Select the 'Key management service instance' and 'Root key' that you want to use\nfor the encryption.\n13. Click 'Enable'.\n14. Verify that the KMS enablement process is finished. From the 'Summary > Master\nstatus' section, you can check the progress.\n15. After the KMS provider is enabled in the cluster, data and new secrets that\nare created in the cluster are automatically encrypted by using your root key..
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals "ACTIVE" and ( gceSetup.metadata.proxy-mode equals "mail" or gceSetup.metadata.proxy-user-mail exists )```
GCP Vertex AI Workbench Instance JupyterLab interface access mode set to single user This policy identifies GCP Vertex AI Workbench Instances with JupyterLab interface access mode set to single user. A Vertex AI Workbench Instance can be accessed using the web-based JupyterLab interface. The access mode controls who can access this interface. Allowing access to only a single user could limit collaboration, increase chances of credential sharing, and hinder security audits and reviews of the resource. It is recommended to avoid single user access and make use of the service account access mode for workbench instances. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Access mode cannot be changed for existing Vertex AI Workbench Instances. A new Vertex AI Workbench instance should be created.\n\nTo create a new Vertex AI Workbench instance with access mode set to service account, please refer to the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Select 'INSTANCES' tab\n5. Click 'CREATE NEW'\n6. Click 'ADVANCED OPTIONS'\n7. Configure the instance as required\n8. Go to 'IAM and security' tab\n9. Select 'Service account'\n10. Click 'CREATE'.
```config from cloud.resource where api.name = 'aws-elasticache-cache-clusters' as X; config from cloud.resource where api.name = 'aws-cache-engine-versions' as Y; filter 'not( $.X.engine equals $.Y.engine and $.Y.cacheEngineVersionDescription contains $.X.engineVersion)'; show X;```
AWS ElastiCache cluster not using supported engine version This policy identifies AWS ElastiCache Redis or Memcached clusters not using a supported engine version. AWS ElastiCache simplifies deploying, operating, and scaling Redis and Memcached in-memory caches in the cloud. An ElastiCache cluster not using a supported engine version runs on outdated Redis or Memcached versions. These versions may be end-of-life (EOL) or lack current updates and patches from AWS. This exposes the cluster to unpatched vulnerabilities, compliance risks, and potential service instability. It is recommended to regularly update your ElastiCache clusters to the latest supported engine versions as recommended by AWS. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To upgrade the AWS ElastiCache cluster perform the following actions:\n1. Sign into the AWS console\n2. In the console, select the specific region from the region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on 'Redis caches' under the 'Resources' section\n5. Select the reported Redis cluster\n6. Click on the 'Modify' button\n7. In the 'Modify Cluster' dialog box, under the 'Cluster settings' section \n8. Select 'Engine version' from the drop down according to your requirements.\n9. Select a 'Parameter groups' family that is compatible with the new engine version.\n10. Click on 'Preview Changes'\n11. Select the Yes checkbox under 'Apply Immediately' to apply the configuration changes immediately. If Apply Immediately is not selected, the changes will be processed during the next maintenance window.\n12. Click on 'Modify'.
```config from cloud.resource where api.name = 'azure-spring-cloud-service' AND json.rule = properties.powerState equals Running as X; config from cloud.resource where api.name = 'azure-spring-cloud-app' AND json.rule = properties.provisioningState equals Succeeded and identity does not exist as Y; filter '$.X.name equals $.Y.serviceName'; show Y;```
Azure Spring Cloud App system-assigned managed identity is disabled This policy identifies Azure Spring Cloud apps in which system-assigned managed identity is disabled. System-assigned managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in a code increases the threat surface in case of exploitation and also managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to have the system-assigned managed identity to your Spring Cloud apps. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable system-assigned managed identity on an existing Azure Spring Cloud app, follow the below URL:\nhttps://docs.microsoft.com/en-in/azure/spring-cloud/how-to-enable-system-assigned-managed-identity.
```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = hostedZone.config.privateZone is false and resourceRecordSet[?any( type equals CNAME and resourceRecords[*].value contains s3-website )] exists as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter 'not ($.X.resourceRecordSet[*].name intersects $.Y.bucketName)'; show X;```
AWS Route53 Hosted Zone having dangling DNS record with subdomain takeover risk associated with AWS S3 Bucket This policy identifies AWS Route53 Hosted Zones which have dangling DNS records with subdomain takeover risk associated with AWS S3 Bucket. A Route53 Hosted Zone having a CNAME entry pointing to a non-existing S3 bucket will have a risk of these dangling domain entries being taken over by an attacker by creating a similar S3 bucket in any AWS account which the attacker owns / controls. Attackers can use this domain to do phishing attacks, spread malware and other illegal activities. As a best practice, it is recommended to delete dangling DNS records entry from your AWS Route 53 hosted zones. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Identify DNS record entry pointing to a non-existing S3 bucket resource.\n\nTo remove DNS record entry, follow steps given in following URL:\nhttps://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-deleting.html.
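The dangling record can also be removed programmatically with boto3; a minimal sketch, assuming hypothetical hosted zone and record values (a DELETE change must exactly match the existing record, including its TTL and value):

```python
import boto3

route53 = boto3.client("route53")

# Delete the dangling CNAME that points at a non-existent S3 website endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",  # hypothetical hosted zone ID
    ChangeBatch={
        "Comment": "Remove dangling CNAME pointing to a deleted S3 website endpoint",
        "Changes": [{
            "Action": "DELETE",
            "ResourceRecordSet": {
                "Name": "static.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "example-bucket.s3-website-us-east-1.amazonaws.com"}
                ],
            },
        }],
    },
)
```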
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-network-acls' AND json.rule = "entries[?any(egress equals false and ((protocol equals 6 and ((portRange.to equals 22 or portRange.to equals 3389 or portRange.from equals 22 or portRange.from equals 3389) or (portRange.to > 22 and portRange.from < 22) or (portRange.to > 3389 and portRange.from < 3389))) or protocol equals -1) and (cidrBlock equals 0.0.0.0/0 or ipv6CidrBlock equals ::/0) and ruleAction equals allow)] exists"```
AWS Network ACLs that allow ingress from 0.0.0.0/0 to remote server administration ports This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'websiteConfiguration exists'```
AWS S3 buckets with configurations set to host websites This policy identifies AWS S3 buckets that are configured to host websites. To host a website on AWS S3 you should configure a bucket as a website. By frequently surveying these S3 buckets, you can ensure that only authorized buckets are enabled to host websites. Make sure to disable static website hosting for unauthorized S3 buckets. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console\n2. Goto S3 under Services\n3. Choose the reported bucket\n4. Goto Properties tab\n5. Click on Static website hosting\n6. Click on Disable website hosting\n7. Click on Save.
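Static website hosting can also be disabled programmatically with boto3; a minimal sketch, assuming a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Removing the website configuration disables static website hosting
# for the unauthorized bucket.
s3.delete_bucket_website(Bucket="example-unauthorized-bucket")  # hypothetical bucket
```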
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```
test perf of AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select 'Private' radio button\n6. Click on 'Save changes'.
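The endpoint access settings can also be updated with boto3; a minimal sketch, assuming a hypothetical cluster name and region (the update is asynchronous and takes several minutes to complete):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Enable private endpoint access and disable public endpoint access.
eks.update_cluster_config(
    name="example-cluster",  # hypothetical cluster name
    resourcesVpcConfig={
        "endpointPrivateAccess": True,
        "endpointPublicAccess": False,
    },
)
```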
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```
BikramTest-AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save.
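Versioning can be enabled with boto3 as well; a minimal sketch, assuming a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Turn on object versioning for the reported bucket.
s3.put_bucket_versioning(
    Bucket="example-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```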
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = 'attributes.accessLog.enabled is false'```
AWS Elastic Load Balancer (Classic) with access log disabled This policy identifies Classic Elastic Load Balancers which have access logging disabled. When access logging is enabled, the Classic Load Balancer captures detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable access logging for Elastic Load Balancer (Classic), follow the below mentioned URL:\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html.
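Access logging can also be enabled with boto3; a minimal sketch, assuming a hypothetical load balancer name and an existing log bucket that already grants the ELB service permission to write to it:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Enable access logging on the Classic Load Balancer, writing to an
# existing S3 bucket every 60 minutes.
elb.modify_load_balancer_attributes(
    LoadBalancerName="example-classic-elb",  # hypothetical load balancer name
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "example-elb-logs-bucket",  # hypothetical bucket
            "S3BucketPrefix": "classic-elb",
            "EmitInterval": 60,
        }
    },
)
```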
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration equals $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is false)'; show X;```
AWS EMR cluster is not enabled with data encryption at rest This policy identifies AWS EMR clusters for which data encryption at rest is not enabled. Encryption of data at rest is required to prevent unauthorized users from accessing the sensitive information available on your EMR clusters and associated storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown.\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. For encryption At Rest select the required encryption type ('S3 encryption'/'Local disk encryption'/both) and follow below link for enabling the same.\n8. https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n\n9. Click on 'Create' button.\n10. On the left menu of EMR dashboard Click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n15. Once you the new cluster is set up verify its working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted.\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'..
```config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' AND json.rule = (profile equals MODERN or profile equals CUSTOM) and minTlsVersion does not equal "TLS_1_2" as X; config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' AND json.rule = sslPolicy exists as Y; filter "$.X.selfLink contains $.Y.sslPolicy"; show Y;```
GCP HTTPS Load balancer is configured with SSL policy having TLS version 1.1 or lower This policy identifies HTTPS Load balancers that are configured with an SSL policy having TLS version 1.1 or lower. As a best security practice, use TLS 1.2 as the minimum TLS version in your load balancers' SSL security policies. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'load balancing components view' hyperlink at bottom of the page to view target proxies\n5. Click on 'TARGET PROXIES' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'Load balancer'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. Select SSL policy that uses TLS 1.2 version or create a new SSL policy with TLS 1.2 as Minimum TLS version from the dropdown for 'SSL policy'\n11. Click on 'DONE'\n12. Click on 'UPDATE'.
```config from cloud.resource where api.name = 'aws-sagemaker-notebook-instance' AND json.rule = notebookInstanceStatus equals InService and kmsKeyId exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.kmsKeyId equals $.Y.key.keyArn and $.Y.keyMetadata.keyManager contains AWS'; show X;```
AWS SageMaker notebook instance not encrypted using Customer Managed Key This policy identifies SageMaker notebook instances that are not encrypted using Customer Managed Key. SageMaker notebook instances should be encrypted with Amazon KMS Customer Master Keys (CMKs) instead of AWS managed-keys in order to have more granular control over the data-at-rest encryption/decryption process and meet compliance requirements. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest.html This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS SageMaker notebook instance encryption can not be modified once it is created. You need to create a new notebook instance with encryption using a custom KMS key; migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a New AWS SageMaker notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, within the 'Permissions and encryption' section,\nFrom the 'Encryption key - optional' dropdown list, choose a custom KMS key for the new SageMaker notebook instance.\n5. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and Choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu, select the 'Stop' option, and when instance stops, select the 'Delete' option.\n5. Within Delete <notebook-instance-name> dialog box, click the Delete button to confirm the action..
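Creating the replacement notebook instance can be scripted with boto3; a minimal sketch, assuming hypothetical names and ARNs for the instance, execution role, and customer managed KMS key:

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# Create a new notebook instance encrypted with a customer managed key.
sagemaker.create_notebook_instance(
    NotebookInstanceName="example-encrypted-notebook",  # hypothetical name
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical CMK
)
```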
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = ((purpose does not equal ENCRYPT_DECRYPT) or (purpose equals ENCRYPT_DECRYPT and primary.state equals ENABLED)) and iamPolicy.bindings[*].members contains allUsers or iamPolicy.bindings[*].members contains allAuthenticatedUsers```
GCP KMS crypto key is anonymously accessible This policy identifies GCP KMS crypto keys that are anonymously accessible. Granting permissions to 'allUsers' or 'allAuthenticatedUsers' allows anyone to access the KMS key. As a security best practice, it is recommended not to bind such members to KMS IAM policy. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Granting/revoking access for the KMS key is only supported by CLI. To remediate run the below CLI command. \n\n1. List all the cryptokeys which has overly permissive IAM bindings,\n\ngcloud asset search-all-iam-policies --asset-types=cloudkms.googleapis.com/CryptoKey --query="policy:(allUsers OR allAuthenticatedUsers)" \n\n2. Remove IAM policy binding for a KMS key to remove access to allUsers and allAuthenticatedUsers using the below command.\n\ngcloud kms keys remove-iam-policy-binding [key_name] --keyring='[key_ring_name]' --location='[location]' --member='[allUsers/allAuthenticatedUsers]' --role='[role]'\n\nRefer to the following URL for more information on “remove-iam-policy-binding” command.\nhttps://cloud.google.com/sdk/gcloud/reference/projects/remove-iam-policy-binding.
```config from cloud.resource where api.name = 'aws-rds-db-cluster-parameter-group' AND json.rule = parameters.log_min_duration_statement.ParameterValue does not exist or parameters.log_min_duration_statement.ParameterValue equals -1 as X; config from cloud.resource where api.name= 'aws-rds-db-cluster' AND json.rule = status contains available and engine contains postgres as Y; filter '$.X.DBClusterParameterGroupName equals $.Y.dbclusterParameterGroup'; show Y;```
AWS RDS Postgres Cluster does not have query logging enabled This policy identifies RDS Postgres clusters with query logging disabled. In AWS RDS PostgreSQL, by default, the logging level captures login failures, fatal server errors, deadlocks, and query failures. To log data changes, we recommend enabling cluster logging for monitoring and troubleshooting. To obtain adequate logs, an RDS cluster should have log_statement and log_min_duration_statement parameters configured. It is a best practice to enable additional RDS cluster logging, which will help in data change monitoring and troubleshooting. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify the custom DB cluster parameter group to enable query logging, follow the below steps:\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console.\n2. In the navigation pane, choose 'Parameter groups'.\n3. In the list, choose the above-created parameter group that you want to modify.\n4. For Parameter group actions, choose 'Edit'.\n5. Change the value of the 'log_min_duration_statement parameter' to any value other than -1 you want to modify.\n6. Change the value of 'log_statement' according to the requirements.\n7. Choose 'Save Changes'.\n8. Reboot the primary (writer) DB instance in the cluster to apply the changes to it.\n9. Then reboot the reader DB instances to apply the changes to them.\n\nPlease create a custom parameter group if the cluster has only the default parameter group using the following steps:\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console.\n2. In the navigation pane, choose 'Parameter groups'.\n3. Choose 'Create parameter group'. The Create parameter group window appears.\n4. In the Parameter group family list, select a 'DB parameter group family'.\n5. In the Type list, select 'DB cluster parameter group'.\n6. In the Group name box, enter the name of the new DB cluster parameter group.\n7. In the Description box, enter a description for the new DB cluster parameter group.\n8. Choose 'Create'.\n\nTo modify an RDS cluster to use the custom parameter group, follow the below steps:\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console.\n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify.\n3. Choose 'Modify'. The Modify DB instance page appears.\n4. Under 'Additional Configuration', select the above-created cluster parameter group from the DB parameter group dropdown.\n5. Choose 'Continue' and check the summary of modifications.\n6. (Optional) Choose 'Apply immediately' to apply the changes immediately. Choosing this option can cause downtime in some cases.\n7. On the confirmation page, review your changes. If they are correct, choose 'Modify DB instance' to save your changes..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and config.siteAuthEnabled equals false'```
Azure App Service Web app authentication is off Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app. If an anonymous request is received from a browser, App Service will redirect to a logon page. To handle the logon process, a choice from a set of identity providers can be made, or a custom authentication mechanism can be implemented. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under the Setting section, Click on 'Authentication / Authorization'\n a. In case the Identity Provider is not configured: https://learn.microsoft.com/en-gb/azure/app-service/overview-authentication-authorization#identity-providers \n b. In case the identity Provider is configured and disabled:\n i. Edit Authentication Settings\n ii. Set 'App Service Authentication' to 'Enabled'\n iii. Choose other parameters as per your requirement and Click on 'Save'.
```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```
tbsjmfcdgf_ui_auto_policies_tests_name rjyyqylxvc_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-ec2-describe-snapshots' AND json.rule='createVolumePermissions[*].group contains all' ```
PCSUP-22910-Policy This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id is member of ("crn:v1:bluemix:public:iam::::role:Administrator","crn:v1:bluemix:public:iam::::serviceRole:Manager") )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "cloud-object-storage" and operator is member of ("stringEquals", "stringMatch"))] exists and (attributes[?any( name is member of ("resource","resourceGroupId","serviceInstance","prefix"))] does not exist or attributes[?any( name equal ignore case "resourceType" and value equal ignore case "bucket" )] exists ) )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```
IBM Cloud Service ID with IAM policies provide administrative privileges for Cloud object storage buckets This policy identifies IBM Cloud Service IDs that have a policy with administrator role permission for the Cloud Object Storage service. IBM Cloud Object Storage is a highly scalable, resilient, and secure managed data storage service on the IBM Cloud platform that offers an alternative to traditional block and file storage solutions. When a Service ID that has a policy with admin rights on object storage is compromised, the whole service is compromised. As a security best practice, it is recommended to grant least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the Service ID that is reported and that you want to edit access to.\n4. Under the 'Access' tab, go to the 'Access policies' section and click on the three dots on the right corner of a row for the policy that has administrator permission on the 'IBM Cloud Object Storage' service.\n5. Click on Remove or Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to edit or remove, and confirm by clicking Save or Remove.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.createdrg and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletedrg and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatedrg and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createdrgattachment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletedrgattachment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatedrgattachment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.changeinternetgatewaycompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createinternetgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deleteinternetgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updateinternetgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.changelocalpeeringgatewaycompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createlocalpeeringgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletelocalpeeringgateway and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatelocalpeeringgateway and condition.eventType[*] contains com.oraclecloud.natgateway.changenatgatewaycompartment and condition.eventType[*] contains com.oraclecloud.natgateway.createnatgateway and condition.eventType[*] contains com.oraclecloud.natgateway.deletenatgateway and condition.eventType[*] contains com.oraclecloud.natgateway.updatenatgateway and condition.eventType[*] contains com.oraclecloud.servicegateway.attachserviceid and condition.eventType[*] contains com.oraclecloud.servicegateway.changeservicegatewaycompartment and condition.eventType[*] contains com.oraclecloud.servicegateway.createservicegateway and condition.eventType[*] contains com.oraclecloud.servicegateway.deleteservicegateway.begin and condition.eventType[*] contains com.oraclecloud.servicegateway.deleteservicegateway.end and condition.eventType[*] contains com.oraclecloud.servicegateway.detachserviceid and condition.eventType[*] contains com.oraclecloud.servicegateway.updateservicegateway ) and actions.actions[*].topicId exists' as X; count(X) less than 1```
OCI Event Rule and Notification does not exist for network gateways changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Network Gateways changes. This policy includes Internet Gateways, Dynamic Routing Gateways, Service Gateways, Local Peering Gateways, and NAT Gateways. Monitoring and alerting on changes to Network Gateways will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Network Gateways. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level. 2. This policy will trigger alert if you have at least one Event Rule and Notification, even if OCI has single or multi compartments. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting DRG – Create, DRG – Delete, DRG – Update, DRG Attachment – Create, DRG Attachment – Delete, DRG Attachment – Update, Internet Gateway – Create, Internet Gateway – Delete, Internet Gateway – Update, Internet Gateway – Change Compartment, Local Peering Gateway – Create, Local Peering Gateway – Delete, Local Peering Gateway – Update, Local Peering Gateway – Change Compartment, NAT Gateway – Create, NAT Gateway – Delete, NAT Gateway – Update, NAT Gateway – Change Compartment, Service Gateway – Create, Service Gateway – Delete Begin, Service Gateway – Delete End, Service Gateway – Update, Service Gateway – Attach Service, Service Gateway – Detach Service, Service Gateway – Change Compartment\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-automation-account' AND json.rule = variable[?any(properties.isEncrypted is false)] exists```
Azure Automation account variables are not encrypted This policy identifies Automation account variables that are not encrypted. Variable assets are values that are available to all runbooks and DSC configurations in your Automation account. When a variable is created, you can specify that it be stored encrypted. Azure Automation stores each encrypted variable securely. It is recommended to enable encryption of Automation account variable assets when storing sensitive data. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Automation Accounts'\n3. Click on the reported Automation Account\n4. Select 'Variables' under 'Shared Resources' from the left panel\nNOTE: If you have Automation account variables storing sensitive data that are not already encrypted, you will need to delete them and recreate them as encrypted variables.\n5. Delete the unencrypted variables and recreate them with the 'Encrypted' option set to 'Yes'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ssm-document' AND json.rule = accountSharingInfoList[*].accountId equal ignore case "all"```
AWS SSM documents are public This policy identifies SSM documents that are public and might allow unintended access. A public SSM document can expose valuable information about your account, resources, and internal processes. It is recommended to share SSM documents only with specific, trusted AWS accounts as required. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To make an SSM document private, follow the steps below:\n1. Go to the AWS console Systems Manager Dashboard.\n2. If the AWS Systems Manager home page opens first, choose the menu icon to open the navigation pane, and then choose Documents in the navigation pane.\n3. In the documents list, choose the document you want to stop sharing, and then choose Details. On the Permissions tab, verify that you're the document owner. Only a document owner can stop sharing a document.\n4. Choose Edit.\n5. Select the Private option, and enter only the AWS account IDs with which this document can be shared (leave it blank if you do not want to share it now).\n6. Choose Save.
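For accounts managed programmatically, a minimal boto3 sketch along the same lines is shown below; the document name is a hypothetical placeholder, and the call removes the special "all" (public) share permission from the document.

```python
import boto3

ssm = boto3.client("ssm")

# Hypothetical document name used for illustration.
document_name = "my-shared-document"

# Removing the special account ID "all" stops sharing the document publicly.
ssm.modify_document_permission(
    Name=document_name,
    PermissionType="Share",
    AccountIdsToRemove=["all"],
)

# Verify: the remaining shared account IDs should no longer include "all".
perms = ssm.describe_document_permission(Name=document_name, PermissionType="Share")
print(perms["AccountIds"])
```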
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-sagemaker-endpoint-config' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; config from cloud.resource where api.name = 'aws-sagemaker-endpoint' AND json.rule = endpointStatus does not equal "Failed" as Z; filter '($.X.KmsKeyId does not exist or (($.X.KmsKeyId exists and $.Y.keyMetadata.keyState equals Disabled) and $.X.KmsKeyId equals $.Y.keyMetadata.arn)) and ($.X.EndpointConfigName equals $.Z.endpointConfigName)' ; show X;```
AWS SageMaker endpoint data encryption at rest not configured with CMK This policy identifies AWS SageMaker endpoints not configured with data encryption at rest. An AWS SageMaker endpoint configuration defines the resources and settings for deploying machine learning models to SageMaker endpoints. By default, SageMaker uses transient keys for encryption if a KMS key is not specified, which does not provide the control and management benefits of an AWS customer managed KMS key. Enabling the encryption helps protect the integrity and confidentiality of the data on the storage volume attached to the ML compute instance that hosts the endpoint. It is recommended to set encryption at rest to mitigate the risk of unauthorized access and potential data breaches. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a SageMaker endpoint with data encryption using a KMS key, you must create a new endpoint configuration by cloning the existing endpoint configuration used by the endpoint and updating it with the required changes.\n\n1. Sign in to the AWS Management Console.\n2. Go to the SageMaker service dashboard at https://console.aws.amazon.com/sagemaker/.\n3. In the navigation panel, under Inference, choose Endpoint configurations.\n4. Select the SageMaker endpoint configuration that is reported, and click Clone in the top right corner.\n5. Give a name to the endpoint configuration and choose the Encryption key. For AWS managed keys, enter a KMS key ARN. For customer managed keys, choose one from the drop-down.\n6. Click Create endpoint configuration.\n\nTo update the endpoint using the new endpoint configuration:\n\n1. Sign in to the AWS Management Console.\n2. Go to the SageMaker service dashboard at https://console.aws.amazon.com/sagemaker/.\n3. In the navigation panel, under Inference, choose Endpoints.\n4. Select the SageMaker endpoint that you want to examine, then click on it to access the resource configuration details under the Settings tab.\n5. Scroll down to Endpoint Configuration Settings and click Change.\n6. Choose "Use an existing endpoint configuration" and select the endpoint configuration created earlier with the encryption key specified.\n7. Click "Select endpoint configuration" and click "Update Endpoint" for the changes to propagate.
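The same change can be scripted; the boto3 sketch below assumes hypothetical endpoint, configuration, model, and KMS key names, creates a new endpoint configuration that encrypts the attached storage volume with a customer managed key, and points the existing endpoint at it.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names/ARNs used for illustration.
endpoint_name = "my-endpoint"
new_config_name = "my-endpoint-config-cmk"
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

# Create a new endpoint configuration with a CMK for volume encryption.
sagemaker.create_endpoint_config(
    EndpointConfigName=new_config_name,
    KmsKeyId=kms_key_arn,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }
    ],
)

# Point the existing endpoint at the new, encrypted configuration.
sagemaker.update_endpoint(EndpointName=endpoint_name, EndpointConfigName=new_config_name)
```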
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(1433,1433)"```
Alibaba Cloud Security group allow internet traffic to MS SQL port (1433) This policy identifies Security groups that allow inbound traffic on MS SQL port (1433) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 1433, and click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[?any(properties.email is empty)] exists) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```
Azure Microsoft Defender for Cloud security contact additional email is not set This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) subscriptions that do not have an additional security contact email address set. Microsoft Defender for Cloud emails the subscription owners whenever a high-severity alert is triggered for their subscription. Providing a security contact email address as an additional email address ensures that the proper people are aware of any potential compromise so they can mitigate the risk in a timely fashion. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Click on 'Email notifications'\n6. Enter a valid security contact email address (or multiple addresses separated by commas) in the 'Additional email addresses' field\n7. Select 'Save'.
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles[*] contains "roles/editor" or roles[*] contains "roles/owner" as X; config from cloud.resource where api.name = 'gcloud-cloud-function' as Y; filter '$.Y.serviceAccountEmail equals $.X.user'; show Y;```
GCP Cloud Function has risky basic role assigned This policy identifies GCP Cloud Functions configured with a risky basic role. Basic roles are highly permissive roles that existed prior to the introduction of IAM and grant broad access over the project to the grantee. To reduce the blast radius and defend against privilege escalation if the Cloud Function is compromised, it is recommended to follow the principle of least privilege and avoid the use of basic roles. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: It is recommended to follow the principle of least privilege when granting access.\n\nTo assign the desired service account to the Cloud Function, please refer to the URL given below:\nhttps://cloud.google.com/functions/docs/securing/function-identity#individual\n\nTo update privileges granted to a service account, please refer to the URL given below:\nhttps://cloud.google.com/iam/docs/granting-changing-revoking-access.
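To see whether a function's runtime service account holds a basic role, a minimal check along these lines can be run against the Cloud Resource Manager API; the project ID and service account email below are hypothetical placeholders.

```python
from googleapiclient import discovery

# Hypothetical project and service account used for illustration.
PROJECT_ID = "my-project"
FUNCTION_SA = "my-function-sa@my-project.iam.gserviceaccount.com"

# Basic roles flagged by this policy.
BASIC_ROLES = {"roles/owner", "roles/editor"}

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Report any basic role bound to the function's service account.
for binding in policy.get("bindings", []):
    if binding["role"] in BASIC_ROLES and f"serviceAccount:{FUNCTION_SA}" in binding.get("members", []):
        print("Basic role assigned:", binding["role"])
```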
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = domainProcessingStatus equal ignore case active and (logPublishingOptions does not exist or logPublishingOptions.AUDIT_LOGS.enabled is false)```
AWS Opensearch domain audit logging disabled This policy identifies AWS OpenSearch domains with audit logging disabled. OpenSearch audit logs enable you to monitor user activity on your clusters, such as authentication successes and failures, OpenSearch requests, index updates, and incoming search queries. It is recommended to enable audit logging for a domain to audit activity in the domain. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable audit logs on an AWS OpenSearch domain:\n\n1. Sign in to the AWS console and navigate to the OpenSearch Service dashboard\n2. In the navigation pane, under 'Managed clusters', select 'Domains'\n3. Choose the reported domain\n4. On the Logs tab, select 'Audit logs' and choose 'Enable'\n5. In the 'Set up audit logs' section, in the 'Select log group from CloudWatch logs' setting, create or use an existing CloudWatch Logs log group as per your requirement\n6. In 'Specify CloudWatch access policy', create a new policy or select an existing one as per your requirement\n7. Click on 'Enable'.
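Audit logging can also be turned on through the OpenSearch Service API; the boto3 sketch below assumes a hypothetical domain name and CloudWatch Logs log group ARN, and note that audit logs require fine-grained access control on the domain and a CloudWatch Logs resource policy that allows OpenSearch Service to write to the group.

```python
import boto3

opensearch = boto3.client("opensearch")

# Hypothetical domain and log group ARN used for illustration.
domain_name = "my-domain"
log_group_arn = "arn:aws:logs:us-east-1:123456789012:log-group:my-domain-audit-logs"

# Publish audit logs to CloudWatch Logs.
# A CloudWatch Logs resource policy granting OpenSearch Service write access
# to the log group must already be in place.
opensearch.update_domain_config(
    DomainName=domain_name,
    LogPublishingOptions={
        "AUDIT_LOGS": {
            "CloudWatchLogsLogGroupArn": log_group_arn,
            "Enabled": True,
        }
    },
)
```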
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'state equals RUNNABLE and databaseVersion contains SQLSERVER and settings.databaseFlags[*].name does not contain "user connections"'```
GCP SQL server instance database flag user connections is not set This policy identifies GCP SQL Server instances where the database flag 'user connections' is not set. The user connections option specifies the maximum number of simultaneous user connections (allowed values range from 10 to 32,767) permitted on an instance of SQL Server. The default is 0, which means that the maximum (32,767) user connections are allowed. It is recommended to set the database flag 'user connections' for SQL Server instances according to the organization-defined value. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported SQL Server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, under the 'Customize your instance' section, go to 'Flags and parameters', click on 'ADD FLAG' in the 'New database flag' section, choose the flag 'user connections' from the drop-down menu, and set an appropriate value (10-32,767)\n6. Click on DONE\n7. Click on SAVE\n8. If a 'Changes require restart' pop-up appears, click on 'SAVE AND RESTART'.
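The flag can also be set through the Cloud SQL Admin API; the sketch below is a minimal example with hypothetical project and instance names, and note that the databaseFlags list supplied in a patch replaces the currently configured flags, so any flags already set should be included.

```python
from googleapiclient import discovery

# Hypothetical project and instance names used for illustration.
PROJECT = "my-project"
INSTANCE = "my-sqlserver-instance"

sqladmin = discovery.build("sqladmin", "v1beta4")

# The databaseFlags list replaces the currently configured flags,
# so include any existing flags alongside 'user connections'.
body = {
    "settings": {
        "databaseFlags": [
            {"name": "user connections", "value": "500"},
        ]
    }
}

operation = sqladmin.instances().patch(project=PROJECT, instance=INSTANCE, body=body).execute()
print(operation["name"], operation["status"])
```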
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].osDisk.vhd.uri exists```
mkurter-testing--0002 This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND cloud.accountgroup NOT IN ( 'AWS' ) AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].osDisk.vhd.uri exists```
mkurter-testing-pcf-azure This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' AND json.rule = "_DateTime.ageInDays(createTime) > 90"```
GCP API key not rotating in every 90 days This policy identifies GCP API keys that were created more than 90 days ago. Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. API keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Google Cloud console\n2. Navigate to 'Credentials' under the 'APIs & Services' service\n3. In the 'API Keys' section, click on the reported 'API Key Name'\n4. Click on 'REGENERATE KEY' to rotate the API key\n5. On the pop-up window click on 'REPLACE KEY'\n6. Validate the creation date once it is updated.
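The 90-day age check the query performs is simple to reproduce; the snippet below assumes a hypothetical createTime value in the RFC 3339 form returned by the API Keys API and flags any key older than 90 days.

```python
from datetime import datetime, timezone

# Example createTime as returned for an API key (hypothetical value).
create_time = "2023-01-15T10:30:00.000000Z"

created = datetime.fromisoformat(create_time.replace("Z", "+00:00"))
age_in_days = (datetime.now(timezone.utc) - created).days

if age_in_days > 90:
    print(f"API key is {age_in_days} days old and should be rotated")
```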
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-cluster' AND json.rule = status equals ACTIVE and activeServicesCount equals 0```
AWS ECS cluster not configured with active services This policy identifies ECS clusters that are not configured with active services. An ECS service enables you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. It is recommended to remove idle ECS clusters to reduce the container attack surface, or to create new services for the reported ECS cluster. For details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete the reported idle ECS cluster, follow the below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/delete_cluster.html\n\nTo create new container services, follow the below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html.
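A quick way to list idle clusters in a region is shown in the boto3 sketch below; it simply reports ACTIVE clusters whose activeServicesCount is zero, mirroring the query above.

```python
import boto3

ecs = boto3.client("ecs")

cluster_arns = ecs.list_clusters()["clusterArns"]
if cluster_arns:
    clusters = ecs.describe_clusters(clusters=cluster_arns)["clusters"]
    for cluster in clusters:
        if cluster["status"] == "ACTIVE" and cluster["activeServicesCount"] == 0:
            print("Idle ECS cluster:", cluster["clusterName"])
```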
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway' AND json.rule = (['properties.sslPolicy'] does not exist and ['properties.defaultPredefinedSslPolicy'] does not equal ignore case AppGwSslPolicy20220101) or (['properties.sslPolicy'].['policyType'] equal ignore case Predefined and (['properties.sslPolicy'].['policyName'] equal ignore case AppGwSslPolicy20150501 or ['properties.sslPolicy'].['policyName'] equal ignore case AppGwSslPolicy20170401)) or (['properties.sslPolicy'].['policyType'] equal ignore case Custom and (['properties.sslPolicy'].['minProtocolVersion'] equal ignore case TLSv1_0 or ['properties.sslPolicy'].['minProtocolVersion'] equal ignore case TLSv1_1))```
Azure Application Gateway is configured with SSL policy having TLS version 1.1 or lower This policy identifies Application Gateway instances that are configured to use TLS version 1.1 or lower as the minimum protocol version. The Application Gateway supports SSL encryption using multiple TLS versions and, by default, it supports TLS version 1.0 as the minimum version. As a best practice, set the minimum protocol version to TLSv1.2 or higher (if you use a custom SSL policy) or use a predefined policy that supports TLSv1.2 or higher. For more details: https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-ssl-policy-overview This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set the SSL policy with TLSv1.2 or higher, refer to the URL below:\nhttps://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-configure-listener-specific-ssl-policy.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains functionapp and kind does not contain workflowapp and kind does not equal app and config.siteAuthEnabled is false```
Azure Function App authentication is off This policy identifies Azure Function Apps that have authentication set to off. Azure Function App authentication is a feature that can prevent anonymous HTTP requests from reaching the API app, or authenticate those that have tokens before they reach the API app. If an anonymous request is received from a browser, the Function App will redirect to a logon page. To handle the logon process, a choice from a set of identity providers can be made, or a custom authentication mechanism can be implemented. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under the Settings section, click on 'Authentication'\n5. Click on 'Add identity provider'\n6. Select an identity provider from the dropdown and choose other parameters as per your requirement\n7. Click on 'Add'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (encryptionAtRestOptions.enabled is false or encryptionAtRestOptions does not exist)'```
AWS Elasticsearch domain Encryption for data at rest is disabled This policy identifies Elasticsearch domains for which encryption is disabled. Encryption of data at rest is required to prevent unauthorized users from accessing the sensitive information available in your Elasticsearch domain's components. This may include all data on file systems, primary and replica indices, log files, memory swap files, and automated snapshots. Elasticsearch uses the AWS KMS service to store and manage the encryption keys. It is highly recommended to implement encryption at rest when you are working with production data that has sensitive information, to protect it from unauthorized access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Enabling the encryption feature on existing domains requires Elasticsearch 6.7 or later. If your Elasticsearch version is 6.7 or later, follow the steps below to enable encryption on the existing Elasticsearch domain:\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the Elasticsearch Service dashboard\n4. Choose the reported Elasticsearch domain\n5. Click on the 'Actions' button, and from the drop-down select 'Modify encryptions'\n6. On the Modify encryptions page, select the 'Enable encryption of data at rest' checkbox and choose a KMS key as per your requirement. It is recommended to choose a KMS CMK instead of the default KMS key [Default(aws/es)] to get more granular control over your Elasticsearch domain data.\n7. Click on 'Submit'.\n\nIf your Elasticsearch version is lower than 6.7, then AWS Elasticsearch domain encryption can be set only at the time of creation of the domain. So to fix this alert, create a new domain with encryption using KMS keys and then migrate all required Elasticsearch domain data from the reported Elasticsearch domain to this newly created domain.\nTo set up the new Elasticsearch domain with encryption using a KMS key, refer to the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html\n\nTo delete the reported ES domain, refer to the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-deleting.html.
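For a quick programmatic audit of which domains in a region still lack encryption at rest, a minimal boto3 sketch using the legacy Elasticsearch Service client is shown below.

```python
import boto3

es = boto3.client("es")

for entry in es.list_domain_names()["DomainNames"]:
    domain = entry["DomainName"]
    status = es.describe_elasticsearch_domain(DomainName=domain)["DomainStatus"]
    encrypted = status.get("EncryptionAtRestOptions", {}).get("Enabled", False)
    if not encrypted:
        print("Encryption at rest disabled:", domain)
```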
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true or resourcesVpcConfig.endpointPrivateAccess is false```
Copy of AWS EKS cluster endpoint access publicly enabled When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC). This policy checks your Kubernetes cluster endpoint access and triggers an alert if it is publicly enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Enable private access to the Kubernetes API server so that all communication between your worker nodes and the API server stays within your VPC. Disable public access to your API server so that it's not accessible from the internet.\n\n1. Login to AWS Console\n2. Navigate to the Amazon EKS dashboard\n3. Choose the name of the cluster to display your cluster information\n4. Under Networking, choose 'Manage networking'\n5. Select the 'Private' radio button\n6. Click on 'Save changes'.
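The same change can be made through the EKS API; the boto3 sketch below assumes a hypothetical cluster name and disables public endpoint access while enabling private access.

```python
import boto3

eks = boto3.client("eks")

# Hypothetical cluster name used for illustration.
response = eks.update_cluster_config(
    name="my-cluster",
    resourcesVpcConfig={
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)

# The update is asynchronous; track it by its update ID.
print(response["update"]["id"], response["update"]["status"])
```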
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = keys[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is true```
Azure Key Vault Key has no expiration date (RBAC Key vault) This policy identifies Azure Key Vault keys in RBAC Key Vaults that do not have an expiration date. As a best practice, set an expiration date for each key and rotate your keys regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].id' | xargs -I {} az role assignment create --assignee "<Object ID of Prisma Cloud Principal>" --role "Key Vault Reader" --scope {} This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Select 'All services' > 'Key vaults'.\n3. Select the Key vault where the key is stored.\n4. Select 'Keys', and select the key that you need to modify.\n5. Select the current version.\n6. Set the expiration date.\n7. 'Save' your changes.
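An expiration date can also be set with the Key Vault keys SDK; the sketch below assumes a hypothetical vault URL and key name and sets the key to expire one year from now.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

# Hypothetical vault URL and key name used for illustration.
client = KeyClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Set the key to expire one year from now.
expires_on = datetime.now(timezone.utc) + timedelta(days=365)
updated = client.update_key_properties("my-key", expires_on=expires_on)
print(updated.properties.expires_on)
```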
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = "$.serverBlobAuditingPolicy.properties.retentionDays does not exist or $.serverBlobAuditingPolicy.properties.state equals Disabled"```
Azure SQL Server auditing is disabled Audit logs can help you find suspicious events, unusual activity, and trends to analyze database events. Auditing the SQL Server at the server level enables you to track all new and existing databases on the server. This policy identifies SQL servers that do not have auditing enabled. As a best practice, enable auditing on each SQL server so that the databases are audited, regardless of the database auditing settings. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal.\n2. Select 'SQL servers', and select the SQL server instance you want to modify.\n3. Select 'Auditing', and set the status to 'On'.\n4. 'Save' your changes.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-securityhub-hub' AND json.rule = SubscribedAt exists as X; count(X) less than 1```
AWS Security Hub is not enabled This policy identifies AWS regions in which AWS Security Hub is not enabled. AWS Security Hub is a centralized security management service by Amazon Web Services, providing a comprehensive view of your security posture and automating security checks across AWS accounts. Failure to enable AWS Security Hub in all regions may lead to limited visibility and compromised threat detection across your AWS environment. It is recommended to enable AWS Security Hub in all regions for consistent visibility and enhanced threat detection across your AWS environment. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable AWS Security Hub, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the navigation panel on the left, select 'All services' and under 'Security, Identity, & Compliance', select 'Security Hub'\n4. When you open the Security Hub console for the first time, choose 'Go to Security Hub'\n5. On the welcome page, the 'Security standards' section lists the security standards that Security Hub supports\n6. Select the check box for a standard to enable it\n7. Choose 'Enable Security Hub'.
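Security Hub can also be enabled per region through the API; the boto3 sketch below enables it (along with the default standards) in one assumed region, and the call raises ResourceConflictException when the hub is already enabled there.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical target region used for illustration.
securityhub = boto3.client("securityhub", region_name="us-east-1")

try:
    securityhub.enable_security_hub(EnableDefaultStandards=True)
    print("Security Hub enabled")
except ClientError as error:
    if error.response["Error"]["Code"] == "ResourceConflictException":
        print("Security Hub is already enabled in this region")
    else:
        raise
```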
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.encryption.status equal ignore case disabled```
test c p This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-list' as X; config from cloud.resource where api.name = 'gcloud-dns-policy' as Y; filter 'not($.Y.networks[*].networkUrl contains $.X.name and $.Y.enableLogging is true)'; show X;```
GCP VPC network not configured with DNS policy with logging enabled This policy identifies GCP VPC networks that are not configured with a DNS policy that has logging enabled. Monitoring of Cloud DNS logs provides visibility into the DNS names requested by clients within the VPC. These logs can be monitored for anomalous domain names and evaluated against threat intelligence. It is recommended to enable DNS logging for all the VPC networks. Note: For full capture of DNS traffic, the firewall must block egress UDP/53 (DNS) and TCP/443 (DNS over HTTPS) to prevent clients from using an external DNS name server for resolution. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To add a DNS server policy with logging to a VPC network:\n\n1. Log in to the GCP console\n2. Navigate to the 'VPC network' service (left panel)\n3. Click on the alerting VPC network\n4. Click on 'EDIT'\n5. Under the 'DNS server policy' dropdown, select an available server policy or 'Create a new server policy' as required\nLink: https://cloud.google.com/dns/docs/policies#creating\n6. Click on 'SAVE'\n\nTo enable logging on a DNS policy that is attached to a VPC:\n\n1. Log in to the GCP console\n2. Navigate to the 'VPC network' service (left panel)\n3. Click on the alerting VPC network\n4. Click on the attached 'DNS server policy'\n5. Click on 'EDIT POLICY'\n6. Under the 'Logs' section select 'On'\n7. Click on 'SAVE'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = (properties.roleDefinition.properties.type equals CustomRole and (properties.roleDefinition.properties.permissions[?any((actions[*] equals Microsoft.Authorization/locks/delete and actions[*] equals Microsoft.Authorization/locks/read and actions[*] equals Microsoft.Authorization/locks/write) or actions[*] equals Microsoft.Authorization/locks/*)] exists) and (properties.roleDefinition.properties.permissions[?any(notActions[*] equals Microsoft.Authorization/locks/delete or notActions[*] equals Microsoft.Authorization/locks/read or notActions[*] equals Microsoft.Authorization/locks/write or notActions[*] equals Microsoft.Authorization/locks/*)] does not exist)) as X; count(X) less than 1```
liron test custom policy #3 run + build policy This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sqs-get-queue-attributes' AND json.rule = attributes.Policy.Statement[?any(Effect equals Allow and (Action anyStartWith sqs: or Action anyStartWith SQS:) and (Principal.AWS contains * or Principal equals *) and Condition does not exist)] exists```
AWS SQS queue access policy is overly permissive This policy identifies Simple Queue Service (SQS) queues that have an overly permissive access policy. It is highly recommended to use a least-privileged access policy to protect the SQS queue from data leakage and unauthorized access. For more details: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to the Simple Queue Service (SQS) dashboard\n4. Choose the reported Simple Queue Service (SQS) queue and choose 'Edit'\n5. Scroll to the 'Access policy' section\n6. Edit the access policy statements in the input box; make sure the 'Principal' is not set to '*', which makes your SQS queue accessible to anonymous users\n7. When you finish configuring the access policy, choose 'Save'.
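A policy statement like the ones this check flags can also be spotted programmatically; the boto3 sketch below assumes a hypothetical queue URL and prints any Allow statement that grants access to a wildcard principal without a Condition.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL used for illustration.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

attributes = sqs.get_queue_attributes(QueueUrl=queue_url, AttributeNames=["Policy"])
policy = json.loads(attributes.get("Attributes", {}).get("Policy", "{}"))

for statement in policy.get("Statement", []):
    principal = statement.get("Principal", {})
    aws_principal = principal if isinstance(principal, str) else principal.get("AWS", "")
    is_wildcard = "*" in str(aws_principal)
    if statement.get("Effect") == "Allow" and is_wildcard and "Condition" not in statement:
        print("Overly permissive statement:", statement.get("Sid", "<no Sid>"))
```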
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-user' AND json.rule = 'accessKeys[*] size > 1 and accessKeys[*].status all equal Active'```
Alibaba Cloud RAM user has more than one active access keys This policy identifies Resource Access Management (RAM) users who have more than one active access key. RAM users having more than one access key face increased chances of accidental exposure. As a best security practice, it is recommended to delete unused access keys. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Users'\n4. Select the reported user\n5. In the 'Authentication' tab, go to 'User AccessKeys'\n6. In the list of access keys, make a note of the access keys that are unused or not required as per your requirements\n7. Click on 'Delete'\n8. On the 'Delete AccessKey' popup window, select 'I am aware of the risk and confirm the deletion' and click on 'Close'.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(5432,5432)"```
Alibaba Cloud Security group allow internet traffic to PostgreSQL port (5432) This policy identifies Security groups that allow inbound traffic on PostgreSQL port (5432) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 5432, and click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "containers-kubernetes" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","resourceType","serviceInstance","namespace"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```
IBM Cloud Service ID with IAM policies provide administrative privileges for Kubernetes Service This policy identifies IBM Cloud Service IDs with an overly permissive Kubernetes Administrator role. When a Service ID that has a policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant least-privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the reported Service ID that you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section, and click on the three dots on the right corner of the row for the policy that has Administrator permission on 'Kubernetes Service'.\n5. Click on Remove OR Edit to assign limited permissions to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.