```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-compute' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.properties.subnet.id does not exist``` | Azure Machine Learning compute instance not configured inside virtual network
This policy identifies Azure Machine Learning compute instances that are not configured within a virtual network.
Azure Machine Learning compute instances outside a Virtual Network are exposed to external threats, as they may be publicly accessible. Placing the instance within a Virtual Network improves security by limiting access to trusted virtual machines and services within the same network. This ensures secure communication and blocks unauthorized public access.
As a security best practice, it is recommended to deploy the Azure Machine Learning compute instances inside a virtual network.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: Configuring an existing Azure Machine Learning compute instance inside a Virtual Network without deleting and recreating it is not supported. To ensure security, it is recommended to set up the compute instance within a Virtual Network from the start.\n\nTo create a new compute instance inside a Virtual Network:\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported compute instance is associated with\n4. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n5. A new tab will open for Azure ML Studio\n6. In the left panel, under the 'Manage' section, click on 'Compute'\n7. Click 'New' to create a new compute instance\n8. In the 'Security' tab, under the 'Virtual network' section, enable the 'Enable virtual network' option to configure it within a Virtual Network\n9. Select 'Review + Create' to create the compute instance. |
```config from cloud.resource where api.name = 'aws-waf-classic-global-web-acl-resource' as X; config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = webACLId is not empty as Y; filter '$.X.webACL.webACLId equals $.Y.webACLId'; show Y;``` | AWS CloudFront not configured with AWS Web Application Firewall v2 (AWS WAFv2)
This policy identifies AWS CloudFront which is not configured with AWS Web Application Firewall v2 (AWS WAFv2). As a best practice, configure the AWS WAFv2 service on the CloudFront to protect against application-layer attacks. To block malicious requests to your CloudFront, define the block criteria in the WAFv2 web access control list (web ACL).
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On the 'General' tab, click on the 'Edit' button under 'Settings'\n5. On the 'Edit Distribution' page, from the 'AWS WAF Web ACL' dropdown select the WAFv2 ACL which you want to apply\nNote: If no WAFv2 ACL is found in the 'AWS WAF Web ACL' dropdown list, follow the URL below to create a WAFv2 ACL:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/web-acl-creating.html\n6. Click on 'Save changes'. |
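For automating this association, a minimal boto3 sketch is shown below; the distribution ID and web ACL ARN are hypothetical placeholders. For CloudFront, a WAFv2 web ACL is attached by writing its full ARN into the distribution config's WebACLId field:
```python
import boto3

# Hypothetical distribution ID and WAFv2 web ACL ARN; replace with real values.
DISTRIBUTION_ID = "E1234EXAMPLE"
WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:123456789012:global/webacl/example/abcd1234"

cloudfront = boto3.client("cloudfront")

# Fetch the current distribution config together with its ETag,
# which update_distribution requires as IfMatch.
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]

# For WAFv2 ACLs, WebACLId takes the web ACL's full ARN.
config["WebACLId"] = WEB_ACL_ARN

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    IfMatch=resp["ETag"],
    DistributionConfig=config,
)
```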
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = (sseAlgorithm contains "aws:kms" or sseAlgorithm contains "aws:kms:dsse") and kmsMasterKeyID exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equal ignore case CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals "null")as Y; filter '$.X.kmsMasterKeyID contains $.Y.key.keyArn'; show X;``` | AWS S3 bucket encrypted with Customer Managed Key (CMK) is not enabled for regular rotation
This policy identifies Amazon S3 buckets that use Customer Managed Keys (CMKs) for encryption but are not enabled with key rotation. Amazon S3 bucket encryption key rotation failure can result in prolonged exposure of sensitive data and potential compliance violations. As a security best practice, it is important to rotate these keys periodically. This ensures that if the keys are compromised, the data in the underlying service remains secure with the new keys.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Recommendation:\n\nThe following steps are recommended to enable the automatic rotation of the KMS key used by the S3 bucket\n\n1. Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket reported in the alert.\n3. Click on the 'Properties' tab.\n4. Under the 'Default encryption' section, click on the KMS key link in 'Encryption key ARN'.\n5. Under the 'Key rotation' tab on the KMS key page that opens, enable 'Automatically rotate this CMK every year'.\n6. Click on Save. |
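If you prefer to remediate programmatically, the following boto3 sketch (bucket name hypothetical) resolves the bucket's default-encryption CMK and enables automatic rotation; it assumes a single encryption rule and skips buckets using SSE-S3:
```python
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

# Hypothetical bucket name; replace with the reported bucket.
bucket = "example-bucket"

# Read the bucket's default-encryption settings to find the CMK.
enc = s3.get_bucket_encryption(Bucket=bucket)
rule = enc["ServerSideEncryptionConfiguration"]["Rules"][0]
key_id = rule["ApplyServerSideEncryptionByDefault"].get("KMSMasterKeyID")

# Enable yearly automatic rotation if a customer-managed key is in use.
if key_id:
    status = kms.get_key_rotation_status(KeyId=key_id)
    if not status["KeyRotationEnabled"]:
        kms.enable_key_rotation(KeyId=key_id)
```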
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(4333,4333) or destinationPortRanges[*] contains _Port.inRange(4333,4333) ))] exists``` | Azure Network Security Group allows all traffic on MSQL (TCP Port 4333)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on MSQL (TCP Port 4333). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict MSQL solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and (acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (WriteAcp,Write,FullControl))] exists or acl.grantsAsList[?any(grantee equals AuthenticatedUsers and permission is member of (WriteAcp,Write,FullControl))] exists)) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Put or Action contains s3:Create or Action contains s3:Replicate or Action contains s3:Update or Action contains s3:Delete) and (Condition does not exist))] exists))) and websiteConfiguration does not exist``` | AWS S3 bucket publicly writable
This policy identifies the S3 buckets that are publicly writable via Put/Create/Update/Replicate/Write/Delete bucket operations. These permissions permit anyone, malicious or not, to perform Put/Create/Update/Replicate/Write/Delete operations on your S3 bucket if they can guess its name. The S3 service does not protect the namespace if ACLs and the bucket policy are not handled properly; with this configuration you risk compromising critical data by leaving the S3 bucket publicly writable.
For more details:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If the Access Control List is set to 'Public', follow the steps below\na. Under 'Access Control List', click on 'Everyone' and uncheck all items\nb. Under 'Access Control List', click on 'Authenticated users group' and uncheck all items\nc. Click on Save changes\n6. If the 'Bucket Policy' is set to public, follow the steps below\na. Under 'Bucket Policy', select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to PUT/CREATE/REPLICATE/DELETE objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific PUT/CREATE/REPLICATE/DELETE functions, without the wildcard.\nIf the 'Bucket Policy' is not required, delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating the 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access. |
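A quick programmatic alternative is to enable the bucket's public access block, which overrides public ACLs and public bucket policies; a minimal boto3 sketch with a hypothetical bucket name:
```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; replace with the reported bucket.
# Blocks public ACLs and public policies at the bucket level.
s3.put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```
As with the console steps, verify first that no legitimate public workload depends on the bucket, since this blocks all public access.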
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(5900,5900) or destinationPortRanges[*] contains _Port.inRange(5900,5900) ))] exists``` | Azure Network Security Group allows all traffic on VNC Server (TCP Port 5900)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on VNC Server (TCP Port 5900). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict VNC Server solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_statement_stats or settings.databaseFlags[?any(name contains log_statement_stats and value contains on)] exists)"``` | GCP PostgreSQL instance database flag log_statement_stats is not set to off
This policy identifies PostgreSQL database instances in which database flag log_statement_stats is not set to off. The log_statement_stats flag enables a crude profiling method for logging end-to-end performance statistics of a SQL query. This can be useful for troubleshooting but may increase the number of logs significantly and have performance overhead. It is recommended to set log_statement_stats as off.
Note: The flag 'log_statement_stats' cannot be enabled with other module statistics (log_parser_stats, log_planner_stats, log_executor_stats).
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in the 'Flags' section, choose the flag 'log_statement_stats' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, under 'Customize your instance', in the 'Flags' section choose the flag 'log_statement_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/publicIPAddresses/write" as X; count(X) less than 1``` | Azure Activity log alert for Create or update public IP address rule does not exist
This policy identifies the Azure accounts in which activity log alert for Create or update public IP address rule does not exist.
Creating an activity log alert for create or update public IP address rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity. By enabling this monitoring, you get alerts whenever any changes are made to public IP address rules.
As a best practice, it is recommended to have an activity log alert for the create or update public IP address rule to enhance network security monitoring and detect suspicious activities.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In the 'Create an alert rule' page, choose the Scope as your Subscription and, under the CONDITION section, choose 'Create or Update Public Ip Address (Public Ip Address)'; other fields you can set based on your custom settings\n6. Click on Create. |
```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' and json.rule = blockDeviceMappings[*].ebs.encrypted exists and blockDeviceMappings[*].ebs.encrypted is false``` | Enforce EBS Volume Encryption in EC2 Auto Scaling Configurations
This policy helps ensure that your AWS EC2 Auto Scaling Launch Configurations are using encrypted EBS volumes, which is a crucial security measure to protect sensitive data. By checking for the presence of the Encrypted field and verifying that it is set to false, the policy alerts you to any instances where encryption is not enabled, allowing you to take corrective action and maintain a secure cloud environment. Adhering to this policy helps you comply with best practices and regulatory requirements for data protection in your public cloud deployment.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
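Since no mitigation steps are recorded for this policy, a detection-side sketch may still help; this boto3 scan mirrors the json.rule above by flagging launch configurations whose EBS mappings explicitly set Encrypted to false:
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Page through all launch configurations and flag any EBS mapping
# that explicitly sets Encrypted to False (matching the policy's rule).
paginator = autoscaling.get_paginator("describe_launch_configurations")
for page in paginator.paginate():
    for lc in page["LaunchConfigurations"]:
        for bdm in lc.get("BlockDeviceMappings", []):
            ebs = bdm.get("Ebs", {})
            if ebs.get("Encrypted") is False:
                print(lc["LaunchConfigurationName"], bdm.get("DeviceName"))
```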
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.addonProfiles.httpapplicationrouting.enabled is true or properties.addonProfiles.httpApplicationRouting.enabled is true``` | Azure AKS cluster HTTP application routing enabled
HTTP application routing configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints. While this makes it easy to access applications that are deployed to your Azure AKS cluster, this add-on is not recommended for production use.
This policy checks your AKS cluster HTTP application routing add-on setting and alerts if enabled.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To disable HTTP application routing for your AKS cluster, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/http-application-routing#remove-http-routing. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-group-settings' and json.rule = values[?any( name equals LockoutThreshold and (value greater than 10 or value does not exist))] exists``` | Azure Microsoft Entra ID account lockout threshold greater than 10
This policy identifies if the account lockout threshold for Microsoft Entra ID (formerly Azure AD) accounts is configured to allow more than 10 failed login attempts before the account is locked out.
A high lockout threshold (greater than 10) increases the risk of brute-force or password spray attacks, where attackers can attempt multiple passwords over time without triggering account lockouts, leaving accounts vulnerable to unauthorized access. Setting the lockout threshold to a reasonable value (e.g., less than or equal to 10) balances usability and security by limiting the number of login attempts before an account is locked, reducing exposure to attacks while preventing frequent unnecessary lockouts for legitimate users.
As a security best practice, it is recommended to configure the account lockout threshold to less than or equal to 10.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under Manage, select Security\n4. Under Manage, select Authentication methods\n5. Under Manage, select Password protection\n6. Set the 'Lockout threshold' to 10 or fewer\n7. Click 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equal ignore case Succeeded AND (properties.enableRbacAuthorization does not exist or properties.enableRbacAuthorization is false)``` | Azure Key Vault Role Based Access control is disabled
This policy identifies Azure Key Vault instances where Role-Based Access Control (RBAC) is not enabled.
Without RBAC, managing access is less secure and can lead to improper access permissions, increasing the risk of unauthorized access to sensitive data. RBAC provides finer-grained access control, enabling secure and manageable permissions for key vault secrets, keys, and certificates. This allows for detailed permissions and the use of privileged identity management for enhanced security with Just-In-Time (JIT) access management.
As a best practice, it is recommended to enable RBAC for all Azure Key Vaults to ensure secure and manageable access control.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: Setting the Azure RBAC permission model invalidates all access policies permissions. It can cause outages when equivalent Azure roles aren't assigned.\n\n1. Login to the Azure portal\n2. Select 'All services' > 'Key vaults'\n3. Select the reported Key vault\n4. Select 'Access configuration' under the 'Settings' section\n5. Select 'Azure role-based access control' under 'Permission model' and click 'Apply' at the bottom of the page\n6. Next assign a Role to grant access to the Key vault\n - Select 'Access control (IAM)' from the left panel\n - Open the 'Add role assignment' pane\n - Select the appropriate role under 'Role' (e.g., 'Key Vault Contributor')\n - Assign the role to a user, group, or application by searching for the name or ID under 'Select members'\n - Click 'Review + Assign' to apply the changes. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(10255,10255) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp or IPProtocol contains "all")))] exists as X; config from cloud.resource where api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING as Y; filter '$.X.network contains $.Y.networkConfig.network' ; show X;``` | GCP Firewall rule exposes GKE clusters by allowing all traffic on read-only port (10255)
This policy identifies GCP Firewall rule allowing all traffic on read-only port (10255) which exposes GKE clusters. In GKE, Kubelet exposes a read-only port 10255 which shows the configurations of all pods on the cluster at the /pods API endpoint. GKE itself does not expose this port to the Internet as the default project firewall configuration blocks external access. However, it is possible to inadvertently expose this port publicly on GKE clusters by creating a Google Compute Engine VPC firewall for GKE nodes that allows traffic from all source ranges on all the ports. This configuration publicly exposes all pod configurations, which might contain sensitive information.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: As port 10255 exposes sensitive information of GKE pod configuration, it is recommended to disable this firewall rule. \nOtherwise, remove the overly permissive source IPs by following the steps below:\n\n1. Login to GCP Console\n2. Navigate to 'VPC Network' (Left Panel)\n3. Go to the 'Firewall' section (Left Panel)\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IPs\n7. Click on 'SAVE'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(5500,5500) or destinationPortRanges[*] contains _Port.inRange(5500,5500) ))] exists``` | Azure Network Security Group allows all traffic on VNC Listener (TCP Port 5500)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on VNC Listener TCP port 5500. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict VNC Listener solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-acm-describe-certificate' as Y; filter '($.X.description.listenerDescriptions[*].listener.sslcertificateId contains $.Y.certificateArn and ((_DateTime.ageInDays($.Y.notAfter) > -90 and (_DateTime.ageInDays($.Y.notAfter) < 0 or _DateTime.ageInDays($.Y.notAfter) == 0)) or (_DateTime.ageInDays($.Y.notAfter) > 0)))'; show X;``` | AWS Elastic Load Balancer (ELB) with ACM certificate expired or expiring in 90 days
This policy identifies Elastic Load Balancers (ELB) that use ACM certificates which have expired or will expire within 90 days. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. With ACM you can request a certificate or deploy an existing ACM or external certificate to AWS resources. As a best practice, it is recommended to reimport expiring/expired certificates while preserving the ELB associations of the original certificate.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Under 'Actions' drop-down click on 'Reimport certificate'\n6. On the 'Import a certificate' page:\n6a. For 'Certificate body*', paste the PEM-encoded certificate to import\n6b. For 'Certificate private key*', paste the PEM-encoded, unencrypted private key that matches the SSL/TLS certificate public key\n6c. (Optional) For 'Certificate chain', paste the PEM-encoded certificate chain delivered\n6d. Click Review and import button to continue the process\n7. On the 'Review and import' page, review the imported certificate details then click on 'Import'. |
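The reimport in steps 5-7 can also be scripted. A boto3 sketch is below; the file paths and certificate ARN are hypothetical placeholders. Passing the existing CertificateArn reimports in place, which preserves the ELB association:
```python
import boto3

acm = boto3.client("acm")

# Hypothetical file paths; replace with the renewed PEM-encoded material.
with open("certificate.pem", "rb") as f:
    cert = f.read()
with open("private_key.pem", "rb") as f:
    key = f.read()
with open("chain.pem", "rb") as f:
    chain = f.read()

# Passing the existing CertificateArn reimports in place, so resource
# associations such as ELB listeners are preserved.
acm.import_certificate(
    CertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
    Certificate=cert,
    PrivateKey=key,
    CertificateChain=chain,
)
```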
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled and (properties.ipAllowlist does not exist or properties.ipAllowlist is empty) and properties.hbiWorkspace is true``` | Azure Machine learning workspace configured with high business impact data have unrestricted network access
This policy identifies Azure Machine learning workspaces configured with high business impact data with unrestricted network access.
Overly permissive public network access allows the resource to be reached over the internet using a public IP address; when that resource holds High Business Impact (HBI) data, this could lead to sensitive data exposure.
As a best practice, it is recommended to limit access to your workspace and endpoint to specific public internet IP addresses, ensuring that only authorized entities can access them according to business requirements.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: To restrict internet IP ranges on your existing Machine learning workspace, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link?view=azureml-api-2&tabs=azure-portal#enable-public-access-only-from-internet-ip-ranges-preview. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = rotationEnabled is true and owningService is not member of (appflow, databrew, datasync, directconnect, events, opsworks-cm, rds, sqlworkbench) and rotationRules.automaticallyAfterDays exists and rotationRules.automaticallyAfterDays greater than 90``` | AWS Secrets Manager secret not configured to rotate within 90 days
This policy identifies AWS Secrets Manager secrets that are not configured to automatically rotate within 90 days.
Rotating secrets minimizes the risk of compromised credentials and reduces exposure to potential threats. Failing to rotate secrets increases the risk of security breaches and prolonged exposure to threats.
It is recommended to configure automatic rotation in AWS Secrets Manager to replace long-term secrets with short-term ones, reducing the risk of compromise.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To set up automatic rotation for Amazon RDS, Amazon Aurora, Amazon Redshift, or Amazon DocumentDB secrets, refer to the below link:\n\nhttps://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html\n\nTo set up automatic rotation for non-database AWS Secrets Manager secrets, refer to the below link:\nhttps://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-other.html. |
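For non-database secrets, turning on rotation programmatically might look like the boto3 sketch below; the secret ID and rotation Lambda ARN are hypothetical, and the Lambda must already implement the Secrets Manager rotation contract:
```python
import boto3

sm = boto3.client("secretsmanager")

# Hypothetical secret ID and rotation Lambda ARN; replace with real values.
# AutomaticallyAfterDays of 30 keeps rotation well inside the 90-day window.
sm.rotate_secret(
    SecretId="example-secret",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:example-rotator",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```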
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-organization-asset-group-member' as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/editor or roles[*] contains roles/owner or roles[*] contains roles/appengine.* or roles[*] contains roles/browser or roles[*] contains roles/compute.networkAdmin or roles[*] contains roles/cloudtpu.serviceAgent or roles[*] contains roles/composer.serviceAgent or roles[*] contains roles/composer.ServiceAgentV2Ext or roles[*] contains roles/container.serviceAgent or roles[*] contains roles/dataflow.serviceAgent)' as Y; filter '($.X.groupKey.id contains $.Y.user)'; show Y;``` | pcsup-13966-ss-policy
This is applicable to gcp cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any((name equals SqlServers and properties.pricingTier does not equal Standard) or (name equals CosmosDbs and properties.pricingTier does not equal Standard) or (name equals OpenSourceRelationalDatabases and properties.pricingTier does not equal Standard) or (name equals SqlServerVirtualMachines and properties.pricingTier does not equal Standard))] exists``` | Azure Microsoft Defender for Cloud set to Off for Databases
This policy identifies Microsoft Defender for Cloud configurations in which the Defender setting for Databases is set to Off. Enabling Azure Defender for Cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behaviour analytics. Defender for Databases in Microsoft Defender for Cloud allows you to protect your entire database estate with attack detection and threat response for the most popular database types in Azure. It is highly recommended to enable Azure Defender for Databases.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Set 'Databases' Status to 'On'\n7. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-neptune-db-cluster' AND json.rule = Status contains available and DeletionProtection is false``` | AWS Neptune cluster deletion protection is disabled
This policy identifies AWS Neptune clusters for which deletion protection is disabled. Enabling deletion protection for Neptune clusters prevents irreversible data loss resulting from accidental or malicious operations.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Neptune Dashboard\n4. Select the reported Neptune cluster\n5. Click on 'Modify' from top\n6. Under 'Deletion protection' select 'Enable deletion protection'\n7. Click on 'Continue'\n8. Schedule the modifications and click on 'Modify cluster'. |
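Equivalent remediation via the API, as a boto3 sketch with a hypothetical cluster identifier:
```python
import boto3

neptune = boto3.client("neptune")

# Hypothetical cluster identifier; replace with the reported cluster.
neptune.modify_db_cluster(
    DBClusterIdentifier="example-neptune-cluster",
    DeletionProtection=True,
    ApplyImmediately=True,
)
```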
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains MYSQL and (settings.databaseFlags[?(@.name=='local_infile')] does not exist or settings.databaseFlags[?(@.name=='local_infile')].value equals on)"``` | GCP MySQL instance with local_infile database flag is not disabled
This policy identifies MySQL instances in which the local_infile database flag is not disabled. The local_infile flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the local_infile setting, the server refuses or permits local data loading by clients. Disabling the local_infile flag would disable local data loading by clients that have LOCAL enabled on the client side.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Select the MySQL instance for which you want to enable the database flag from the list\n4. Click 'Edit'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Configuration options'\n6. Click 'Add item', choose the flag 'local_infile' from the drop-down menu and set the value to 'Off'\nOR\nIf 'local_infile' database flag is already set to 'On', from the drop-down menu set the value to 'Off'\n7. Click on 'Save'. |
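A scripted alternative using the Cloud SQL Admin API through the Google API Python client, with hypothetical project and instance names; note that patching settings replaces the databaseFlags list wholesale, so include every flag the instance should keep, not just local_infile:
```python
from googleapiclient import discovery

# Hypothetical project and instance names; replace with real values.
PROJECT, INSTANCE = "example-project", "example-mysql-instance"

sqladmin = discovery.build("sqladmin", "v1beta4")

# instances().patch merges the submitted settings, but the databaseFlags
# list itself is replaced wholesale by whatever is submitted here.
body = {"settings": {"databaseFlags": [{"name": "local_infile", "value": "off"}]}}
request = sqladmin.instances().patch(project=PROJECT, instance=INSTANCE, body=body)
response = request.execute()
```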
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = policy.Statement[*].Principal.AWS exists and policy.Statement[*].Effect contains "Allow"``` | priyanka tst
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_create_hyperion_policy_attack_path_policy_as_child_policies_ss_finding_2
Description-27d6b8cf-e576-4828-b0eb-0c0627c2e05f
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = 'statusEvents[?any(_DateTime.ageInDays(notBefore) > -7 and (_DateTime.ageInDays(notBefore) < 0 or (description exists and description does not contain "Completed")))] exists'``` | AWS EC2 Instance Scheduled Events
This policy identifies your Amazon EC2 instances which have a scheduled event. AWS can schedule events for your instances, such as a reboot, stop/start, or retirement. These events do not occur frequently. If one of your instances will be affected by a scheduled event, AWS sends an email to the email address that's associated with your AWS account prior to the scheduled event, with details about the event, including the start and end date. Depending on the event, you might be able to take action to control the timing of the event. If an AWS scheduled event is planned within the next 7 days, this policy triggers an alert.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To remediate this alert, review and follow the steps at AWS: Scheduled Events for Your Instances as needed.\nFor more info: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html. |
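To enumerate affected instances ahead of the alert, a boto3 sketch that mirrors the policy's rule by listing pending (non-completed) scheduled events:
```python
import boto3

ec2 = boto3.client("ec2")

# List instances with pending scheduled events (reboot, retirement, etc.).
paginator = ec2.get_paginator("describe_instance_status")
for page in paginator.paginate(IncludeAllInstances=True):
    for status in page["InstanceStatuses"]:
        for event in status.get("Events", []):
            # Completed events carry a "Completed" note in the description,
            # matching the json.rule in the policy query above.
            if "Completed" not in event.get("Description", ""):
                print(status["InstanceId"], event["Code"], event.get("NotBefore"))
```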
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_connections')] does not exist or settings.databaseFlags[?(@.name=='log_connections')].value equals off)"``` | GCP PostgreSQL instance database flag log_connections is disabled
This policy identifies PostgreSQL type SQL instances for which the log_connections database flag is disabled. PostgreSQL does not log attempted connections by default. Enabling the log_connections setting will create log entries for each attempted connection as well as successful completion of client authentication which can be useful in troubleshooting issues and to determine any unusual connection attempts to the server.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the PostgreSQL instance ID for which you want to enable the database flag from the list\n4. Click on 'Edit'\nNOTE: If the instance is stopped, You need to START the instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Customize your instance'\n6. To set a flag that has not been set on the instance before, click 'Add FLAG', choose the flag 'log_connections' from the drop-down menu and set the value as 'on'.\n7. If it is already set to 'off' for 'log_connections', from the drop-down menu set the value as 'on'\n8. Click on 'DONE' for the added/edited flag.\n9. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and policyName contains AWSCloudShellFullAccess and (entities.policyRoles[*].roleName exists or entities.policyUsers[*].userName exists or entities.policyGroups[*].groupName exists)``` | AWS IAM AWSCloudShellFullAccess policy is attached to IAM roles, users, or IAM groups
This policy identifies the AWSCloudShellFullAccess policy attached to IAM roles, users, or IAM groups. AWS CloudShell is a convenient way of running CLI commands against AWS services. The 'AWSCloudShellFullAccess' IAM policy, providing unrestricted CloudShell access, poses a risk of data exfiltration, allowing malicious admins to exploit file upload/download capabilities for unauthorized data transfer. As a security best practice, it is recommended to grant least privilege access like granting only the permissions required to perform a task, instead of providing excessive permissions.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the IAM console at https://console.aws.amazon.com/iam/\n2. In the left pane, select Policies\n3. Search for and select AWSCloudShellFullAccess\n4. On the Entities attached tab, for each item, check the box and select Detach. |
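The detach steps can be scripted as below; this boto3 sketch omits pagination for brevity, so accounts with many attachments would need paginators:
```python
import boto3

iam = boto3.client("iam")

# ARN of the AWS-managed AWSCloudShellFullAccess policy.
POLICY_ARN = "arn:aws:iam::aws:policy/AWSCloudShellFullAccess"

# Enumerate everything the managed policy is attached to, then detach it.
entities = iam.list_entities_for_policy(PolicyArn=POLICY_ARN)
for role in entities["PolicyRoles"]:
    iam.detach_role_policy(RoleName=role["RoleName"], PolicyArn=POLICY_ARN)
for user in entities["PolicyUsers"]:
    iam.detach_user_policy(UserName=user["UserName"], PolicyArn=POLICY_ARN)
for group in entities["PolicyGroups"]:
    iam.detach_group_policy(GroupName=group["GroupName"], PolicyArn=POLICY_ARN)
```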
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equals Running and ((config.javaVersion exists and config.javaVersion does not equal 1.8 and config.javaVersion does not equal 11 and config.javaVersion does not equal 17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JAVA and (config.linuxFxVersion contains 8 or config.linuxFxVersion contains 11 or config.linuxFxVersion contains 17) and config.linuxFxVersion does not contain 8-jre8 and config.linuxFxVersion does not contain 11-java11 and config.linuxFxVersion does not contain 17-java17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JBOSSEAP and config.linuxFxVersion does not contain 7-java8 and config.linuxFxVersion does not contain 7-java11 and config.linuxFxVersion does not contain 7-java17) or (config.linuxFxVersion contains TOMCAT and config.linuxFxVersion does not end with 10.0-jre8 and config.linuxFxVersion does not end with 9.0-jre8 and config.linuxFxVersion does not end with 8.5-jre8 and config.linuxFxVersion does not end with 10.0-java11 and config.linuxFxVersion does not end with 9.0-java11 and config.linuxFxVersion does not end with 8.5-java11 and config.linuxFxVersion does not end with 10.0-java17 and config.linuxFxVersion does not end with 9.0-java17 and config.linuxFxVersion does not end with 8.5-java17))'``` | Bobby run and build
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action equals lambda:* or Action[*] equals lambda:*) and (Resource equals * or Resource[*] equals *) and Condition does not exist)] exists``` | AWS IAM policy overly permissive to Lambda service
This policy identifies the IAM policies that are overly permissive to the Lambda service. AWS provides serverless computational functionality through their Lambda service. Serverless functions allow organizations to run code for applications or backend services without provisioning virtual machines or management servers. It is recommended to follow the principle of least privilege, granting only restricted Lambda permissions on restricted resources.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'IAM' service\n3. Click on 'Policies' in the left hand panel and click on the reported IAM policy\n4. Under the Permissions tab, change the policy document to be more restrictive so that it only allows restricted Lambda permissions on selected resources instead of wildcards (lambda:* and Resource: *), OR put a condition statement with least privilege access. |
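As an illustration of step 4, a hypothetical least-privilege replacement statement: specific Lambda actions scoped to a specific function ARN instead of lambda:* on Resource "*" (the function name and account ID are placeholders):
```python
import json

# A hypothetical least-privilege statement: specific Lambda actions on a
# specific function ARN instead of lambda:* on Resource "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction", "lambda:GetFunction"],
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example-fn",
        }
    ],
}
print(json.dumps(policy, indent=2))
```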
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case "/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace"``` | test again - delete it
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-composer-environment' AND json.rule = state equals "RUNNING" and config.webServerNetworkAccessControl.allowedIpRanges[?any( value equals "0.0.0.0/0" or value equals "::0/0" )] exists ``` | GCP Composer environment web server network access control allows access from all IP addresses
This policy identifies GCP Composer environments with web server network access control that allows access from all IP addresses.
Web server network access control defines which IP addresses will have access to the Airflow web server. By default, web server network access control is set to allow all connections from the public internet. Allowing all traffic to the composer environment may allow a bad actor to brute force their way into the system and potentially get access to the entire network.
As a best practice, restrict traffic solely from known IP addresses and limit access to known hosts, services, or specific entities only.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure web server network access of an existing Cloud Composer 1 and Cloud Composer 2 environments, follow the steps given below:\n1. Login to the GCP console\n2. Navigate to the 'Composer' service (Left Panel)\n3. Click on the alerting composer environment\n4. Click on the 'ENVIRONMENT CONFIGURATION' tab\n5. Under 'Network configuration', click on the 'EDIT' button for the 'Web server access control' setting\n6. Select 'Allow access only from specific IP addresses'\n7. Add the desired IPs and IP ranges to be allowed.\n8. Click the 'Save' button.\n\nTo configure web server network access of a new Cloud Composer 1 environment, please refer to the URLs given below:\nhttps://cloud.google.com/composer/docs/how-to/managing/creating#web-server-access\n\nTo configure web server network access of a new Cloud Composer 2 environment, please refer to the URLs given below:\nhttps://cloud.google.com/composer/docs/composer-2/create-environments#web-server-access\n\nNote: Cloud Composer 1 is nearing the end of support. The creation of new Cloud Composer 1 environments might be restricted. Further, updates to the existing Cloud Composer 1 environment may be restricted. In such cases, it is recommended to migrate to Cloud Composer 2. To migrate to Cloud Composer 2, please refer to the URLs given below and configure web server network access to limit the access for the new environment:\nhttps://cloud.google.com/composer/docs/migrate-composer-2-snapshots-af-2. |
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule='loggingConfiguration.targetBucket equals null or loggingConfiguration.targetPrefix equals null'``` | Copy 2 of Bobby Copy of AWS Access logging not enabled on S3 buckets
Checks for S3 buckets without access logging turned on. Access logging allows customers to view a complete audit trail on sensitive workloads such as S3 buckets. It is recommended that access logging is turned on for all S3 buckets to meet audit and compliance requirements.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select the 'Enable logging' option. |
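The same setting via the API, as a boto3 sketch with hypothetical bucket names; the target bucket must already permit the S3 logging service to write to it:
```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names; the target bucket must grant the S3 logging
# service permission to deliver logs (e.g. via its bucket policy).
s3.put_bucket_logging(
    Bucket="example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",
            "TargetPrefix": "access-logs/example-bucket/",
        }
    },
)
```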
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = '((_DateTime.ageInDays($.properties.updatedOn) < 60) and (properties.principalType contains User))'``` | llatorre - RoleAssigment v5
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = $.networkConfig.enableIntraNodeVisibility does not exist or $.networkConfig.enableIntraNodeVisibility is false``` | GCP Kubernetes cluster intra-node visibility disabled
With Intranode Visibility, all network traffic in your cluster is seen by the Google Cloud Platform network. This means you can see flow logs for all traffic between Pods, including traffic between Pods on the same node, and you can create firewall rules that apply to all traffic between Pods.
This policy checks your cluster's intra-node visibility feature and generates an alert if it's disabled.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Upgrade your cluster to use Intranode Visibility.\n\n1. Visit the Google Kubernetes Engine menu in GCP Console.\n2. Click the cluster's Edit button, which looks like a pencil.\n3. Select Enabled under Intranode visibility.\n4. Click Save to modify the cluster. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-neptune-db-cluster' AND json.rule = Status equals "available" and (BackupRetentionPeriod does not exist or BackupRetentionPeriod less than 7)``` | AWS Neptune DB clusters have backup retention period less than 7 days
This policy identifies Amazon Neptune DB clusters lacking sufficient backup retention tenure.
AWS Neptune DB is a fully managed graph database service. The backup retention period denotes the duration for storing automated backups of the Neptune DB clusters. Inadequate retention periods heighten the risk of data loss and compliance violations, and hinder effective recovery during security breaches or system failures.
It is recommended to ensure a backup retention period of at least 7 days or according to your business and compliance requirements.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To modify an Amazon Neptune DB cluster's backup retention period, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the region from the dropdown in the top right corner where the alert is generated\n3. In the Navigation Panel on the left, under 'Database', select 'Neptune'\n4. Under 'Databases', select 'Clusters' and choose the reported cluster name\n5. Click 'Modify' from the top right corner\n6. Under the 'Additional settings' section, click the 'Show more' dropdown\n7. Select the desired backup retention period in days from the 'Backup retention period' drop-down menu based on your business or compliance requirements\n8. Click 'Next' to review the summary of your changes\n9. Choose either 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your scheduling preference\n10. Click on 'Submit' to implement the changes. |
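The console steps map to a single API call; a boto3 sketch with a hypothetical cluster identifier:
```python
import boto3

neptune = boto3.client("neptune")

# Hypothetical cluster identifier; 7 days matches the policy's minimum.
neptune.modify_db_cluster(
    DBClusterIdentifier="example-neptune-cluster",
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)
```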
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(3389,3389)"``` | Alibaba Cloud Security group allow internet traffic to RDP port (3389)
This policy identifies Security groups that allow inbound traffic on RDP port (3389) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 3389, and click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'status contains VALIDATION_TIMED_OUT or status contains FAILED'``` | AWS Certificate Manager (ACM) has invalid or failed certificate
This policy identifies certificates in ACM which are either in an Invalid or Failed state. If the ACM certificate is not validated within 72 hours, it becomes Invalid. An ACM certificate fails when:
- the certificate is requested for invalid public domains
- the certificate is requested for domains which are not allowed
- missing contact information
- typographical errors
In such cases (Invalid or Failed certificate), you will have to request a new certificate. It is strongly recommended to delete certificates which are in a failed or invalid state.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To delete Certificates: \n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Under 'Actions' drop-down click on 'Delete'. |
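Cleanup can be batched with boto3, as sketched below; note that delete_certificate fails for certificates still associated with resources, so detach them first:
```python
import boto3

acm = boto3.client("acm")

# List certificates stuck in a failed or timed-out state, then delete them.
# delete_certificate raises an error for certificates still in use.
paginator = acm.get_paginator("list_certificates")
pages = paginator.paginate(CertificateStatuses=["VALIDATION_TIMED_OUT", "FAILED"])
for page in pages:
    for summary in page["CertificateSummaryList"]:
        acm.delete_certificate(CertificateArn=summary["CertificateArn"])
```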
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'resourceLabels does not exist or resourceLabels.[*] is empty'``` | GCP Kubernetes Engine Clusters without any label information
This policy identifies all Kubernetes Engine Clusters which do not have labels. Having a cluster label helps you identify and categorize Kubernetes clusters.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. From the list of clusters, choose the reported cluster\n5. Click on 'SHOW INFO PANEL' button\n6. Click on 'Add Label'\n7. Specify customized data for Key and Value\n8. Click on Save. |
```config from cloud.resource where api.name = 'azure-sql-db-list' AND json.rule = blobAuditPolicy.properties.state equals Disabled or blobAuditPolicy does not exist or blobAuditPolicy is empty as X; config from cloud.resource where api.name = 'azure-sql-server-list' AND json.rule = serverBlobAuditingPolicy.properties.state equals Disabled or serverBlobAuditingPolicy does not exist or serverBlobAuditingPolicy is empty as Y; filter '$.X.blobAuditPolicy.id contains $.Y.sqlServer.name'; show X;``` | Azure SQL database auditing is disabled
This policy identifies SQL databases in which auditing is set to Off. Database events are tracked by the Auditing feature and the events are written to an audit log in your Audit log destinations. This process helps you to monitor database activity, and get insight into anomalies that could indicate business concerns or suspected security violations.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If server auditing is enabled, it always applies to the database. The database will be audited, regardless of the database auditing settings. It is recommended that you enable only server-level auditing and leave the database-level auditing disabled for all databases.\n\nTo enable auditing at server level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database and SQL server\n3. Select 'SQL servers', Click on the SQL server instance you wanted to modify\n4. Select 'Auditing' under 'Security' section, and set the status to 'On' and choose any Audit log destinations.\n5. Click on 'Save'\n\nIt is recommended to avoid enabling both server auditing and database blob auditing together, unless:\nIf you want to use a different storage account, retention period or Log Analytics Workspace for a specific database or want to use for audit event types or categories for a specific database that differ from the rest of the databases on the server.\nTo enable auditing at database level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database\n3. Select 'SQL databases', Click on the SQL database instance you wanted to modify\n4. Select 'Auditing' under 'Security' section, and set the status to 'On' and choose any Audit log destinations.\n5. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-ecs-cluster' and json.rule = configuration.executeCommandConfiguration.logConfiguration.cloudWatchEncryptionEnabled exists and configuration.executeCommandConfiguration.logConfiguration.cloudWatchEncryptionEnabled is false``` | ECS Cluster CloudWatch Logs Encryption Disabled
This policy alerts you when an AWS ECS cluster is configured with CloudWatch logs encryption disabled, potentially exposing sensitive information. By enforcing encryption on CloudWatch logs, you can enhance the security of your data and maintain compliance with regulatory requirements. Ensure that you enable encryption for CloudWatch logs to protect your ECS cluster from unauthorized access and safeguard your critical information.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
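No console walkthrough is given for this row, so as one hedged option, the cluster's execute-command log configuration can be updated through the SDK. A boto3 sketch; the cluster name, log group, and KMS key ARN are placeholders, and the target log group is assumed to be KMS-encrypted:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Placeholders: cluster name, log group, and KMS key ARN.
# cloudWatchEncryptionEnabled expects the log group itself to be
# encrypted with a KMS key.
ecs.update_cluster(
    cluster="my-cluster",
    configuration={
        "executeCommandConfiguration": {
            "kmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
            "logging": "OVERRIDE",
            "logConfiguration": {
                "cloudWatchLogGroupName": "/ecs/exec-logs",
                "cloudWatchEncryptionEnabled": True,
            },
        }
    },
)
```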
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' and json.rule = terminationProtected exists and terminationProtected is false``` | EMR Cluster Termination Protection Enforcement
This policy alerts you when an AWS Elastic MapReduce (EMR) cluster is configured without termination protection, which could potentially expose your cluster to accidental terminations or unauthorized changes. By enabling termination protection, you can safeguard your EMR clusters against unintended shutdowns and ensure the continuity of your data processing tasks, thereby enhancing the overall security and reliability of your cloud environment.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
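No console walkthrough is given for this row either; a minimal boto3 sketch, with a placeholder cluster ID:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# 'j-EXAMPLE' is a placeholder for the reported cluster ID.
emr.set_termination_protection(
    JobFlowIds=["j-EXAMPLE"],
    TerminationProtected=True,
)
```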
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "secrets-manager" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","resourceType","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;``` | IBM Cloud user with IAM policies provide administrative privileges for Secrets Manager service
This policy identifies IBM Cloud users with administrator role permission for the Secrets Manager service. Users with admin access will be able to perform all platform tasks for Secrets Manager, including the creation, modification, and deletion of Secrets Manager service instances, as well as the assignment of access policies to other users. If a user with administrative rights over Secrets Manager is compromised, sensitive data in the underlying service might be exposed. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', then click on 'Users' in the left panel.\n3. Select the reported user whose access you want to edit.\n4. Go to the 'Access' tab and, under the 'Access policies' section, click on the three dots on the right corner of the row for the policy that has Administrator permission on the 'Secrets Manager' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove. |
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 445 or fromPort == 445) or (toPort > 445 and fromPort < 445)))] exists)``` | AWS Security Group allows all ingress traffic on CIFS port (445)
This policy identifies AWS Security groups that allow all traffic on port 445 used by Common Internet File System (CIFS).
Common Internet File System (CIFS) is a network file-sharing protocol that allows systems to share files over a network. Unrestricted CIFS access can expose your data to unauthorized users, leading to potential security risks.
It is recommended to restrict CIFS port 445 access to only trusted networks to prevent unauthorized access and data breaches.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To restrict the traffic on the security group to a known IP/CIDR range, perform the following actions:\n\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. On the left-hand panel, click on 'Security Groups' under the 'Security' section\n4. Select the 'Security Group' that is reported\n5. Click on 'Edit Inbound Rules'\n6. In the 'Edit inbound rules' window, remove the rule or restrict its CIDR to trusted IPs for any rule that has a 'Source' value of 0.0.0.0/0 or ::/0 and a 'Port Range' value of 445 (or a range containing 445)\n7. Click 'Save rules' to save\n\nNote: Before making any changes, please check the impact on your applications/services. |
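The rule removal in step 6 can also be scripted. A boto3 sketch with a placeholder security group ID; it revokes only the open CIFS rule, leaving the rest of the group intact:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 'sg-0123456789abcdef0' is a placeholder for the reported group.
# The permission block must match the existing rule exactly for the
# revoke call to succeed.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 445,
            "ToPort": 445,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```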
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-mysql-deployment-info' AND json.rule = deployment.platform_options.disk_encryption_key_crn is empty``` | IBM Cloud MySQL Database disk encryption is not enabled with customer managed keys
This policy identifies IBM Cloud MySQL Databases with default disk encryption. Using customer managed keys gives you significantly more control, as the keys are managed by you. It is recommended to use customer managed keys for disk encryption, which provides control over the lifecycle of the keys.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: MySQL database disk encryption can be enabled with customer managed keys only at the time of creation.\n\nUse the link below to grant the MySQL service authorization to the KMS service, if not already authorized:\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-key-protect&interface=ui#granting-service-auth\n\nUse the link below to provision a KMS instance with a key to use for encryption, if not already provisioned:\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-getting-started-tutorial#create-keys\n\nFollow the steps below to create a new MySQL deployment from a backup of the vulnerable MySQL deployment:\n1. Log in to the IBM Cloud Console\n2. Click on the 'Menu Icon' and navigate to 'Resource list'; from the list of resources select the MySQL database reported in the alert.\n3. In the left navigation pane, navigate to 'Backups and restore'; under the 'Available Backups' section click on 'Create backup' to get the latest backup of the database.\n4. Under the 'Available Backups' tab, click on the three dots on the right corner of the row containing the latest backup and click on 'Restore backup'.\n5. On the 'Create a new Database for MySQL from backup' page, select all the configuration as per your requirements.\n6. Under the 'Encryption' section, under 'KMS Instance', select a KMS instance and a key from the instance to use for encryption.\n7. Click on 'Restore backup'.\n\nFollow the steps below to delete the reported database deployment:\n1. Log in to the IBM Cloud Console\n2. Click on the 'Menu Icon' and navigate to 'Resource list'.\n3. Select your deployment. Next, using the stacked three-dot menu icon, choose Delete from the drop list. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = '(managedBy does not exist or managedBy is empty) and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of ("EncryptionAtRestWithCustomerKey", "EncryptionAtRestWithPlatformAndCustomerKeys")'``` | Azure disk is unattached and is encrypted with the default encryption key instead of ADE/CMK
This policy identifies disks that are unattached and are encrypted with default encryption instead of ADE/CMK. Azure encrypts disks by default using Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK]. It is recommended to use either SSE with Azure Disk Encryption [SSE with PMK+ADE] or a Customer Managed Key [SSE with CMK], which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: If data stored in the disk is no longer useful, refer to Azure documentation to delete unattached data disks at:\nAPI: https://docs.microsoft.com/en-us/rest/api/compute/disks/delete\nCLI: https://docs.microsoft.com/en-us/cli/azure/disk?view=azure-cli-latest#az-disk-delete\n\nIf data stored in the disk is important, To enable SSE with Azure Disk Encryption [SSE with PMK+ADE] disk needs to be attached to VM.\nFollow https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-prerequisites based VM the data disk is assigned. Once encryption is done, Un-attach the disk form the VM using azure portal / CLI.\n\nTo enable SSE with Customer Managed Key [SSE with CMK],\nFollow https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal?toc=/azure/virtual-machines/linux/toc.json&bc=/azure/virtual-machines/linux/breadcrumb/toc.json#enable-on-an-existing-disk. |
```config from cloud.resource where api.name = 'ibm-vpc-network-vpn-gateway' AND json.rule = status equal ignore case available as X; config from cloud.resource where api.name = 'ibm-vpc-network-vpn-ipsec-policy' AND json.rule = pfs equals disabled as Y; filter '$.X.connections[*].id contains $.Y.connections[*].id'; show X;``` | IBM Cloud VPN Connections for VPC has an IPsec policy that have Perfect Forward Secrecy (PFS) disabled
This policy identifies IBM Cloud VPN gateways with connections whose IPsec policy has Perfect Forward Secrecy disabled. Perfect Forward Secrecy is an encryption system that changes the keys used to encrypt and decrypt information frequently and automatically. This ensures that derived session keys are not compromised if one of the private keys is compromised in the future. It is recommended to enable Perfect Forward Secrecy.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'VPNs'\n3. Select 'Site-to-site gateways' and select the gateway reported in the alert.\n4. In the Gateway 'Overview' page, under 'VPN connections', note down the 'IPsec policy' name for each connection\n5. From the left navigation pane select 'VPNs', and under 'Site-to-site gateways' select 'IPsec policies'.\n6. Select the required region, and perform the steps below for all the IPsec policies noted down above.\n7. For each policy click on the 'ellipsis' menu icon on the right and select 'Edit'.\n8. In the 'Edit IPsec policy' page, slide the 'Perfect Forward Secrecy' feature to enabled.\n9. Click on 'Save'. |
```config from cloud.resource where api.name = 'azure-recovery-service-backup-protected-item' AND json.rule = properties.workloadType equal ignore case VM as X; config from cloud.resource where api.name = 'azure-vm-list' AND json.rule = powerState contains running as Y; filter 'not $.Y.id equal ignore case $.X.properties.virtualMachineId'; show Y;``` | Azure Virtual Machine not protected with Azure Backup
This policy identifies Azure Virtual Machines that are not protected by Azure Backup.
Without Azure Backup, VMs are at risk of data loss due to accidental deletion, corruption, or ransomware attacks. Unprotected VMs may also not comply with organizational data retention policies and regulatory requirements.
As a best practice, it is recommended to configure Azure Backup for all VMs to ensure data protection and enable recovery options in case of unexpected failures or incidents.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Virtual machines'.\n2. Select 'Virtual machines'.\n3. Select the reported Virtual machine.\n4. Under 'Backup + disaster recovery' select 'Backup'.\n5. In the 'Backup' pane, select a 'Recovery Services vault'. If no vault exists, click 'Create new' to make a new vault.\n6. Choose the appropriate 'Policy sub type'. It's recommended to select 'Enhanced'.\n7. Next, select or create a 'Backup Policy' that defines when backups will run and how long they will be kept.\n8. From the 'Disks' dropdown, check all the disks you want to back up. Also, check the 'Include future disks' box to ensure new disks are automatically included.\n9. Click 'Enable Backup'. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and ($.X.filterPattern does not contain "userIdentity.type!=" or $.X.filterPattern does not contain "userIdentity.type !=") and ($.X.filterPattern contains "userIdentity.type =" or $.X.filterPattern contains "userIdentity.type=") and ($.X.filterPattern contains "userIdentity.invokedBy NOT EXISTS") and ($.X.filterPattern contains "eventType!=" or $.X.filterPattern contains "eventType !=") and ($.X.filterPattern contains root or $.X.filterPattern contains Root) and ($.X.filterPattern contains AwsServiceEvent) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for usage of root account
This policy identifies the AWS regions that do not have a log metric filter and alarm for usage of the root account. Monitoring root account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce its use. Failure to monitor root account logins may result in a lack of visibility into unauthorized use or attempts to access the root account, posing potential security risks to your AWS environment. It is recommended that a metric filter and alarm be established for detecting usage of the root account.
NOTE: This policy will trigger an alert if you have at least one CloudTrail with multi-region trail enabled, which logs all management events in your account, and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. In the 'Define Pattern' page, add the 'Filter pattern' value as\n{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }\nand Click on 'NEXT'.\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html. |
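Steps 4-8 can be scripted as well. A boto3 sketch, assuming placeholder names for the CloudTrail log group, metric, alarm, namespace, and SNS topic:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

FILTER_PATTERN = (
    '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS '
    '&& $.eventType != "AwsServiceEvent" }'
)

# Placeholders: log group, metric/alarm names, namespace, SNS topic ARN.
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="RootAccountUsage",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[
        {
            "metricName": "RootAccountUsageCount",
            "metricNamespace": "CISBenchmark",
            "metricValue": "1",
        }
    ],
)

# Alarm fires whenever the metric records any root-account activity.
cloudwatch.put_metric_alarm(
    AlarmName="RootAccountUsageAlarm",
    MetricName="RootAccountUsageCount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```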
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-containers-artifacts-kubernetes-cluster' AND json.rule = lifecycleState equal ignore case ACTIVE and endpointConfig exists and (endpointConfig.nsgIds does not exist or endpointConfig.nsgIds equal ignore case "null" or endpointConfig.nsgIds is empty)``` | OCI Kubernetes Engine Cluster endpoint is not configured with Network Security Groups
This policy identifies Kubernetes Engine Cluster endpoints that are not configured with Network Security Groups. Network security groups give fine-grained control of resources and help in restricting network access to your cluster node pools. It is recommended to restrict access to the cluster node pools by configuring network security groups.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Developer Services -> Kubernetes Clusters (OKE)\n3. Click on the reported Kubernetes Clusters\n4. Click on 'Edit'\n5. On 'Edit cluster' page, Select the restrictive Network Security Group by selecting 'Use network security groups to control traffic' option under 'Kubernetes API server endpoint' section.\nNOTE: Before you update cluster endpoint with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirement.\n6. Click on 'Save' button. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-block-storage-volume' as X; config from cloud.resource where api.name = 'oci-block-storage-volume-backup' as Y; filter 'not($.X.id equals $.Y.volumeId)'; show X;``` | OCI Block Storage Block Volume is not restorable
This policy identifies the OCI Block Storage Volumes that are not restorable. It is recommended to have backups of each block volume, so that the block volume can be restored during data loss events.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Block Volume Backups from the Resources pane\n5. Click on Create Block Volume Backup (To create the back up). |
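The backup in step 5 can be created with the OCI Python SDK. A sketch assuming default ~/.oci/config authentication and a placeholder volume OCID:

```python
import oci

# Uses the DEFAULT profile in ~/.oci/config; the volume OCID is a placeholder.
config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

blockstorage.create_volume_backup(
    oci.core.models.CreateVolumeBackupDetails(
        volume_id="ocid1.volume.oc1..exampleuniqueID",
        display_name="manual-backup",
        type="FULL",  # or "INCREMENTAL"
    )
)
```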
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Sql/servers/firewallRules/write" as X; count(X) less than 1``` | Azure Activity log alert for Create or update SQL server firewall rule does not exist
This policy identifies the Azure accounts in which activity log alert for Create or update SQL server firewall rule does not exist. Creating an activity log alert for Create or update SQL server firewall rule gives insight into SQL server firewall rule access changes and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create/Update server firewall rule (Microsoft.Sql/servers/firewallRules)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_disconnections')] does not exist or settings.databaseFlags[?(@.name=='log_disconnections')].value equals off)"``` | GCP PostgreSQL instance database flag log_disconnections is disabled
This policy identifies PostgreSQL type SQL instances for which the log_disconnections database flag is disabled. Enabling the log_disconnections setting will create log entries at the end of each session, which can be useful in troubleshooting issues and in determining any unusual activity across a time period.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the PostgreSQL instance ID for which you want to enable the database flag from the list\n4. Click 'Edit'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Configuration options'\n6. To set a flag that has not been set on the instance before, click 'Add item', choose the flag 'log_disconnections' from the drop-down menu and set the value as 'on'.\n7. If it is already set to 'off' for 'log_disconnections', from the drop-down menu set the value as 'on'\n8. Click on 'Save'. |
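The flag can also be set through the Cloud SQL Admin API. A sketch via the discovery client with placeholder project and instance names; note that patching settings.databaseFlags replaces the instance's entire flag list, hence the merge below:

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

# Placeholders for the reported instance.
project, instance = "my-project", "my-postgres-instance"

current = sqladmin.instances().get(project=project, instance=instance).execute()
flags = [
    f for f in current["settings"].get("databaseFlags", [])
    if f["name"] != "log_disconnections"
]
flags.append({"name": "log_disconnections", "value": "on"})

# Patching databaseFlags replaces the full list, so existing flags are kept.
sqladmin.instances().patch(
    project=project,
    instance=instance,
    body={"settings": {"databaseFlags": flags}},
).execute()
```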
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = lifecycleState equal ignore case ACTIVE and capabilities.canUseConsolePassword is true and isMfaActivated is false``` | Copy of OCI MFA is disabled for IAM users
This policy identifies Identity and Access Management (IAM) users for whom Multi Factor Authentication (MFA) is disabled. As a best practice, enable MFA to add an extra layer of protection for increased security of your OCI user’s identity and complete the sign-in process.
This is applicable to oci cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MFA'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from the Services menu\n3. Select Users from the Identity menu.\n4. Click on each non-compliant user.\n5. Click on Enable Multi-Factor Authentication.\n\nNote: The console URL is region specific; your tenancy might have a different home region and thus console URL. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = 'status.recording is true and status.lastStatus equals SUCCESS and recordingGroup.allSupported is true' as X; count(X) less than 1``` | AWS Config Recording is disabled
AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. AWS config uses configuration recorder to detect changes in your resource configurations and capture these changes as configuration items. It continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. This policy generates alerts when AWS Config recorder is not enabled.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down at the top, for which the alert is generated\n3. Navigate to the 'Config' service from the 'Services' dropdown.\nIf AWS Config set up exists,\na. Go to Settings\nb. Click on the 'Turn On' button under the 'Recording is Off' section\nc. Provide the required information for the bucket and a role with proper permission\nIf AWS Config set up doesn't exist\na. Click on 'Get Started'\nb. For Step 1, tick the check box for 'Record all resources supported in this region' under the section 'Resource types to record'\nc. Under the section 'Amazon S3 bucket', select a bucket with permission to Config services\nd. Under the section 'AWS Config role', select a role with permission to Config services\ne. Click on 'Next'\nf. For Step 2, select the required rule and click on 'Next', otherwise click on 'Skip'\ng. For Step 3, review the created 'Settings' and click on 'Confirm'. |
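The "set up exists" branch can be scripted with boto3. A sketch enabling recording of all supported resource types; the recorder name, role ARN, and bucket are placeholders:

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# Placeholders: recorder name, IAM role ARN, and delivery bucket.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",
        "recordingGroup": {
            "allSupported": True,
            "includeGlobalResourceTypes": True,
        },
    }
)
config.put_delivery_channel(
    DeliveryChannel={"name": "default", "s3BucketName": "my-config-bucket"}
)
config.start_configuration_recorder(ConfigurationRecorderName="default")
```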
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(445,445) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on Microsoft-DS port (445)
This policy identifies GCP Firewall rules which allow all inbound traffic on Microsoft-DS port (445). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the Microsoft-DS port (445) should be allowed to specific IP addresses.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the reported Firewall rule indeed needs to restrict traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify 'Source IP ranges' to specific IPs\n7. Click on 'SAVE'. |
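Step 6 can also be done with the google-cloud-compute client. A sketch with placeholder project, rule name, and trusted CIDR; patch only updates the fields that are set:

```python
from google.cloud import compute_v1

firewalls = compute_v1.FirewallsClient()

# Placeholders: project, reported rule name, and your trusted CIDR.
# Only source_ranges is changed; other rule fields are left as-is.
firewalls.patch(
    project="my-project",
    firewall="allow-microsoft-ds",
    firewall_resource=compute_v1.Firewall(source_ranges=["203.0.113.0/24"]),
)
```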
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = 'deleteAutoSnapshot is true'``` | Alibaba Cloud data disk is configured with delete automatic snapshots feature
This policy identifies data disks that are configured with the delete automatic snapshots feature. Disabling the 'Delete Automatic Snapshots While Releasing Disk' feature prevents irreversible data loss resulting from accidental or malicious operations.
This is applicable to alibaba_cloud cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'\n4. Select the reported disk\n5. Select More and click on Modify Disk Property\n6. On Modify Disk Property popup window, Uncheck 'Delete Automatic Snapshots While Releasing Disk' checkbox\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-code-build-project' AND json.rule = not(logsConfig.cloudWatchLogs.status equal ignore case enabled or logsConfig.s3Logs.status equal ignore case enabled)``` | AWS CodeBuild project not configured with logging configuration
This policy identifies AWS CodeBuild project environments without a logging configuration.
AWS CodeBuild is a fully managed service for building, testing, and deploying code. Logging is a crucial security feature that allows for future forensic work in the event of a security incident. Correlating abnormalities in CodeBuild projects with threat detections helps boost confidence in their accuracy.
It is recommended to enable logging configuration on CodeBuild projects for monitoring and troubleshooting purposes.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console. Navigate to the CodeBuild service\n2. In the left navigation pane, select 'Build Projects' under 'Build'\n3. Go to your AWS CodeBuild project\n4. Select the 'Project details' tab, and under the 'Logs' section, select 'Edit'\n5. Under the 'Edit Logs' page, based on the requirement, select either 'CloudWatch logs' or 'S3 logs'\n6. For CloudWatch logging, provide a log group name\n7. For S3 logging, provide the bucket name\n8. Click on 'Update logs'. |
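Steps 4-8 map to a single UpdateProject call. A boto3 sketch enabling CloudWatch logging; the project and log group names are placeholders:

```python
import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

# Placeholders: project name and target CloudWatch log group.
codebuild.update_project(
    name="my-build-project",
    logsConfig={
        "cloudWatchLogs": {
            "status": "ENABLED",
            "groupName": "/codebuild/my-build-project",
        }
    },
)
```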
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-app-engine-application' AND json.rule = servingStatus equals SERVING and (iap does not exist or iap.enabled does not exist or iap.enabled is false)``` | GCP App Engine Identity-Aware Proxy is disabled
This policy identifies GCP App Engine applications for which Identity-Aware Proxy (IAP) is disabled. IAP is used to enforce access control policies for applications and resources. It works with signed headers or the App Engine standard environment Users API to secure your app. It is recommended to enable Identity-Aware Proxy to secure the App Engine app.
Reference: https://cloud.google.com/iap/docs/concepts-overview
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: To enable IAP for a GCP project, follow the steps provided at the link below:\n\nLink: https://cloud.google.com/iap/docs/app-engine-quickstart#enabling_iap. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sqs-get-queue-attributes' AND json.rule = 'attributes.KmsMasterKeyId exists and attributes.KmsMasterKeyId contains alias/aws/sqs'``` | AWS SQS queue encryption using default KMS key instead of CMK
This policy identifies SQS queues which are encrypted with default KMS keys and not with Customer Master Keys (CMKs). It is a best practice to use customer managed Master Keys to encrypt your SQS queue messages, as this gives you full control over the encrypted message data.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to Simple Queue Service (SQS) dashboard\n4. Choose the reported Simple Queue Service (SQS)\n5. Click on 'Queue Actions' and Choose 'Configure Queue' from the dropdown \n6. On 'Configure' popup, Under 'Server-Side Encryption (SSE) Settings' section; Choose an 'AWS KMS Customer Master Key (CMK)' from the drop-down list or copy existing key ARN instead of (Default) alias/aws/sqs key.\n7. Click on 'Save Changes'. |
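The same change via boto3: a sketch switching the queue from the AWS-managed key to a CMK, with a placeholder queue URL and key alias:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Placeholders: queue URL and the customer managed key (alias or ARN).
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes={"KmsMasterKeyId": "alias/my-cmk"},
)
```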
```config from cloud.resource where api.name = 'aws-ec2-elastic-address' and resource.status = Deleted AND json.rule = domain exists``` | Moses Policy Test 3
Test 3
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = "configurations.value[?(@.name=='connection_throttling')].properties.value equals OFF or configurations.value[?(@.name=='connection_throttling')].properties.value equals off"``` | Azure PostgreSQL database server with connection throttling parameter is disabled
This policy identifies PostgreSQL database servers for which the connection_throttling server parameter is not enabled. Enabling connection_throttling temporarily throttles connections from an IP address after too many invalid password login failures, which helps protect against brute-force attempts and against Denial of Service (DoS) attacks that exhaust connection resources. A system can also fail or be degraded by an overload of legitimate users. The related query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to the 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under the 'Settings' block\n5. From the list of parameters find 'connection_throttling' and set it to 'on'\n6. Click on the 'Save' button from the top menu to save the change. |
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or minimumPasswordLength < 14 or minimumPasswordLength does not exist'``` | AWS IAM password policy does not have a minimum of 14 characters
This policy checks that the IAM password policy requires a minimum of 14 characters. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. In the 'Minimum password length' field, put 14 or more (As per preference).\n4. Click on 'Apply password policy'. |
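The policy change maps to one IAM call. A boto3 sketch; only MinimumPasswordLength is required to satisfy this check, the other flags shown are optional hardening:

```python
import boto3

iam = boto3.client("iam")

# Sets the account-wide password policy; 14+ characters satisfies this check.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
)
```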
```config from cloud.resource where api.name = 'azure-virtual-desktop-session-host' AND json.rule = session-hosts[*] is not empty and session-hosts[*].properties.resourceId exists as X; config from cloud.resource where api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case "PowerState/running" as Y; filter '$.X.session-hosts[*].properties.resourceId equal ignore case $.Y.id and ($.Y.identity does not exist or $.Y.identity.type equal ignore case None)'; show Y;``` | Azure Virtual Desktop session host is not configured with managed identity
This policy identifies Virtual Desktop session hosts that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to assign a managed identity to your Virtual Desktop session hosts.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Virtual machines dashboard\n3. Click on the reported Virtual machine\n4. Under Setting section, Click on 'Identity'\n5. Configure either 'System assigned' or 'User assigned' managed identity based on your requirement.\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 22 or fromPort == 22) or (toPort > 22 and fromPort < 22)))] exists)``` | AWS Security Group allows all traffic on SSH port (22)
This policy identifies Security groups that allow all traffic on SSH port 22. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the reported Security Group indeed needs to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has a 'Source' value of 0.0.0.0/0 or ::/0 and a 'Port Range' value of 22 (or a range containing 22). |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Security/securitySolutions/delete" as X; count(X) less than 1``` | Azure Activity log alert for Delete security solution does not exist
This policy identifies the Azure accounts in which activity log alert for Delete security solution does not exist. Creating an activity log alert for Delete security solution gives insight into changes to the active security solutions and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Security Solutions (Microsoft.Security/securitySolutions)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-cloudtrail-get-trail-status' as Y; filter '$.X.name equals $.Y.trail and $.Y.status.isLogging is false'; show X;``` | AWS CloudTrail logging is disabled
This policy identifies the CloudTrails in which logging is disabled. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. It is a compliance and security best practice to turn on logging for CloudTrail across different regions to get a complete audit trail of activities across various services.
NOTE: This policy will be triggered only when you have CloudTrail configured in your AWS account and logging is disabled.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to the CloudTrail dashboard\n3. Click on 'Trails' (left panel)\n4. Click on the reported CloudTrail\n5. Enable 'Logging' by toggling the logging button to 'ON'\nOR\nIf the CloudTrail is not required, you can delete it by clicking on the delete icon below the logging toggle button. |
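Re-enabling logging is a single API call. A minimal boto3 sketch with a placeholder trail name:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# 'my-trail' is a placeholder for the reported trail name or ARN.
cloudtrail.start_logging(Name="my-trail")
```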
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-get-account-summary' AND json.rule='not AccountAccessKeysPresent equals 0'``` | AWS Access key enabled on root account
This policy identifies root accounts for which access keys are enabled. Access keys are used to sign API requests to AWS. Root accounts have complete access to all your AWS services. If the access key for a root account is compromised, an unauthorized user will have complete access to your AWS account.
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS Console as the root user.\n2. Click the root account name and on the top right select 'Security Credentials' from the dropdown.\n3. For each key in 'Access Keys', click on "X" to delete the keys. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case openshift and state equal ignore case normal and features.pullSecretApplied is false``` | IBM Cloud OpenShift cluster has Image pull secrets disabled
This policy identifies IBM Cloud OpenShift clusters with image pull secrets disabled. If the image pull secrets feature is disabled, the cluster stores registry credentials to connect to the container registry. It is recommended to enable the image pull secrets feature, which stores an image pull secret for pulling images rather than using credentials.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: To enable image pull secrets feature on a OpenShift cluster, refer \nfollowing URLs:\nhttps://cloud.ibm.com/docs/openshift?topic=openshift-registry#imagePullSecret_migrate_api_key\nhttps://cloud.ibm.com/docs/openshift?topic=openshift-registry#update-pull-secret. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = 'state equals RUNNABLE and databaseVersion contains SQLSERVER and (settings.databaseFlags[*].name does not contain "external scripts enabled" or settings.databaseFlags[?any(name contains "external scripts enabled" and value contains on)] exists)'``` | GCP SQL server instance database flag external scripts enabled is not set to off
This policy identifies GCP SQL Server instances for which the database flag 'external scripts enabled' is not set to off. The 'external scripts enabled' feature enables the execution of scripts with certain remote language extensions. When Advanced Analytics Services is installed, setup can optionally set this property to true. Because the feature allows scripts external to SQL, such as files located in an R library, to be executed, it could adversely affect the security of the system. It is recommended to set the 'external scripts enabled' database flag for Cloud SQL SQL Server instances to off.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported SQL server instance\n4. Click on EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance' section, go to 'Flags and parameters', click on 'ADD FLAG' in 'New database flag' section, choose the flag 'external scripts enabled' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Flags and parameters', choose the flag 'external scripts enabled' and set the value as 'off'\n6. Click on DONE\n7. Click on SAVE \n8. If 'Changes requires restart' pop-up appears, click on 'SAVE AND RESTART'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(23,23) or destinationPortRanges[*] contains _Port.inRange(23,23) ))] exists``` | Azure Network Security Group allows all traffic on Telnet (TCP Port 23)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Telnet (TCP Port 23). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict Telnet solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal\n2. Select 'All services'\n3. Select 'Network security groups', under Networking\n4. Select the Network security group you need to modify\n5. Select 'Inbound security rules' under Settings\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals Containers and properties.pricingTier does not equal Standard)] exists``` | Azure Microsoft Defender for Cloud set to Off for Containers
This policy identifies Azure Microsoft Defender for Cloud which has defender setting for Containers set to Off. Enabling Azure Defender provides advanced security capabilities like providing threat intelligence, anomaly detection, and behavior analytics in the Azure Microsoft Defender for Cloud. It is highly recommended to enable Azure Defender for Containers.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Containers' Select 'On' under Plan.\n8. Select 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case "Running" AND kind contains "functionapp" AND kind does not contain "workflowapp" AND kind does not equal "app" AND (identity.type does not exist or identity.principalId is empty)``` | Azure Function App doesn't have a Managed Service Identity
This policy identifies Azure Function Apps which don't have a Managed Service Identity. A managed service identity in a Function App makes the app more secure by eliminating secrets from the app, such as credentials in connection strings. When registered with Azure Active Directory, the app can connect to other Azure services securely without the need for usernames and passwords.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'Identity'\n5. Configure either 'System-assigned' or 'User-assigned' managed identity based on your requirement.\n6. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = (metadataOptions.httpEndpoint does not exist) or (metadataOptions.httpEndpoint equals "enabled" and metadataOptions.httpTokens equals "optional") as X; config from cloud.resource where api.name = 'aws-describe-auto-scaling-groups' as Y; filter ' $.X.launchConfigurationName equal ignore case $.Y.launchConfigurationName'; show X;``` | AWS Auto Scaling group launch configuration not configured with Instance Metadata Service v2 (IMDSv2)
This policy identifies the autoscaling group launch configuration where IMDSv2 is set to optional. A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. With IMDSv2, every request is now protected by session authentication. Version 2 of the IMDS adds new protections that weren't available in IMDSv1 to further safeguard your EC2 instances created by the autoscaling group. It is recommended to use only IMDSv2 for all your EC2 instances.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: You cannot modify a launch configuration after you create it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration with IMDSv2 enabled.\n\nTo update the Auto Scaling group to use the new launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Copy launch configuration'. This sets up a new launch configuration with the same options as the original, but with 'Copy' added to the name.\n4. On the 'Create launch configuration' page, expand 'Advanced details' under 'Additional Configuration - optional'.\n5. Under the 'Advanced details', go to the 'Metadata version' section.\n6. Select the 'V2 only (token required)' option.\n7. When you have finished, click on the 'Create launch configuration' button at the bottom of the page.\n8. On the navigation pane, under Auto Scaling, choose Auto Scaling Groups.\n9. Select the check box next to the Auto Scaling group.\n10. A split pane opens up at the bottom part of the page, showing information about the group that's selected.\n11. On the Details tab, click on the 'Edit' button adjacent to the 'Launch configuration' option.\n12. Under the 'Launch configuration' dropdown, select the newly created launch configuration.\n13. When you have finished changing your launch configuration, click on the 'Update' button at the bottom of the page.\n\nAfter you change the launch configuration for an Auto Scaling group, any new instances are launched with the new configuration options. Existing instances are not affected. To update existing instances,\n\n1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Refer to the 'Configure instance metadata options for existing instances' section at the following URL:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-IMDS-existing-instances.html\n\nTo delete the reported Auto Scaling group launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Delete launch configuration'.\n4. Click on the 'Delete' button to delete the autoscaling group launch configuration.\n\nNOTE: Ensure adequate precautions before you enforce the use of IMDSv2, as applications or agents that use IMDSv1 for instance metadata access will break. |
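For the "update existing instances" part above, a boto3 sketch that enforces IMDSv2 on a running instance; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder instance ID; HttpTokens='required' enforces IMDSv2.
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",
    HttpTokens="required",
    HttpEndpoint="enabled",
)
```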
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.permissionGrantPoliciesAssigned[*] contains microsoft-user-default-legacy``` | Copy of Azure AD Users can consent to apps accessing company data on their behalf is enabled
This policy identifies Azure Active Directory which have 'Users can consent to apps accessing company data on their behalf' configuration enabled. User profiles contain private information which could be shared with others without requiring any further consent from the user if this configuration is enabled. It is recommended not to allow users to use their identity outside of the cloud environment.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Active Directory'\n3. Select 'Users' under 'Manage'\n4. Go to 'User settings'\n5. Click on 'Manage how end users launch and view their applications' if not selected\n6. Under 'Enterprise applications' select 'No' for 'Users can consent to apps accessing company data on their behalf'\n7. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = name does not start with "gke-" and status equals RUNNING and (networkInterfaces[*].accessConfigs exists or networkInterfaces.ipv6AccessConfigs exists)``` | GCP VM instance with the external IP address
This policy identifies GCP VM instances that are assigned a public IP.
Using a public IP with a GCP VM exposes it directly to the internet, increasing the risk of unauthorized access and attacks. This makes the VM vulnerable to threats such as brute force attempts, DDoS attacks, and other malicious activities. To mitigate these risks, it is safer to use private IPs and secure access methods like VPNs or load balancers.
It is recommended to avoid assigning public IPs to VM instances.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to 'Compute Engine' and then 'VM instances'\n3. Click on the reported VM instance\n4. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue\n5. Once the VM has been stopped, click on the 'EDIT' button\n6. Under 'Network interfaces', expand the network interface with the public external IP assigned\n7. Select 'IPv4 (single-stack)' under IP stack type\n8. Select 'None' under 'External IPv4 address'\n9. Click on 'Save'\n10. Click on 'START/RESUME' from the top menu. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = processing is false and vpcoptions.vpcid does not exist``` | AWS Elasticsearch domain publicly accessible
This policy identifies Elasticsearch domains which are publicly accessible. Enabling VPCs for Elasticsearch domains provides flexibility and control over cluster access, with an extra layer of security compared to Elasticsearch domains that use public endpoints. It also keeps all traffic between your VPC and Elasticsearch domains within the AWS network instead of going over the public Internet.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: VPC for AWS Elasticsearch domain can be set only at the time of the creation of domain. So to resolve this alert, create a new domain with VPC, then migrate all required Elasticsearch domain data from the reported Elasticsearch domain to this newly created domain and delete reported Elasticsearch domain.\n\nTo set up the new ES domain with VPC, refer the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html\nTo create Elasticsearch domain within VPC, In Network configuration choose VPC access instead of Public access.\n\nTo delete reported ES domain, refer the following URL:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-deleting.html. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-snapshots' AND json.rule = 'snapshot.state equals completed and createVolumePermissions[*].userId size != 0 and _AWSCloudAccount.isRedLockMonitored($.createVolumePermissions[*].userId) is false'``` | AWS EBS Snapshot with access for unmonitored cloud accounts
This policy identifies EBS snapshots with access granted to unmonitored cloud accounts. These snapshots have read/write permission opened up for cloud accounts that are NOT part of the cloud accounts monitored by Prisma Cloud. Accounts with read/write privileges should be reviewed to confirm that they are valid accounts of your organization (or authorized by your organization) that are simply not under Prisma Cloud monitoring.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Access the EC2 service, navigate to 'Snapshots' under 'Elastic Block Store' in left hand menu.\n4. Select the identified 'EBS Snapshot' and select the tab 'Permissions'.\n5. Review and delete the AWS Accounts which should not have read access. |
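The review-and-revoke step can also be scripted. A boto3 sketch, with snapshot and account IDs as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

SNAPSHOT_ID = "snap-0123456789abcdef0"  # placeholder reported snapshot

# Inspect the accounts currently granted createVolumePermission.
attrs = ec2.describe_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID, Attribute="createVolumePermission"
)
print(attrs["CreateVolumePermissions"])

# Revoke access for an account confirmed to be unauthorized.
ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="remove",
    UserIds=["111122223333"],  # placeholder account ID
)
```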
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-oss-bucket-info' AND json.rule = 'cannedACL equals PublicRead or cannedACL equals PublicReadWrite'``` | Alibaba Cloud OSS bucket accessible to public
This policy identifies Object Storage Service (OSS) buckets which are publicly accessible. Alibaba Cloud OSS allows customers to store and retrieve any type of content from anywhere on the web. Often, customers have legitimate reasons to expose the OSS bucket to the public, for example, to host website content. However, these buckets often contain highly sensitive enterprise data which if left open to the public may result in sensitive data leaks.
This is applicable to alibaba_cloud cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Object Storage Service\n3. In the left-side navigation pane, click on the reported bucket\n4. In the 'Basic Settings' tab, In the 'Access Control List (ACL)' Section, Click on 'Configure'\n5. For 'Bucket ACL' field, Choose 'Private' option\n6. Click on 'Save'. |
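A minimal sketch with the oss2 Python SDK; the credentials, endpoint, and bucket name are placeholders and must match the bucket's actual region endpoint:

```python
import oss2

auth = oss2.Auth("<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")  # placeholders
bucket = oss2.Bucket(auth, "https://oss-us-west-1.aliyuncs.com", "my-bucket")

# Reset the bucket ACL from public-read/public-read-write to private.
bucket.put_bucket_acl(oss2.BUCKET_ACL_PRIVATE)
print(bucket.get_bucket_acl().acl)  # expect 'private'
```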
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-instance' AND json.rule = status equals Running and instanceChargeType equals PostPaid and deletionProtection is false``` | Alibaba Cloud ECS instance release protection is disabled
This policy identifies ECS instances for which release protection is disabled. Enabling release protection for these ECS instances prevents irreversible data loss resulting from accidental or malicious operations.
Note: This attribute applies to Pay-As-You-Go instances only. Release protection can only restrict the manual release operation and does not apply to release operations performed by Alibaba Cloud.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. Select the reported ECS instance, select More -> Instance Settings -> Change Release Protection Setting -> Release Protection (Toggle to enable)\n5. Click on 'OK'. |
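A hedged sketch using the Alibaba Cloud Python SDK; the underlying ModifyInstanceAttribute API accepts a DeletionProtection parameter, and the set_DeletionProtection setter assumes a recent aliyunsdkecs release (credentials, region, and instance ID are placeholders):

```python
from aliyunsdkcore.client import AcsClient
from aliyunsdkecs.request.v20140526.ModifyInstanceAttributeRequest import (
    ModifyInstanceAttributeRequest,
)

client = AcsClient("<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>", "us-west-1")

request = ModifyInstanceAttributeRequest()
request.set_InstanceId("i-example0123456789")  # placeholder instance ID
request.set_DeletionProtection(True)           # enable release protection
client.do_action_with_exception(request)
```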
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action contains iam:CreatePolicyVersion or Action contains iam:SetDefaultPolicyVersion or Action contains iam:PassRole or Action contains iam:CreateAccessKey or Action contains iam:CreateLoginProfile or Action contains iam:UpdateLoginProfile or Action contains iam:AttachUserPolicy or Action contains iam:AttachGroupPolicy or Action contains iam:AttachRolePolicy or Action contains iam:PutUserPolicy or Action contains iam:PutGroupPolicy or Action contains iam:PutRolePolicy or Action contains iam:AddUserToGroup or Action contains iam:UpdateAssumeRolePolicy or Action contains iam:*))] exists``` | aws-test-policy
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-block-storage-volume' AND json.rule = kmsKeyId is member of ("null")``` | OCI Block Storage Block Volumes are not encrypted with a Customer Managed Key (CMK)
This policy identifies the OCI Block Storage Volumes that are not encrypted with a Customer Managed Key (CMK). It is recommended that Block Storage Volumes be encrypted with a CMK; using a CMK provides an additional level of security on your data by allowing you to manage the encryption key lifecycle for the Block Storage Volume yourself.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Assign next to Encryption Key: Oracle managed key.\n5. Select a Vault from the appropriate compartment\n6. Select a Master Encryption Key\n7. Click Assign. |
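A minimal sketch with the OCI Python SDK's update_volume_kms_key operation, assuming a configured ~/.oci/config; the volume and key OCIDs are placeholders:

```python
import oci

config = oci.config.from_file()  # default OCI CLI config profile
blockstorage = oci.core.BlockstorageClient(config)

# Assign a customer-managed Vault key to the reported volume.
blockstorage.update_volume_kms_key(
    volume_id="ocid1.volume.oc1..example",
    update_volume_kms_key_details=oci.core.models.UpdateVolumeKmsKeyDetails(
        kms_key_id="ocid1.key.oc1..example",
    ),
)
```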
```config from cloud.resource where api.name = 'gcloud-essential-contacts-organization-contact' AND json.rule = notificationCategorySubscriptions[] contains "ALL" or (notificationCategorySubscriptions[] contains "LEGAL" and notificationCategorySubscriptions[] contains "SECURITY" and notificationCategorySubscriptions[] contains "SUSPENSION" and notificationCategorySubscriptions[] contains "TECHNICAL" and notificationCategorySubscriptions[] contains "TECHNICAL_INCIDENTS") as X; count(X) less than 1``` | GCP Organization not configured with essential contacts
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/securityRules/delete" as X; count(X) less than 1``` | chao test change saved search
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createidpgroupmapping and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deleteidpgroupmapping and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateidpgroupmapping) and actions.actions[*].topicId exists' as X; count(X) less than 1``` | OCI Event Rule and Notification does not exist for Identity Provider Group (IdP) group mapping changes
This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Identity Provider Group Mappings (IdP) changes. Monitoring and alerting on changes to IdP group mapping will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity Provider Group Mappings (IdP).
NOTE:
1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.
2. This policy will not trigger an alert if at least one matching Event Rule and Notification exists, whether OCI has a single compartment or multiple compartments.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the OCI Console\n2. Type 'Event' into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Idp Group Mapping – Create, Idp Group Mapping – Delete and Idp Group Mapping – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule. |
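A hedged sketch of the same rule via the OCI Python SDK's Events client; the compartment and topic OCIDs are placeholders, and the CreateNotificationServiceActionDetails model name should be verified against your SDK version:

```python
import json

import oci

config = oci.config.from_file()
events = oci.events.EventsClient(config)

# Condition matching the three IdP group-mapping event types.
condition = json.dumps({
    "eventType": [
        "com.oraclecloud.identitycontrolplane.createidpgroupmapping",
        "com.oraclecloud.identitycontrolplane.deleteidpgroupmapping",
        "com.oraclecloud.identitycontrolplane.updateidpgroupmapping",
    ]
})

events.create_rule(oci.events.models.CreateRuleDetails(
    display_name="idp-group-mapping-changes",
    description="Alert on IdP group mapping changes",
    is_enabled=True,
    condition=condition,
    compartment_id="ocid1.tenancy.oc1..example",  # root compartment (see NOTE)
    actions=oci.events.models.ActionDetailsList(actions=[
        oci.events.models.CreateNotificationServiceActionDetails(
            is_enabled=True,
            topic_id="ocid1.onstopic.oc1..example",  # existing ONS topic
        ),
    ]),
))
```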
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-nsg' AND json.rule = securityRules[?any( direction equals INGRESS and (isStateless does not exist or isStateless is false) )] exists``` | OCI Network Security Groups (NSG) has stateful security rules
This policy identifies the OCI Network Security Groups (NSG) security rules that have stateful ingress rules configured. It is recommended that Network Security Groups (NSG) security rules are configured with stateless ingress rules to slow the impact of a denial-of-service (DoS) attack.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Select the security rule from Security rules pane where Stateless is set to No and Direction set to Ingress\n5. Click on Edit\n6. Select the checkbox STATELESS\n7. Click on Save Changes. |
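A hedged sketch with the OCI Python SDK that flips stateful ingress rules to stateless; the NSG OCID is a placeholder, and per-protocol options such as TCP/UDP port ranges may also need to be copied across, which this sketch omits:

```python
import oci

config = oci.config.from_file()
vnet = oci.core.VirtualNetworkClient(config)

NSG_ID = "ocid1.networksecuritygroup.oc1..example"  # reported NSG

# Collect stateful ingress rules and rewrite them as stateless.
rules = vnet.list_network_security_group_security_rules(NSG_ID).data
updates = [
    oci.core.models.UpdateSecurityRuleDetails(
        id=r.id,
        direction=r.direction,
        protocol=r.protocol,
        source=r.source,
        source_type=r.source_type,
        is_stateless=True,  # the change this policy asks for
    )
    for r in rules
    if r.direction == "INGRESS" and not r.is_stateless
]
if updates:
    vnet.update_network_security_group_security_rules(
        NSG_ID,
        oci.core.models.UpdateNetworkSecurityGroupSecurityRulesDetails(
            security_rules=updates
        ),
    )
```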
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty``` | build information
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_create_hyperion_policy_attack_path_policy_as_child_policies_ss_finding_1
Description-49e9b494-9bab-4e02-ad26-c6ac7731d570
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['SSH_BRUTE_FORCE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' AND json.rule = default is true and shared is false and state equal ignore case available as X; config from cloud.resource where api.name = 'aws-ec2-describe-network-interfaces' AND json.rule = status equal ignore case in-use as Y; filter '$.X.vpcId equals $.Y.vpcId'; show X;``` | AWS Default VPC is being used
This policy identifies AWS Default VPCs that are being used.
AWS creates a default VPC automatically upon the creation of your AWS account with a default security group and network access control list (NACL). Using AWS default VPC can lead to limited customization and security concerns due to shared resources and potential misconfigurations, hindering scalability and optimal resource management.
As a best practice, using a custom VPC with specific security and network configuration provides greater flexibility and control over your architecture.
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: It is recommended to remove association with the default VPC and create a new custom VPC configuration based on your security and networking requirements, and associate the resource back to a newly created custom VPC.\n\nTo create a new VPC, follow below URL:\nhttps://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html\n\nTo remove the default VPC, follow below URL:\nhttps://docs.aws.amazon.com/vpc/latest/userguide/delete-vpc.html\n\nNOTE: Before any modification, identify and analyze the potential results of the change in the environment. |
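The review step can be scripted before anything is removed. A boto3 sketch that finds the default VPC and lists what is still attached to it; the region is an assumption and the delete call is deliberately left commented out:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Locate the default VPC in this region.
vpcs = ec2.describe_vpcs(Filters=[{"Name": "is-default", "Values": ["true"]}])
default_vpc_id = vpcs["Vpcs"][0]["VpcId"]

# In-use network interfaces must be migrated to a custom VPC first.
enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "vpc-id", "Values": [default_vpc_id]}]
)
for eni in enis["NetworkInterfaces"]:
    print(eni["NetworkInterfaceId"], eni["Status"])

# Once the VPC is empty, it can be deleted (irreversible; review first).
# ec2.delete_vpc(VpcId=default_vpc_id)
```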
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_min_duration_statement')] does not exist or settings.databaseFlags[?(@.name=='log_min_duration_statement')].value does not equal -1)"``` | GCP PostgreSQL instance database flag log_min_duration_statement is not set to -1
This policy identifies PostgreSQL database instances in which the database flag log_min_duration_statement is not set to -1. The log_min_duration_statement flag sets the minimum execution time of a statement, in milliseconds, above which the total duration of the statement is logged. Logged SQL statements may include sensitive information that should not be recorded in logs. It is therefore recommended to set the log_min_duration_statement flag value to -1 so that logging of statement execution is disabled.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Configuration options', click on 'Add item' in 'Flags' section, choose the flag 'log_min_duration_statement' from the drop-down menu and set the value as '-1'\nOR\nIf the flag has been set to other than -1, Under 'Configuration options', In 'Flags' section choose the flag 'log_min_duration_statement' and set the value as '-1'\n6. Click Save. |
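A minimal sketch using the Cloud SQL Admin API via google-api-python-client; the project and instance names are placeholders, and note that patching databaseFlags replaces the instance's whole flag list:

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

# Include any other flags the instance already needs alongside this one,
# since this list overwrites the existing databaseFlags.
sqladmin.instances().patch(
    project="my-project",      # placeholder project ID
    instance="my-postgres",    # the reported instance
    body={
        "settings": {
            "databaseFlags": [
                {"name": "log_min_duration_statement", "value": "-1"},
            ]
        }
    },
).execute()
```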
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dns-managed-zone' AND json.rule = 'dnssecConfig.defaultKeySpecs[*].keyType contains zoneSigning and dnssecConfig.defaultKeySpecs[*].algorithm contains rsasha1'``` | GCP Cloud DNS zones using RSASHA1 algorithm for DNSSEC zone-signing
This policy identifies the GCP Cloud DNS zones which are using the RSASHA1 algorithm for DNSSEC zone-signing. DNSSEC is a feature of the Domain Name System that authenticates responses to domain name lookups and also prevents attackers from manipulating or poisoning the responses to DNS requests. The algorithm used for zone-signing should therefore be a recommended one, and it should not be weak.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Currently, DNSSEC zone-signing can be updated using the command line interface only.\n1. If you need to change the settings for a managed zone where it has been enabled, you have to turn DNSSEC off and then re-enable it with different settings. To turn off DNSSEC, run the following command:\ngcloud dns managed-zones update <ZONE_NAME> --dnssec-state off\n2. To update zone-signing for a reported managed DNS Zone, run the following command:\ngcloud dns managed-zones update <ZONE_NAME> --dnssec-state on --ksk-algorithm <KSK_ALGORITHM> --ksk-key-length <KSK_KEY_LENGTH> --zsk-algorithm <ZSK_ALGORITHM> --zsk-key-length <ZSK_KEY_LENGTH> --denial-of-existence <DENIAL_OF_EXISTENCE>. |
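A worked example of those two commands, wrapped in Python for scripting; the zone name, algorithms, and key lengths below are illustrative placeholders, not prescriptive values:

```python
import subprocess

ZONE = "my-zone"  # placeholder for the reported managed zone

# Step 1: turn DNSSEC off so the signing settings can be changed.
subprocess.run(
    ["gcloud", "dns", "managed-zones", "update", ZONE,
     "--dnssec-state", "off"],
    check=True,
)

# Step 2: re-enable DNSSEC with RSASHA256 instead of RSASHA1.
subprocess.run(
    ["gcloud", "dns", "managed-zones", "update", ZONE,
     "--dnssec-state", "on",
     "--ksk-algorithm", "rsasha256", "--ksk-key-length", "2048",
     "--zsk-algorithm", "rsasha256", "--zsk-key-length", "1024",
     "--denial-of-existence", "nsec3"],
    check=True,
)
```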
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration contains $.Y.name) and ($.Y.EncryptionConfiguration.EnableAtRestEncryption is true) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration exists) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode contains CSE) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.S3EncryptionConfiguration.EncryptionMode does not contain Custom)' ; show X;``` | AWS EMR cluster is not configured with CSE CMK for data at rest encryption (Amazon S3 with EMRFS)
This policy identifies EMR clusters which are not configured with Client Side Encryption with Customer Master Keys (CSE CMK) for data at rest encryption of Amazon S3 with EMRFS. As a best practice, use Customer Master Keys (CMK) to encrypt the data in your EMR cluster and ensure full control over your data.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. For encryption at rest, click the checkbox for 'Enable at-rest encryption for EMRFS data in Amazon S3'.\n8. From the dropdown 'Default encryption mode' select 'CSE-Custom'. Follow below link for configuration steps.\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n9. Click on 'Create' button\n10. On the left menu of EMR dashboard Click 'Clusters'\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'\n15. Once the new cluster is set up, verify it is working, then terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'. |
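Steps 4 to 9 can also be done programmatically. A hedged boto3 sketch that creates the security configuration; the JSON shape follows the EMR security-configuration schema as documented (a CSE-KMS variant is shown for a customer-managed KMS key, while a CSE-Custom setup would instead reference a custom key-provider JAR), and the configuration name and key ARN are placeholders:

```python
import json

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

# At-rest client-side encryption for EMRFS data in S3 with a CMK.
security_config = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": False,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                "EncryptionMode": "CSE-KMS",
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/example",
            }
        },
    }
}

emr.create_security_configuration(
    Name="emrfs-cse-cmk",  # placeholder configuration name
    SecurityConfiguration=json.dumps(security_config),
)
```

The new configuration can then be selected when cloning the cluster in steps 10 to 14.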