query | description
---|---
```config from cloud.resource where api.name = 'gcloud-compute-project-info' AND json.rule = commonInstanceMetadata.kind equals "compute#metadata" and commonInstanceMetadata.items[?any(key contains "enable-oslogin" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] does not exist and commonInstanceMetadata.items[?any(key contains "ssh-keys")] exists as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and ( metadata.items[?any(key exists and key contains "block-project-ssh-keys" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] does not exist and metadata.items[?any(key exists and key contains "enable-oslogin" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] does not exist and name does not start with "gke-") as Y; filter '$.Y.zone contains $.X.name'; show Y;``` | GCP VM instances have block project-wide SSH keys feature disabled
This policy identifies VM instances which have the block project-wide SSH keys feature disabled. Project-wide SSH keys are stored in Compute/Project-metadata. Project-wide SSH keys can be used to log in to all the instances within a project. Using project-wide SSH keys eases SSH key management, but if compromised, they pose a security risk which can impact all the instances within a project. It is recommended to use instance-specific SSH keys which can limit the attack surface if the SSH keys are compromised.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Edit button\n6. Under the SSH Keys section, check the 'Block project-wide SSH keys' checkbox\n7. Click on Save. |
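For teams scripting this remediation, below is a minimal sketch using the Compute Engine API via google-api-python-client; the project, zone, and instance names are placeholders, and application-default credentials are assumed:

```python
# Sketch: set block-project-ssh-keys=true on one instance (names are assumptions).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone, instance = "my-project", "us-central1-a", "my-vm"  # placeholders

# Read current metadata so the required fingerprint can be echoed back.
inst = compute.instances().get(project=project, zone=zone, instance=instance).execute()
metadata = inst["metadata"]
items = [i for i in metadata.get("items", []) if i["key"] != "block-project-ssh-keys"]
items.append({"key": "block-project-ssh-keys", "value": "true"})

compute.instances().setMetadata(
    project=project,
    zone=zone,
    instance=instance,
    body={"fingerprint": metadata["fingerprint"], "items": items},
).execute()
```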
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'maxPasswordAge !isType Integer or maxPasswordAge > 90 or maxPasswordAge equals 0'``` | Alibaba Cloud RAM password policy does not expire in 90 days
This policy identifies Alibaba Cloud accounts that do not have password expiration set to 90 days or less. As a best practice, change your password every 90 days or sooner to ensure secure access to the Alibaba Cloud console.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password Validity Period' field, enter 90 or less based on your requirement.\n6. Click on 'OK'\n7. Click on 'Close'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = "configurations.value[?(@.name=='log_checkpoints')].properties.value equals OFF or configurations.value[?(@.name=='log_checkpoints')].properties.value equals off"``` | Azure PostgreSQL database server with log checkpoints parameter disabled
This policy identifies PostgreSQL database servers for which the 'log_checkpoints' server parameter is disabled. Enabling log_checkpoints causes the PostgreSQL database server to log each checkpoint, which in turn generates query and error logs. However, access to transaction logs is not supported. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. From the list of parameters find 'log_checkpoints' and set it to 'on'\n6. Click on 'Save' button from top menu to save the change. |
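The same change can be scripted against the ARM REST API. Below is a minimal sketch assuming azure-identity is installed; the subscription, resource group, and server names are placeholders:

```python
# Sketch: set log_checkpoints=on on an Azure Database for PostgreSQL server.
import requests
from azure.identity import DefaultAzureCredential

sub, rg, server = "<subscription-id>", "<resource-group>", "<server-name>"  # placeholders
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.DBforPostgreSQL/servers/{server}"
    "/configurations/log_checkpoints?api-version=2017-12-01"
)
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"value": "on", "source": "user-override"}},
)
resp.raise_for_status()
```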
```config from cloud.resource where api.name = 'aws-elasticbeanstalk-environment' AND json.rule = status does not equal "Terminated" as X; config from cloud.resource where api.name = 'aws-elasticbeanstalk-configuration-settings' AND json.rule = configurationSettings[*].optionSettings[?any( optionName equals "StreamLogs" and value equals "false" )] exists as Y; filter ' $.X.environmentName equals $.Y.configurationSettings[*].environmentName and $.X.applicationName equals $.Y.configurationSettings[*].applicationName'; show X;``` | AWS Elastic Beanstalk environment logging not configured
This policy identifies the Elastic Beanstalk environments not configured to send logs to CloudWatch Logs.
An Elastic Beanstalk environment is a configuration of AWS resources where you can deploy your application. The environment logs refer to the logs generated by various components of your application, which can provide valuable insights into any errors or issues that may arise during operation. Failing to enable logging in an Elastic Beanstalk environment reduces visibility, hinders incident detection and response, and increases vulnerability to security breaches.
It is recommended to configure AWS Elastic Beanstalk environments to send logs to CloudWatch to ensure security and meet compliance requirements.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To stream Elastic Beanstalk environment logs to CloudWatch Logs,\n1. Sign in to the AWS console.\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Go to 'Elastic Beanstalk' service.\n4. In the navigation pane, choose 'Environments', then select the reported environment's name from the list.\n5. In the navigation pane, choose Configuration.\n6. In the 'Updates, monitoring, and logging' configuration category, choose Edit.\n7. Under 'Instance log streaming to CloudWatch Logs', enable log streaming by selecting the 'Activated' checkbox.\n8. Set 'Retention' to the number of days to save the logs.\n9. Select the 'Lifecycle' setting that determines whether the logs are saved after the environment is terminated according to your business requirements.\n10. To save the changes choose 'Apply' at the bottom of the page. |
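This can also be automated with boto3; the sketch below uses a placeholder environment name and region, and the aws:elasticbeanstalk:cloudwatch:logs namespace that carries the StreamLogs, RetentionInDays, and DeleteOnTerminate options:

```python
# Sketch: enable instance log streaming to CloudWatch Logs for one environment.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")
eb.update_environment(
    EnvironmentName="my-env",  # placeholder: the reported environment
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "StreamLogs", "Value": "true"},
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "RetentionInDays", "Value": "30"},
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "DeleteOnTerminate", "Value": "false"},
    ],
)
```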
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals username_password and state_description equal ignore case active and (_DateTime.ageInDays(last_update_date) > 90)'``` | IBM Cloud Secrets Manager user credentials have aged more than 90 days without being rotated
This policy identifies IBM Cloud Secrets Manager user credentials that have aged more than 90 days without being rotated. User credentials should be rotated to ensure that data cannot be accessed with an old password which might have been lost, cracked, or stolen. It is recommended that user credentials are regularly rotated.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list'. From the list of resources, select the secret manager instance in which the reported secret resides, under security section.\n3. Select the secret and click on 'Actions' dropdown.\n4. Select 'Rotate' from the dropdown.\n5. In the 'Rotate secret' screen, provide data as required.\n6. Click on 'Rotate'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains SQLSERVER and settings.databaseFlags[?(@.name=='cross db ownership chaining')].value equals on"``` | GCP SQL Server instance database flag 'cross db ownership chaining' is enabled
This policy identifies GCP SQL Server instances in which the database flag 'cross db ownership chaining' is enabled. Enabling cross db ownership chaining is not recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you are aware of the security implications of this setting.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the SQL Server instance for which you want to disable the database flag from the list\n4. Click 'Edit'\n5. Go to 'Flags and Parameters' under 'Configuration options' section\n6. Search for the flag 'cross db ownership chaining' and set the value 'off'\n7. Click on 'Save'. |
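A scripted alternative using the Cloud SQL Admin API (google-api-python-client) is sketched below; project and instance names are placeholders. Note that databaseFlags is replaced as a whole on patch, so existing flags are re-read and preserved:

```python
# Sketch: turn off 'cross db ownership chaining' while keeping other flags.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
project, instance = "my-project", "my-sqlserver-instance"  # placeholders

current = sqladmin.instances().get(project=project, instance=instance).execute()
flags = [f for f in current["settings"].get("databaseFlags", [])
         if f["name"] != "cross db ownership chaining"]
flags.append({"name": "cross db ownership chaining", "value": "off"})

sqladmin.instances().patch(
    project=project, instance=instance,
    body={"settings": {"databaseFlags": flags}},
).execute()
```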
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-insights-component' AND json.rule = properties.provisioningState equals Succeeded and (properties.DisableLocalAuth does not exist or properties.DisableLocalAuth is false)``` | Azure Application Insights not configured with Azure Active Directory (Azure AD) authentication
This policy identifies Application Insights that are not configured with Azure Active Directory (AAD) authentication and are enabled with local authentication.
Disabling local authentication and using AAD-based authentication enhances the security and reliability of the telemetry used to make both critical operational and business decisions.
It is recommended to configure the Application Insights with Azure Active Directory (AAD) authentication so that all actions are strongly authenticated.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure Azure Active Directory (AAD) authentication and disable local authentication on existing Application Insights, follow the below URL instructions:\nhttps://docs.microsoft.com/en-us/azure/azure-monitor/app/azure-ad-authentication. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = kind equal ignore case OpenAI and properties.provisioningState equal ignore case Succeeded and (properties.restrictOutboundNetworkAccess does not exist or properties.restrictOutboundNetworkAccess is false or (properties.restrictOutboundNetworkAccess is true and properties.allowedFqdnList is empty))``` | Azure Cognitive Services account hosted with OpenAI is not configured with data loss prevention
This policy identifies Azure Cognitive Services accounts hosted with OpenAI that are not configured with data loss prevention.
Azure AI services offer data loss prevention capabilities that allow customers to configure the list of outbound URLs their Azure AI services resources can access.
As a best practice, it is recommended to enable the data loss prevention feature in OpenAI-hosted Azure Cognitive Services accounts to prevent data loss.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable data loss prevention on existing Azure Cognitive Services account hosted with OpenAI, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/ai-services/cognitive-services-data-loss-prevention?tabs=azure-cli#enabling-data-loss-prevention. |
```config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-aiplatform-pipeline-job' as Y; filter ' $.Y.runtimeConfig.gcsOutputDirectory contains $.X.id '; show X;``` | GCP Storage Bucket storing GCP Vertex AI pipeline output data
This policy identifies GCS buckets that are used to store GCP Vertex AI pipeline output data.
GCP Vertex AI pipeline output data is stored in the Storage Bucket. This output data is considered sensitive and confidential intellectual property and its storage location should be checked regularly. The storage location should be as per the organization's security and compliance requirements.
It is recommended to monitor, identify, and evaluate storage location for GCP Vertex AI pipeline output data regularly to prevent unauthorized access and AI model thefts.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Review and validate that the GCP Vertex AI pipeline output data is stored in the right Storage bucket. Move and/or delete the output data if it is found in an unexpected location. Review how the Vertex AI pipeline was configured to output to an unauthorized/unapproved storage bucket. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-volume' AND json.rule = volume_attachments[*] size greater than 0 and volume_attachments[*].type equals boot and encryption equal ignore case provider_managed``` | IBM Cloud OS disk is not encrypted with customer managed keys
This policy identifies IBM Cloud OS disks attached to virtual server instances that are not encrypted with customer managed keys. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: An OS disk (boot storage volume) can be encrypted with customer managed keys only at the time of creation of the virtual server instance. Please\ncreate a snapshot of the reported OS disk following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nCreate a virtual server instance with an OS disk from the above-created snapshot with customer managed encryption:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-restore&interface=ui#snapshots-vpc-restore-vol-ui\n\nOnce the new virtual server instance is created, delete the virtual server instance to which the reported OS disk was attached:\nhttps://cloud.ibm.com/docs/hp-virtual-servers?topic=hp-virtual-servers-remove_vs#delete_vs\n\nNote: Deleting a virtual server instance is irreversible; make sure to back up any required data. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-log-analytics-workspace' AND json.rule = properties.provisioningState equals Succeeded and (properties.publicNetworkAccessForQuery equals Enabled or properties.publicNetworkAccessForIngestion equals Enabled)``` | Azure Log Analytics workspace configured with overly permissive network access
This policy identifies Log Analytics workspaces configured with overly permissive network access.
Virtual networks access configuration in Log Analytics workspace allows you to restrict data ingestion and queries coming from the public networks.
It is recommended to configure the Log Analytics workspace with virtual networks access configuration set to restrict, so that the Log Analytics workspace is accessible only to restricted Azure Monitor Private Link Scopes.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Log Analytics workspaces dashboard\n3. Click on the reported Log Analytics workspace\n4. Under the 'Settings' menu, click on 'Network Isolation'\n5. Create an Azure Monitor Private Link Scope if it is not already created by referring to:\nhttps://docs.microsoft.com/en-us/azure/azure-monitor/logs/private-link-configure#create-an-azure-monitor-private-link-scope\n6. After creating, Under 'Virtual networks access configuration',\nSet 'Accept data ingestion from public networks not connected through a Private Link Scope' to 'No' and\nSet 'Accept queries from public networks not connected through a Private Link Scope' to 'No'\n7. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = tags[*] exists``` | Izabella config with tags test 1
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-virtual-desktop-workspace' AND json.rule = diagnostic-settings[?none( properties.logs[?any( enabled is true )] exists )] exists``` | Azure Virtual Desktop workspace diagnostic log is disabled
This policy identifies Azure Virtual Desktop workspaces where diagnostic logs are not enabled.
Diagnostic logs are vital for monitoring and troubleshooting Azure Virtual Desktop, which offers virtual desktops and remote app services. They help detect and resolve issues, optimize performance, and meet security and compliance standards. Without these logs, it’s difficult to track activities and detect anomalies, potentially jeopardizing security and efficiency.
As a best practice, it is recommended to enable diagnostic logs for Azure Virtual Desktop workspaces.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Azure Virtual Desktop'\n2. Select 'Azure Virtual Desktop'\n3. Under 'Manage' select 'Workspaces'\n4. Select the reported Workspace\n5. Under 'Monitoring' select 'Diagnostic settings'\n6. Under the Diagnostic settings tab, click on '+ Add diagnostic setting' to create a new Diagnostic Setting\n7. Specify a 'Diagnostic settings name'\n8. Under section 'Categories', select the type of log that you want to enable\n9. Under section 'Destination details'\n a. If you select 'Send to Log Analytics', select the 'Subscription' and 'Log Analytics workspace'\n b. If you set 'Archive to storage account', select the 'Subscription' and 'Storage account'\n c. If you set 'Stream to an event hub', select the 'Subscription', 'Event hub namespace', 'Event hub name' and set the 'Event hub policy name'\n10. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[?any(properties.email is empty and alertNotifications equal ignore case Off)] exists) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists``` | Azure Microsoft Defender for Cloud security alert email notifications is not set
This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which have not set security alert email notifications. Enabling security alert emails ensures that security alert emails are received from Microsoft. This ensures that the right people are aware of any potential security issues and are able to mitigate the risk.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Click on 'Email notifications'\n6. Under 'Notification types', check the check box next to Notify about alerts with the following severity (or higher): and select High from the drop down menu\n7. Select 'Save'. |
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equal ignore case "Microsoft.Keyvault" as X; config from cloud.resource where api.name = 'azure-key-vault-list' and json.rule = keys[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists as Y; filter '$.Y.properties.vaultUri contains $.X.properties.encryption.keyvaultproperties.keyvaulturi'; show X;``` | Azure Storage account encryption key is not rotated regularly
This policy identifies Azure Storage accounts which are encrypted by an encryption key that is not rotated regularly. As a security best practice, it is important to rotate the keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure Storage account encryption key rotation, refer to the below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-configure-existing-account?tabs=azure-portal#configure-encryption-for-automatic-updating-of-key-versions\n\nNOTE: Azure Storage checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_create_hyperion_policy_ss_finding_1
Description-d63012c8-3c89-4ac2-ac4f-6c6523921d5f
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['SSH_BRUTE_FORCE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus equals available and engine equals aurora-postgresql and engineVersion is member of ('10.11','10.12','10.13','11.6','11.7','11.8')``` | AWS Aurora PostgreSQL exposed to local file read vulnerability
This policy identifies AWS Aurora PostgreSQL instances that are exposed to a local file read vulnerability. AWS Aurora PostgreSQL installed with the vulnerable 'log_fdw' extension is exposed to a local file read vulnerability, due to which an attacker could gain access to local system files of the database instance within their account, including a file which contained credentials specific to Aurora PostgreSQL. It is highly recommended to upgrade AWS Aurora PostgreSQL to the latest version.
For more information,
https://aws.amazon.com/security/security-bulletins/AWS-2022-004/
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Amazon has deprecated affected versions of Aurora PostgreSQL and customers can no longer create new instances with the affected versions.\n\nTo upgrade the latest version of Amazon Aurora PostgreSQL, please follow below URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html |
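A read-only boto3 sketch that flags Aurora PostgreSQL instances still running an affected engine version (the region is an assumption):

```python
# Sketch: list available Aurora PostgreSQL instances on affected versions.
import boto3

AFFECTED = {"10.11", "10.12", "10.13", "11.6", "11.7", "11.8"}
rds = boto3.client("rds", region_name="us-east-1")

for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        if (db.get("Engine") == "aurora-postgresql"
                and db.get("EngineVersion") in AFFECTED
                and db.get("DBInstanceStatus") == "available"):
            print(f"Upgrade required: {db['DBInstanceIdentifier']} "
                  f"({db['EngineVersion']})")
```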
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kinesis-firehose-delivery-stream' AND json.rule = deliveryStreamEncryptionConfiguration exists and deliveryStreamEncryptionConfiguration.status equals DISABLED``` | AWS Kinesis Firehose with Direct PUT as source has SSE encryption disabled
This policy identifies Amazon Kinesis Firehose delivery streams with Direct PUT as source that have server-side encryption (SSE) disabled. Enabling server-side encryption allows you to meet strict regulatory requirements and enhance the security of your data at rest. As a best practice, enable SSE for the Amazon Kinesis Firehose.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to Amazon Kinesis Service\n3. Click on 'Delivery streams'\n4. Select the reported Kinesis Firehose for the corresponding region\n5. Click on 'Configuration' tab\n6. Under Server-side encryption, Click on Edit\n7. Choose 'Enable server-side encryption for source records in delivery stream'\n8. Under 'Encryption type' select 'Use AWS owned CMK'\n9. Click 'Save changes'. |
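A minimal boto3 sketch of the same remediation, assuming a placeholder delivery stream name and region:

```python
# Sketch: enable SSE with an AWS-owned CMK on a Direct PUT delivery stream.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")
firehose.start_delivery_stream_encryption(
    DeliveryStreamName="my-delivery-stream",  # placeholder
    DeliveryStreamEncryptionConfigurationInput={"KeyType": "AWS_OWNED_CMK"},
)
```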
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = lifecycleState equal ignore case running AND (platformConfig does not exist OR platformConfig equal ignore case "null" OR platformConfig.isSecureBootEnabled is false)``` | OCI Compute Instance with Secure Boot disabled
This policy identifies OCI compute instances in which Secure Boot is disabled.
Secure Boot serves as a security standard ensuring that a machine exclusively boots using Original Equipment Manufacturer (OEM) trusted software. Without the activation of Secure Boot, a compute instance becomes susceptible to booting unauthorized or malicious software, posing a threat to the integrity and security of the instance. Consequently, this vulnerability can lead to unauthorized access, data breaches, or other malicious activities within the instance.
As a security best practice, enabling Secure Boot on all compute instances is strongly recommended to guarantee the exclusive execution of trusted software during the boot process.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: Secure Boot can only be enabled during resource creation. To fix this, you must terminate the reported instance and create a new one with Secure Boot enabled.\n\n1. Log in to the OCI Console.\n2. Switch to the Region of the reported resource from the Region drop-down in top-right corner.\n3. Type the reported compute instance name into the Search box at the top of the Console.\n4. Click on the reported compute instance from the search results.\n5. Click 'Terminate' to terminate the instance (decide whether to permanently delete the instance's attached boot volume).\n6. To recreate the compute instance with Secure Boot enabled, navigate to the instance creation page.\n7. Click 'Create Instance'.\n8. In the 'Image and Shape' section, select an Image and Shape that support Shielded Instance configuration, indicated by the shield icon.\n9. In the 'Security' section, click 'Edit'.\n10. Enable 'Shielded Instance', then activate the 'Secure Boot' toggle.\n11. Complete the remaining details as required.\n12. Click 'Create'. |
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND resource.status = Active AND json.rule = tags[*].key none equal "application" AND tags[*].key none equal "Application"``` | pcsup-aws-policy
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ACTIVE and serviceConfig.vpcConnector does not exist``` | GCP Cloud Function not enabled with VPC connector for network egress
This policy identifies GCP Cloud Functions that are not enabled with a VPC connector for network egress. This includes both Cloud Functions v1 and Cloud Functions v2.
Using a VPC connector for network egress in GCP Cloud Functions is crucial to prevent security risks such as data interception and unauthorized access. This practice strengthens security by allowing safe communication with private resources, enhancing traffic monitoring, reducing the risk of data leaks, and ensuring compliance with security policies.
Note: For a Cloud Function to access public traffic using Serverless VPC Connector, Cloud NAT might be needed.
Link: https://cloud.google.com/functions/docs/networking/network-settings#route-egress-to-vpc
It is recommended to configure GCP Cloud Functions with a VPC connector.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings' drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. Under Section 'Egress settings', select a VPC connector from the dropdown\n8. In case VPC connector is not available, either select 'Custom' and provide the name of the VPC Connector manually or click on 'Create a Serverless VPC Connector' and follow the link to create a Serverless VPC connector: https://cloud.google.com/vpc/docs/configure-serverless-vpc-access\n9. Once the Serverless VPC connector is available, select it from the dropdown\n10. Select 'Route only requests to private IPs through the VPC connector' or 'Route all traffic through the VPC connector' as per your organization's policies.\n11. Click on 'NEXT'\n12. Click on 'DEPLOY'. |
```config from cloud.resource where api.name = 'alibaba-cloud-ecs-instance' as X; config from cloud.resource where api.name = 'alibaba-cloud-ecs-security-group' as Y; filter "$.X.publicIpAddress[*] is not empty and $.X.securityGroupIds[*] contains $.Y.securityGroupId and $.Y.permissions[?(@.policy=='Accept' && @.direction=='ingress')].sourceCidrIp contains 0.0.0.0/0"; show X;``` | Alibaba Cloud ECS instance that has a public IP address and is attached to a security group with internet access
This policy identifies ECS instances that have a public IP address and are attached to security groups with internet access. Because an ECS instance receives a public IP address at launch by default, as a best practice ensure that the instance is attached to a security group which is not overly permissive.
This is applicable to alibaba_cloud cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Instead of using a public IP address for the ECS instance, either associate an Elastic IP address to it or evaluate the rules for the security groups to ensure restricted access.\n\nTo allocate an Elastic IP address, follow the instructions below:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. Select the reported ECS instance\n4. Choose More > Network and Security Group > Convert to EIP\n5. On 'Convert to EIP' popup window, click on 'OK'\n\nTo restrict Security Groups allowing all traffic, follow the instructions below:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. Click on the reported ECS instance\n4. In the left-side navigation pane, choose Security Groups\n5. Check the rules of each security group by clicking on 'Add Rules' in the Actions column\n6. In Inbound tab, Select the rule having 'Action' as Allow and 'Authorization Object' as 0.0.0.0/0, Click Modify in the Actions column\n7. Replace the value 0.0.0.0/0 with specific IP address range.\n8. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'``` | Chao Copy of Critical - AWS S3 Object Versioning is disabled
This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3.
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save. |
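A minimal boto3 sketch, assuming a placeholder bucket name:

```python
# Sketch: enable Object Versioning on the reported S3 bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-bucket",  # placeholder: the reported bucket
    VersioningConfiguration={"Status": "Enabled"},
)
```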
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-credential-report' AND json.rule = 'user equals "<root_account>" and ( _DateTime.ageInDays(access_key_1_last_used_date) < 14 or _DateTime.ageInDays(access_key_2_last_used_date) < 14 or _DateTime.ageInDays(password_last_used) < 14 )'``` | AWS root account activity detected in last 14 days
This policy identifies if AWS root account activity was detected within the last 14 days.
The AWS root account user is the primary administrative identity associated with an AWS account, providing complete access to all AWS services and resources. Since the root user has complete access to the account, adopting the principle of least privilege is important to lower the risk of unintentional disclosure of highly privileged credentials and inadvertent alterations. It's also advised to remove the root user access keys and restrict the use of the root user, refraining from using them for routine or administrative duties.
It is recommended to restrict the use of the AWS root account.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: If any access keys are created for the root account, please delete the keys using the following steps:\n\n1. Sign in to AWS Console as the root user.\n2. Click the root account name and on the top right select 'Security Credentials' from the dropdown.\n3. For each key in 'Access Keys', click on 'X' to delete the keys.\n\nLimiting root user console access as much as feasible is advised. |
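For auditing, the 14-day check can be reproduced from the IAM credential report. A read-only boto3 sketch (the report may need a short wait after generation):

```python
# Sketch: flag root-account activity within the policy's 14-day window.
import csv
import io
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
iam.generate_credential_report()  # may need to be polled until ready
report = iam.get_credential_report()["Content"].decode("utf-8")

cutoff = datetime.now(timezone.utc) - timedelta(days=14)
for row in csv.DictReader(io.StringIO(report)):
    if row["user"] != "<root_account>":
        continue
    for field in ("password_last_used", "access_key_1_last_used_date",
                  "access_key_2_last_used_date"):
        value = row.get(field, "")
        if value and value not in ("N/A", "no_information"):
            if datetime.fromisoformat(value.replace("Z", "+00:00")) > cutoff:
                print(f"Root activity within 14 days: {field} = {value}")
```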
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.createvcn and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletevcn and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatevcn) and actions.actions[*].topicId exists' as X; count(X) less than 1``` | OCI Event Rule and Notification does not exist for VCN changes
This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Virtual Cloud Networks (VCN) changes. Monitoring and alerting on changes to VCN will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Virtual Cloud Networks (VCN).
NOTE:
1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event Rule at the root compartment level.
2. This policy will not trigger an alert if you have at least one Event Rule and Notification, whether the OCI tenancy has a single compartment or multiple compartments.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting VCN – Create, VCN - Delete and VCN – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = state.code equals active and type equals "network" and listeners[?any(protocol equals TLS and sslPolicy exists and sslPolicy does not contain ELBSecurityPolicy-TLS13-1-2-2021-06)] exists``` | AWS Network Load Balancer (NLB) is not using the latest predefined security policy
This policy identifies Network Load Balancers (NLBs) which are not using the latest predefined security policy. A security policy is a combination of protocols and ciphers. The protocol establishes a secure connection between a client and a server and ensures that all data passed between the client and your load balancer is private. A cipher is an encryption algorithm that uses encryption keys to create a coded message. It is therefore recommended to use the latest predefined security policy, which uses only secure protocols and ciphers.
For more details:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#describe-ssl-policies
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n4. Click on the reported Load Balancer\n5. On the 'Listeners' tab, Choose the 'TLS' rule\n6. Click on 'Edit', Change 'Security policy' to 'ELBSecurityPolicy-TLS13-1-2-2021-06'\n7. Click on 'Update' to save your changes. |
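A boto3 sketch that upgrades all TLS listeners on a load balancer to the latest predefined policy; the NLB ARN is a placeholder:

```python
# Sketch: move every TLS listener on one NLB to ELBSecurityPolicy-TLS13-1-2-2021-06.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abc"  # placeholder

for listener in elbv2.describe_listeners(LoadBalancerArn=nlb_arn)["Listeners"]:
    if (listener["Protocol"] == "TLS"
            and listener.get("SslPolicy") != "ELBSecurityPolicy-TLS13-1-2-2021-06"):
        elbv2.modify_listener(
            ListenerArn=listener["ListenerArn"],
            SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
        )
```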
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-load-balancer' AND json.rule = 'deleteProtection equals off'``` | Alibaba Cloud SLB delete protection is disabled
This policy identifies Server Load Balancers (SLB) for which delete protection is disabled. Enabling delete protection for these SLBs prevents irreversible data loss resulting from accidental or malicious operations.
This is applicable to alibaba_cloud cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Select the reported SLB instance, select More > Manage\n4. In the Instance Details tab, slide the 'Deletion Protection' button to green. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and backendSets.* is not empty and backendSets.*.sslConfiguration.certificateName is empty``` | OCI Load balancer backend set not configured with SSL certificate
This policy identifies Load balancers for which the backend set is not configured with an SSL certificate.
Without an SSL certificate, data transferred between the load balancer and backend servers is not encrypted, making it vulnerable to interception and attacks. Proper SSL configuration ensures data integrity and privacy, protecting sensitive information from unauthorized access.
As a best practice, it is recommended to implement SSL between the load balancer and your backend servers so that traffic between the load balancer and the backend servers is encrypted.
This is applicable to oci cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure SSL for your Load balancer backend set, refer to the below URLs:\nFor adding certificate - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingcertificates.htm#configuringSSLhandling\nFor editing backend set - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managingbackendsets.htm#UpdateBackendSet. |
```config from cloud.resource where api.name = 'aws-ec2-describe-network-acls' AND json.rule = associations[*] size less than 1``` | AWS Network ACL is not in use
This policy identifies AWS Network ACLs that are not in use.
AWS Network Access Control Lists (NACLs) serve as a firewall mechanism to regulate traffic flow within and outside VPC subnets. A recommended practice is to assign NACLs to specific subnets to effectively manage network traffic. Unassigned NACLs with inadequate rules might inadvertently get linked to subnets, posing a security risk by potentially allowing unauthorized access.
It is recommended to regularly review and remove unused and inadequate NACLs to improve security, network performance, and resource management.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To attach an AWS Network Access Control List (NACL) to a subnet, follow these steps: \n\n1. Sign into the AWS console and navigate to the Amazon VPC console. \n2. In the navigation pane, choose 'Network ACLs' under the 'Security' section. \n3. Select the NACL that you want to attach to a subnet. \n4. Choose the 'Actions' button, then select 'Edit subnet associations'. \n5. In the 'Edit subnet associations' dialogue box, select the subnet(s) that you want to associate with the NACL. \n6. Choose 'Save' to apply the changes. \n\nTo delete a non-default AWS Network Access Control List (NACL), follow these steps: \n\n1. Sign into the AWS console and navigate to the Amazon VPC console. \n2. In the navigation pane, choose 'Network ACLs' under the 'Security' section. \n3. Select the NACL that you want to delete. \n4. Choose the 'Actions' button, then select 'Delete network ACL'. \n5. Confirm the deletion when prompted. |
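A boto3 sketch that lists unassociated, non-default NACLs; the delete call is left commented out so removals only happen after review:

```python
# Sketch: find NACLs with no subnet associations (excluding the VPC default).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for page in ec2.get_paginator("describe_network_acls").paginate():
    for acl in page["NetworkAcls"]:
        if not acl["Associations"] and not acl["IsDefault"]:
            print(f"Unused NACL: {acl['NetworkAclId']}")
            # ec2.delete_network_acl(NetworkAclId=acl["NetworkAclId"])
```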
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-list' AND json.rule = 'name equals default'``` | GCP project is using the default network
This policy identifies the projects which have the default network configured. It is recommended to use a network configuration based on your security and networking requirements: create your own network and delete the default network.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to VPC network (Left panel)\n3. Click on the reported default network\n4. Click on 'DELETE VPC NETWORK'\n5. Create a new VPC network according to your requirement\nMore info: https://cloud.google.com/vpc/docs/vpc#firewall_rules. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'``` | BikramTest AWS S3 Object Versioning is disabled
This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save. |
```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-bedrock-custom-model' as Y; filter ' $.Y.trainingDataConfig.bucketName equals $.X.bucketName'; show X;``` | AWS S3 bucket is utilized for AWS Bedrock Custom model training data
This policy identifies the AWS S3 bucket utilized for AWS Bedrock Custom model training job data.
S3 buckets store the datasets required for training Custom models in AWS Bedrock. Proper configuration and access control are essential to ensure the security and integrity of the training data. Improperly configured S3 buckets used for AWS Bedrock Custom model training data can lead to unauthorized access, data breaches, and potential loss of sensitive information.
It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Bedrock Custom model training data and ensure compliance.
NOTE: This policy is designed to identify the S3 buckets utilized for training custom models in AWS Bedrock. It does not signify any detected misconfiguration or security risk.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the AWS Bedrock Custom model training job data, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-emr-public-access-block' AND json.rule = blockPublicAccessConfiguration.blockPublicSecurityGroupRules is false``` | AWS EMR Block public access setting disabled
This policy identifies AWS accounts in which the EMR block public access setting is disabled. AWS EMR block public access prevents a cluster in a public subnet from launching when any security group associated with the cluster has a rule that allows inbound traffic from the internet, unless the port has been specified as an exception. It is recommended to enable AWS EMR block public access in each AWS Region for your AWS account.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Refer to the following URL to configure AWS EMR Block public access:\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-block-public-access.html. |
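A boto3 sketch of enabling block public access for the current region; the port 22 exception is shown as an example only:

```python
# Sketch: enable EMR block public access, optionally allowing SSH as an exception.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
emr.put_block_public_access_configuration(
    BlockPublicAccessConfiguration={
        "BlockPublicSecurityGroupRules": True,
        # Optional exceptions; remove if no public ports should be allowed.
        "PermittedPublicSecurityGroupRuleRanges": [
            {"MinRange": 22, "MaxRange": 22},
        ],
    }
)
```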
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals "ACTIVE" and metadata.notebook-upgrade-schedule does not exist``` | GCP Vertex AI Workbench user-managed notebook auto-upgrade is disabled
This policy identifies GCP Vertex AI Workbench user-managed notebooks that have auto-upgrade disabled.
Auto-upgrading Google Cloud Vertex environments ensures timely security updates, bug fixes, and compatibility with APIs and libraries. It reduces security risks associated with outdated software, enhances stability, and enables access to new features and optimizations.
It is recommended to enable auto-upgrade to minimize maintenance overhead and mitigate security risks.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Under 'Vertex AI', navigate to the 'Workbench' (Left Panel)\n3. Select 'USER-MANAGED NOTEBOOKS' tab\n4. Click on the reported notebook\n5. Go to 'SYSTEM' tab\n6. Enable 'Environment auto-upgrade'\n7. Configure upgrade schedule as required\n8. Click 'SUBMIT'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals "RUNNABLE" and deletionProtectionEnabled is false``` | GCP SQL database instance deletion protection is disabled
This policy identifies GCP SQL database instances that have deletion protection disabled.
Enabling instance deletion protection on GCP SQL databases is crucial for preventing accidental data loss, especially in production environments where an unintended deletion could disrupt services and impact business continuity. Deletion protection adds an extra safeguard, requiring intentional action to disable the setting before deletion, helping teams avoid costly downtime and ensuring the availability of essential data.
It is recommended to enable deletion protection on GCP SQL database instances to prevent accidental deletion.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'SQL' service\n3. Click on the name of the SQL instance on which alert is generated\n4. Click 'EDIT' at top\n5. Expand 'Data Protection'\n6. Check 'Enable deletion protection'\n7. Click 'Save' at bottom. |
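A sketch using the Cloud SQL Admin API with placeholder project and instance names; deletionProtectionEnabled is assumed to be accepted under settings on patch:

```python
# Sketch: enable deletion protection on one Cloud SQL instance.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
sqladmin.instances().patch(
    project="my-project",        # placeholder
    instance="my-sql-instance",  # placeholder
    body={"settings": {"deletionProtectionEnabled": True}},
).execute()
```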
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-appsync-graphql-api' AND json.rule = wafWebAclArn does not exist``` | AWS AppSync not configured with AWS Web Application Firewall v2 (AWS WAFv2)
This policy identifies AWS AppSync APIs that are not configured with AWS Web Application Firewall. As a best practice, enable the AWS WAF service on AppSync to protect against application layer attacks. To block malicious requests to your AppSync, define the block criteria in the WAF web access control list (web ACL).
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure AppSync with AWS WAF, follow the below URL:\nhttps://docs.aws.amazon.com/appsync/latest/devguide/WAF-Integration.html. |
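A boto3 sketch of associating an existing WAFv2 web ACL (Scope=REGIONAL, same region as the API) with an AppSync API; both ARNs are placeholders:

```python
# Sketch: attach a regional WAFv2 web ACL to an AppSync GraphQL API.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abc",  # placeholder
    ResourceArn="arn:aws:appsync:us-east-1:123456789012:apis/my-api-id",           # placeholder
)
```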
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING) and ($.X.securityConfiguration equals $.Y.name) and ($.Y.EncryptionConfiguration.EnableInTransitEncryption is false)' ; show X;``` | AWS EMR cluster is not enabled with data encryption in transit
This policy identifies AWS EMR clusters which are not enabled with data encryption in transit. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and storage servers. Enabling data encryption in transit helps prevent unauthorized users from reading sensitive data between your EMR clusters and their associated storage systems.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. Under 'Data in transit encryption', check the box 'Enable in-transit encryption'.\n8. From the dropdown of 'TLS certificate provider' select the appropriate certificate provider type and follow below link to create them.\n Reference: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-encryption-enable.html\n9. Click on 'Create' button.\n10. On the left menu of EMR dashboard Click 'Clusters'.\n11. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n12. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n13. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n14. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n15. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it.\n16. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted.\n17. Click on the 'Terminate' button from the top menu.\n18. On the 'Terminate clusters' pop-up, click 'Terminate'. |
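Steps 4-9 (creating the security configuration) can also be scripted with boto3; the configuration name and the S3 location of the certificate bundle are placeholders:

```python
# Sketch: create an EMR security configuration with in-transit encryption enabled.
import json

import boto3

emr = boto3.client("emr", region_name="us-east-1")
security_config = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": True,
        "EnableAtRestEncryption": False,
        "InTransitEncryptionConfiguration": {
            "TLSCertificateConfiguration": {
                "CertificateProviderType": "PEM",
                "S3Object": "s3://my-bucket/certs.zip",  # placeholder
            }
        },
    }
}
emr.create_security_configuration(
    Name="in-transit-encryption",  # placeholder
    SecurityConfiguration=json.dumps(security_config),
)
```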
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and properties.httpsOnly equals false'``` | Azure App Service Web app doesn't redirect HTTP to HTTPS
Azure Web Apps allows sites to run under both HTTP and HTTPS by default, so web apps can be accessed by anyone using non-secure HTTP links. Non-secure HTTP requests can be restricted and all HTTP requests redirected to the secure HTTPS port. It is recommended to enforce HTTPS-only traffic.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under the Settings section, Click on 'Configuration'\n5. In 'General Settings', under 'Platform settings' Set 'HTTPS Only' to 'On'. |
```config from cloud.resource where api.name= 'aws-cloudtrail-describe-trails' AND json.rule = 'isMultiRegionTrail is true and includeGlobalServiceEvents is true' as X; config from cloud.resource where api.name= 'aws-cloudtrail-get-trail-status' AND json.rule = 'status.isLogging equals true' as Y; config from cloud.resource where api.name= 'aws-cloudtrail-get-event-selectors' AND json.rule = '(eventSelectors[*].readWriteType contains All and eventSelectors[*].includeManagementEvents equal ignore case true) or (advancedEventSelectors[*].fieldSelectors[*].equals contains "Management" and advancedEventSelectors[*].fieldSelectors[*].field does not contain "readOnly" and advancedEventSelectors[*].fieldSelectors[*].field does not contain "eventSource")' as Z; filter '($.X.trailARN equals $.Z.trailARN) and ($.X.name equals $.Y.trail)'; show X; count(X) less than 1``` | AWS CloudTrail is not enabled with multi trail and not capturing all management events
This policy identifies the AWS accounts which do not have a CloudTrail with multi trail enabled and capturing all management events. AWS CloudTrail is a service that enables governance, compliance, operational & risk auditing of the AWS account. It is a compliance and security best practice to turn on CloudTrail across different regions to get a complete audit trail of activities across various services.
NOTE: If you have Organization Trail enabled in your account, this policy can be disabled, or alerts generated for this policy on such an account can be ignored; as Organization Trail by default enables trail log for all accounts under that organization.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Refer to the following link to create/update the trail:\nhttps://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html\n\nRefer to the following link for more info on logging management events:\nLogging management events - AWS CloudTrail. |
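A boto3 sketch of updating an existing trail to be multi-region and capture all management events; the trail name is a placeholder:

```python
# Sketch: make a trail multi-region and log all management events.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
cloudtrail.update_trail(
    Name="my-trail",  # placeholder: the existing trail
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[{"ReadWriteType": "All", "IncludeManagementEvents": True}],
)
```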
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.diskEncryptionMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with "ASC Default"))'``` | Azure Microsoft Defender for Cloud disk encryption monitoring is set to disabled
This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have disk encryption monitoring set to disabled. Enabling disk encryption for virtual machines will secure the data by encrypting it. It is recommended to set disk encryption monitoring in Microsoft Defender for Cloud security policy.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources' to 'AuditIfNotExists'\n9. If no other changes are required, then click on 'Review + save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = "configurations.value[?(@.name=='log_retention_days')].properties.value less than 4"``` | Azure PostgreSQL database server log retention days is less than or equals to 3 days
This policy identifies PostgreSQL database servers that have log retention set to 3 days or less. The log_retention_days parameter sets the number of days a log file is retained, ensuring query and error logs remain available. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. For 'log_retention_days', enter a value in the range 4-7 (inclusive) and click on the 'Save' button. |
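A hedged sketch of the same change with the azure-mgmt-rdbms SDK (Single Server); the `begin_create_or_update` operation and `Configuration` model are assumed from recent SDK versions, and all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.postgresql import PostgreSQLManagementClient
from azure.mgmt.rdbms.postgresql.models import Configuration

client = PostgreSQLManagementClient(
    DefaultAzureCredential(), "<subscription-id>"  # placeholder
)

# Raise log_retention_days into the recommended 4-7 day range.
client.configurations.begin_create_or_update(
    resource_group_name="my-rg",        # placeholder
    server_name="my-pg-server",         # placeholder
    configuration_name="log_retention_days",
    parameters=Configuration(value="7", source="user-override"),
).result()
```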
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist``` | Patch 21.11.1 - RLP-83104 - Copy of Critical of AWS S3 bucket publicly readable
This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace if ACLs and the bucket policy are not handled properly; with this configuration you risk compromising critical data by leaving the bucket public.
For more details:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access. |
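Where no public access is intended at all, the fix can also be applied programmatically; a boto3 sketch (the bucket name is a placeholder, and you should confirm nothing legitimate depends on public access first):

```python
import boto3

s3 = boto3.client("s3")

# Block both ACL-based and policy-based public access on the bucket.
s3.put_public_access_block(
    Bucket="my-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```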
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'nodeConfig.imageType does not exist or nodeConfig.imageType does not start with COS'``` | GCP Kubernetes Engine Clusters not using Container-Optimized OS for Node image
This policy identifies Kubernetes Engine Clusters which do not have a container-optimized operating system for node image. Container-Optimized OS is an operating system image for your Compute Engine VMs that is optimized for running Docker containers. By using Container-Optimized OS for node image, you can bring up your Docker containers on Google Cloud Platform quickly, efficiently, and securely. The Container-Optimized OS node image is based on a recent version of the Linux kernel and is optimized to enhance node security. It is also regularly updated with features, security fixes, and patches. The Container-Optimized OS image provides better support, security, and stability than other images.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Under Node Pools, For Node image click on 'Change'\n6. Choose 'Container-Optimized OS (cos)' \n7. Click on Change. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = releaseChannel.channel does not exist``` | GCP Kubernetes Engine cluster not using Release Channel for version management
This policy identifies GCP Kubernetes Engine clusters that are not using Release Channel for version management. Subscribing to a specific release channel reduces version management complexity.
The Regular release channel upgrades every few weeks and is for production users who need features not yet offered in the Stable channel. These versions have passed internal validation, but don't have enough historical data to guarantee their stability. Known issues generally have known workarounds.
The Stable release channel upgrades every few months and is for production users who need stability above all else, and for whom frequent upgrades are too risky. These versions have passed internal validation and have been shown to be stable and reliable in production, based on the observed performance of those clusters.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to gcloud console\n2. Navigate to service 'Kubernetes Engine'\n3. From the list of available clusters, select the reported cluster\n4. Go to the 'Release channel' configuration\n5. To edit, Click on the 'UPGRADE AVAILABLE' or 'Edit release channel'(Whichever available)\n6. In the 'Edit version' pop-up, select the required release channel(Regular Channel/ Stable Channel/ Rapid Channel) from the 'Release channel' dropdown\n7. Click on 'SAVE CHANGES' or 'CHANGE'.\n\nKnow more on Release Channels here: https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels. |
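A hedged sketch with the google-cloud-container client; it assumes `ClusterUpdate.desired_release_channel` is available as in `container_v1`, and the resource path is a placeholder:

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Fully qualified cluster name: projects/<project>/locations/<location>/clusters/<cluster>
name = "projects/my-project/locations/us-central1/clusters/my-cluster"  # placeholder

# Subscribe the cluster to the REGULAR release channel.
client.update_cluster(
    request={
        "name": name,
        "update": {"desired_release_channel": {"channel": "REGULAR"}},
    }
)
```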
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-sql-instances-list' and json.rule = 'settings.userLabels[*] does not exist'``` | GCP SQL Instances without any Label information
This policy identifies SQL DB instances that do not have any labels. Labels can be used for easy identification and searches.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Console.\n2. On left Navigation, Click on SQL\n3. Select the reported SQL instance.\n4. Click on EDIT, Add labels with the appropriate Key:Value information.\n5. Click Save. |
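For reference, a sketch using the Cloud SQL Admin API via google-api-python-client (project, instance, and label values are placeholders):

```python
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

# Patch merges the given settings, adding user labels to the instance.
service.instances().patch(
    project="my-project",    # placeholder
    instance="my-instance",  # placeholder
    body={"settings": {"userLabels": {"env": "prod", "owner": "team-db"}}},
).execute()
```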
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = 'networkRuleSet.bypass does not contain AzureServices'``` | Azure Storage Account 'Trusted Microsoft Services' access not enabled
This policy identifies Storage Accounts which have 'Trusted Microsoft Services' access not enabled. Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. If the Allow trusted Microsoft services exception is enabled, the following services: Azure Backup, Azure Site Recovery, Azure DevTest Labs, Azure Event Grid, Azure Event Hubs, Azure Networking, Azure Monitor and Azure SQL Data Warehouse (when registered in the subscription), are granted access to the storage account. It is recommended to enable Trusted Microsoft Services on storage account instead of leveraging network rules.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to Storage Accounts dashboard\n3. Select the storage account you need to modify\n4. Under 'Security + networking' section, Click on 'Networking'\n5. Under 'Firewalls and virtual networks' tab, Ensure that 'Enabled from selected virtual networks and IP addresses' is selected.\n6. Under 'Exceptions', Make sure that 'Allow Azure services on the trusted services list to access this storage account' is checked.\n7. Click on 'Save'. |
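A hedged azure-mgmt-storage sketch of the same change (names are placeholders; note that setting `default_action` to Deny without carrying over your existing IP and VNet rules can cut off legitimate clients):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import NetworkRuleSet, StorageAccountUpdateParameters

client = StorageManagementClient(
    DefaultAzureCredential(), "<subscription-id>"  # placeholder
)

# Deny by default, but let trusted Microsoft services bypass the rules.
client.storage_accounts.update(
    resource_group_name="my-rg",    # placeholder
    account_name="mystorageacct",   # placeholder
    parameters=StorageAccountUpdateParameters(
        network_rule_set=NetworkRuleSet(bypass="AzureServices", default_action="Deny")
    ),
)
```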
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-oss-bucket-info' AND json.rule = 'serverSideEncryptionConfiguration.applyServerSideEncryptionByDefault.ssealgorithm equals None'``` | Alibaba Cloud OSS bucket server-side encryption is disabled
This policy identifies Object Storage Service (OSS) buckets which have server-side encryption disabled. As a best practice, enable server-side encryption to improve data security without making changes to your business or applications. OSS encrypts user data when writing the data into the hard disks deployed in the data center and automatically decrypts the data when it is accessed.
This is applicable to alibaba_cloud cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Object Storage Service\n3. In the left-side navigation pane, click on the reported bucket\n4. In the 'Basic Settings' tab, In the 'Server-side Encryption' Section, Click on 'Configure'\n5. For 'Bucket Encryption' field, Set either 'KMS' or 'AES256' encryption instead of 'None'\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-file-storage-file-system' AND json.rule = kmsKeyId is empty``` | OCI File Storage File Systems are not encrypted with a Customer Managed Key (CMK)
This policy identifies the OCI File Storage File Systems that are not encrypted with a Customer Managed Key (CMK). It is recommended that File Storage File Systems be encrypted with a Customer Managed Key (CMK); using a CMK provides an additional level of security on your data by allowing you to manage your own encryption key lifecycle for the File System.
This is applicable to oci cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Assign next to Encryption Key: Oracle managed key.\n5. Select a Vault from the appropriate compartment\n6. Select a Master Encryption Key\n7. Click Assign. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-object-storage-bucket' AND json.rule = (firewall does not exist or (firewall exists and _IPAddress.areAnyOutsideCIDRRange(firewall.allowed_ip,192.168.0.0/16,172.16.0.0/12,10.0.0.0/8) is true))``` | IBM Cloud Object Storage bucket is not restricted to Private IP ranges
This policy identifies IBM Cloud object storage buckets that are not restricted to private IP ranges or if the cloud object storage firewall is not configured.
IBM Cloud Storage Firewall enables users to control access to their stored data by setting up firewall rules and restricting access to authorised IP addresses or ranges, thereby enhancing security and compliance with regulatory standards. Not restricting access via the IBM Cloud Storage Firewall to private IPs increases the risk of unauthorised data access, breaches, and potential compliance violations.
It is recommended to add only private IPs to the list of authorised IPs / ranges in bucket firewall policies.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To set a list of authorised IP addresses or remove the public IP from the IBM cloud object storage,\n\n1. Log in to the IBM Cloud Console\n2. Click on the menu icon and navigate to 'Resource list'. From the list of resources, select the object storage instance in which the reported bucket resides\n3. Select the bucket to which you want to limit access to authorised IP addresses\n4. Select the 'Firewall (legacy)' dropdown under the 'Permissions' tab\n5. Click on 'Edit' and Click on 'Add' and specify a list of IP addresses from the IBM cloud private IP range in CIDR notation, for example,\n192.168.0.0/16, fe80:021b::0/64. Addresses can follow either IPv4 or IPv6 standards\n6. Click 'Add', or click on the public IP address presented in the IP address tab, and then click 'Delete'\n7. Click 'Save All' to enforce the firewall. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway-waf-policy' AND json.rule = properties.policySettings.state equals Enabled and properties.managedRules.managedRuleSets is not empty and properties.managedRules.managedRuleSets[*].ruleGroupOverrides[*].rules[?any(ruleId equals 944240 and state equals Disabled)] exists and properties.applicationGateways[*] is not empty``` | Azure Application Gateway Web application firewall (WAF) policy rule for Remote Command Execution is disabled
This policy identifies Azure Application Gateway Web application firewall (WAF) policies that have the Remote Command Execution rule disabled. It is recommended to define the criteria in the WAF policy with the rule ‘Remote Command Execution (944240)’ under managed rules to help in detecting and mitigating Log4j vulnerability.
For details:
https://www.microsoft.com/security/blog/2021/12/11/guidance-for-preventing-detecting-and-hunting-for-cve-2021-44228-log4j-2-exploitation/
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Web Application Firewall policies (WAF)'\n3. Click on the reported Web Application Firewall policies (WAF) policy\n4. Click on the 'Managed rules' from the left panel\n5. Search for '944240' in Managed rule sets and Select rule\n6. Click on the 'Enable' to enable rule\n7. Click on 'Save' to save your changes. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-vpn-gateways-summary' AND json.rule = 'TotalVPNGateways greater than 3'``` | AWS regions nearing VPC Private Gateway limit
This policy identifies if your account is near the private gateway limitation per VPC per Region. AWS provides a reasonable starting limitation for the maximum number of Virtual private gateways you can assign in each VPC. If you approach the limit in a particular VPC, this alert indicates that you have nearly exhausted your allocation.
NOTE: As per http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html Virtual private gateway per region limit is 5. This policy will trigger an alert if Virtual private gateway per region reached 80% (i.e. 4) of resource availability limit allocated.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. Click on 'Virtual Private Gateways' (Left Panel)\n5. Choose the Virtual Private Gateway you want to delete, which is not used or required\n6. Click on 'Actions' dropdown\n7. Click on 'Delete Virtual Private Gateway'\nNOTE: If a Virtual Private Gateway is already in use it cannot be deleted. Make sure the gateway is disassociated before deleting it.\n8. On 'Delete Virtual Private Gateway' popup dialog, Click on 'Yes, Delete'\nNOTE: If existing Virtual Private Gateways are properly associated and you have exhausted your VPC Virtual Private Gateway limit allocation, you can contact AWS for a service limit increase. |
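A boto3 sketch for checking how close a region is to the limit and removing an unused gateway (the gateway ID is a placeholder; only delete gateways that are detached and no longer needed):

```python
import boto3

ec2 = boto3.client("ec2")  # uses the region of your current AWS profile

gateways = ec2.describe_vpn_gateways()["VpnGateways"]
print(f"Virtual private gateways in this region: {len(gateways)} (limit is 5)")

# Delete a detached gateway that is no longer required.
ec2.delete_vpn_gateway(VpnGatewayId="vgw-0123456789abcdef0")  # placeholder
```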
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = 'lastRotatedDate exists and rotationEnabled is true and _DateTime.daysBetween($.lastRotatedDate,today()) > $.rotationRules.automaticallyAfterDays'``` | AWS Secrets Manager secret configured with automatic rotation not rotated as scheduled
This policy identifies the AWS Secrets Manager secret not rotated successfully based on the rotation schedule.
Secrets Manager stores secrets centrally, encrypts them automatically, controls access, and rotates secrets safely. By rotating secrets, you replace long-term secrets with short-term ones, limiting the risk of unauthorized use. If secrets fail to rotate in Secrets Manager, long-term secrets remain in use, increasing the risk of unauthorized access and potential data breaches.
It is recommended that proper configuration and monitoring of the rotation process be ensured to mitigate these risks.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: For help diagnosing and fixing common errors related to secrets rotation, refer to the URL:\n\nhttps://docs.aws.amazon.com/secretsmanager/latest/userguide/troubleshoot_rotation.html. |
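A small boto3 sketch for inspecting the rotation state and forcing an immediate rotation attempt, which surfaces configuration errors in the rotation Lambda's logs (the secret ID is a placeholder):

```python
import boto3

sm = boto3.client("secretsmanager")

detail = sm.describe_secret(SecretId="my-secret")  # placeholder
print("RotationEnabled:", detail.get("RotationEnabled"))
print("LastRotatedDate:", detail.get("LastRotatedDate"))

# Trigger a rotation now; failures show up in the rotation function's logs.
sm.rotate_secret(SecretId="my-secret")
```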
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-acl' AND json.rule = rules[?any( source equals "0.0.0.0/0" and direction equals "inbound" and action equals "allow" and ( (protocol equals "tcp" and (( destination_port_max greater than 3389 and destination_port_min less than 3389 ) or ( destination_port_max equals 3389 and destination_port_min equals 3389 ))) or protocol equals "all" ))] exists``` | IBM Cloud VPC ACL allow ingress rule from 0.0.0.0/0 to RDP port
This policy identifies IBM Cloud VPC Access Control Lists that have an ingress rule allowing traffic from 0.0.0.0/0 to the RDP port. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. It is recommended to review VPC ACL rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the reported VPC ACL does indeed need to restrict this traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Access Control Lists'\n3. Select the 'Access Control Lists' reported in the alert\n4. Under 'Inbound rules'\n5. Click on three dots on the right corner of a row containing a rule that has a port range value of ALL or a port range that includes port 3389 and has a Source of 0.0.0.0/0\n6. Click on 'Delete'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'``` | AWS S3 Object Versioning is disabled
This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save. |
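The same change in a minimal boto3 sketch (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Enable Object Versioning on the reported bucket.
s3.put_bucket_versioning(
    Bucket="my-bucket",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)
```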
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(5500,5500)"``` | Alibaba Cloud Security group allow internet traffic to VNC Listener port (5500)
This policy identifies Security groups that allow inbound traffic on VNC Listener port (5500) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 5500, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and identity.type does not contain UserAssigned``` | Azure Machine Learning workspace not configured with user-assigned managed identity
This policy identifies Azure Machine Learning workspaces that are not configured with a user-assigned managed identity.
By default, Azure Machine Learning workspaces use system-assigned managed identities to access resources like Azure Container Registry, Key Vault, Storage, and Application Insights. However, user-assigned managed identities offer better control over the identity's lifecycle and consistent access management across multiple resources. Since system-assigned identities are tied to the workspace and deleted if the workspace is removed, using a user-assigned identity allows access to be managed independently, enhancing security and compliance.
As a security best practice, it is recommended to configure the Azure Machine Learning workspace with a user-assigned managed identity.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Method 1: Updating an Existing Workspace\n1. Once an Azure Machine Learning workspace is created with a System-Managed Identity, you cannot change it to use only a User-Assigned Managed Identity. You can update the workspace to use both System-Managed and User-Assigned Managed Identities.\n2. For detailed instructions on how to configure this, visit the following URL: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-identity-based-service-authentication?view=azureml-api-2&tabs=cli#add-a-user-assigned-managed-identity-to-a-workspace-in-addition-to-a-system-assigned-identity\n\nor\n\nMethod 2: Deleting the Existing Workspace and Creating a New Workspace\n1. To use only a User-Assigned Managed Identity, delete the existing workspace.\n2. Create a new Azure Machine Learning workspace. During the setup, select 'User Assigned Identity' under the 'Identity' tab to ensure it exclusively uses a User-Assigned Managed Identity from the start. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-credential-report' AND json.rule = 'user equals "<root_account>" and mfa_active is false and arn does not contain gov:'``` | AWS MFA is not enabled on Root account
This policy identifies the root account if MFA is not enabled. Root accounts have privileged access to all AWS services. Without MFA, if the root credentials are compromised, unauthorized users will get full access to your account.
NOTE: This policy does not apply to AWS GovCloud Accounts. As you cannot enable an MFA device for AWS GovCloud (US) account root user. For more details refer: https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-console.html
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MFA'].
Mitigation of this issue can be done as follows: 1. Sign in to the 'AWS Console' using Root credentials.\n2. Navigate to the 'IAM' service.\n3. On the dashboard, click on 'Activate MFA on your root account', click on 'Manage MFA' and follow the steps to configure MFA for the root account. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(IPProtocol equals "all")] exists``` | GCP Firewall with Inbound rule overly permissive to All Traffic
This policy identifies GCP Firewall rules that allow inbound traffic on all protocols from the public internet. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the Firewall rule reported indeed need to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to VPC Network\n3. Go to the Firewall rules\n4. Click on the reported Firewall rule\n5. Click Edit\n6. Modify Source IP ranges to specific IP and modify Protocols and ports to specific protocol and port\n7. Click Save. |
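A hedged google-cloud-compute sketch of step 6: narrow the rule's source range and allowed protocol/port via a patch (project, rule name, CIDR, and port are placeholders; `I_p_protocol` is the SDK's generated field name for IPProtocol):

```python
from google.cloud import compute_v1

client = compute_v1.FirewallsClient()

# Replace the 0.0.0.0/0 source and allow-all protocol with specific values.
patch = compute_v1.Firewall(
    source_ranges=["203.0.113.0/24"],  # placeholder: your known static range
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
)

client.patch(
    project="my-project",          # placeholder
    firewall="allow-all-ingress",  # placeholder: the reported rule
    firewall_resource=patch,
)
```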
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case "Running" AND kind contains "functionapp" AND kind does not contain "workflowapp" AND kind does not equal "app" AND properties.clientCertEnabled is false``` | Azure Function App client certificate is disabled
This policy identifies Azure Function Apps that are not configured with a client certificate. Client certificates allow the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Incoming client certificates', Set 'Client certificate mode' to Require\n6. Click on 'Save'\n\nIf the Function App is hosted on Linux using the Consumption (Serverless) plan, follow the step below\nAzure CLI Command - \"az functionapp update --set clientCertEnabled=true --name MyFunctionApp --resource-group MyResourceGroup\". |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals StorageAccounts and properties.pricingTier does not equal Standard)] exists``` | Azure Microsoft Defender for Cloud is set to Off for Storage
This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) where the Defender setting for Storage is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Storage.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Storage' Select 'On' under Plan.\n8. Select 'Save'. |
```config from cloud.resource where api.name = 'azure-vm-list' AND json.rule = ['Extensions'].['Microsoft.PowerShell.DSC'].['settings'].['properties'].['hostPoolName'] exists and powerState contains running as X; config from cloud.resource where api.name = 'azure-disk-list' AND json.rule = provisioningState equal ignore case Succeeded and (encryption.type does not contain "EncryptionAtRestWithCustomerKey" or encryption.diskEncryptionSetId does not exist) as Y; filter ' $.X.id equal ignore case $.Y.managedBy '; show Y;``` | Azure Virtual Desktop disk encryption not configured with Customer Managed Key (CMK)
This policy identifies Azure Virtual Desktop environments where disk encryption is not configured using a Customer Managed Key (CMK).
Disk encryption is crucial for protecting data in Azure Virtual Desktop environments. By default, disks may be encrypted with Microsoft-managed keys, which might not meet specific security requirements. Using Customer Managed Keys (CMKs) offers better control over encryption, allowing organizations to manage key rotation, access, and revocation, thereby enhancing data security and compliance.
As a best practice, it is recommended to configure disk encryption for Azure Virtual Desktop with a Customer Managed Key (CMK).
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: To enable disk encryption on any disks attached to a VM, you must first stop the VM.\n\n1. Log in to Azure Portal and search for 'Disks'.\n2. Select 'Disks'.\n3. Select the reported disk.\n4. Under 'Settings' select 'Encryption'.\n5. For 'Key management', select 'Customer-managed key' from the drop-down list.\n6. For the disk encryption set, select an existing one. If none are available, create a new disk encryption set.\n7. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-volume' AND json.rule = volume_attachments[*] size greater than 0 and volume_attachments[*].type equals data and encryption equal ignore case provider_managed``` | IBM Cloud data disk is not encrypted with customer managed key
This policy identifies IBM Cloud data storage volumes attached to a virtual server instance that are not encrypted with customer managed keys. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data.
This is applicable to ibm cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: A storage volume can be encrypted with customer managed keys only at the time of creation. Please\ncreate a snapshot following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nPlease create a storage volume from the above created snapshot with customer managed encryption:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Virtual server instance'\n3. From the list, click on the name of an instance. The instance must be in a Running state.\n4. On the Instance details page, scroll to the list of Storage volumes and click 'Attach'.\n A side panel opens for you to define the volume attachment.\n5. From the Attach data volume panel, expand the list of Block volumes and select 'Create a data volume'.\n6. Select 'Import from snapshot'. Expand the Snapshot list and select a snapshot.\n7. Optionally, increase the size of the volume within the specified range.\n8. Under 'Encryption' section, select either 'Key protect' or 'Hyper Protect Crypto Services'.\n9. Under 'Encryption service instance' and 'Key name', select the instance and key to be used for encryption.\n10. Click Save. The side panel closes and messages indicate that the restored volume is being attached to the instance.\n\nPlease delete the reported data disk following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-managing-block-storage&interface=ui#delete. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' and json.rule = osType exists and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of ("EncryptionAtRestWithCustomerKey","EncryptionAtRestWithPlatformAndCustomerKeys","EncryptionAtRestWithPlatformKey")``` | Azure VM OS disk is not configured with any encryption
This policy identifies VM OS disks that are not configured with any encryption. Azure offers Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK] by default for managed disks. It is recommended to enable default encryption, or you may optionally choose a customer-managed key, to protect from malicious activity.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'Disks'\n3. Select the reported OS disk you want to modify\n4. Select 'Encryption' under 'Settings'\n5. Select 'Encryption Type' according to your encryption requirement.\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'aws' AND cloud.account = 'jScheel AWS Account' AND api.name = 'aws-ec2-describe-instances' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = ipPermissions[?any( toPort equals 51820 and ipRanges[*] contains "0/0" )] exists as Y; config from cloud.resource where api.name = 'aws-ec2-describe-route-tables' AND json.rule = routes[?any( state equals active and gatewayId contains "igw" and destinationCidrBlock contains "0/0" )] exists as Z; filter ' $.X.securityGroups[*].groupId == $.Y.groupId and $.X.subnetId == $.Z.associations[*].subnetId'; show Z;``` | jScheel Wireguard instance allows ANY toPort on 51820
Wireguard instance allows ANY toPort on 51820
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-describe-mount-targets' AND json.rule = 'fileSystemDescription.encrypted is false'``` | AWS Elastic File System (EFS) with encryption for data at rest is disabled
This policy identifies Elastic File Systems (EFSs) for which encryption for data at rest is disabled. It is highly recommended to implement at-rest encryption in order to prevent unauthorized users from reading sensitive data saved to EFSs.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: AWS EFS Encryption of data at rest can only be enabled during file system creation. So to resolve this alert, create a new EFS with encryption enabled, then migrate all required file data from the reported EFS to this newly created EFS and delete the reported EFS.\n\nTo create a new EFS with encryption enabled, perform the following:\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Click on the 'Create file system' button\n6. On the 'Create file system' pop-up window,\n7. Click on 'Customize' button to replicate the configurations of the alerted file system as required\n8. Ensure 'Enable encryption of data at rest' is selected\n9. On the 'Review and create' step, review all your settings and click on the 'Create' button\n\nTo delete the reported EFS which does not have encryption, perform the following:\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Select the reported file system\n6. Click on 'Delete' button\n7. In the 'Delete file system' popup box, to confirm the deletion enter the file system's ID and click on 'Confirm'. |
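The replacement file system from the first half of the remediation can be created with boto3; a sketch (the creation token is a placeholder and must be unique per account):

```python
import boto3

efs = boto3.client("efs")

# New EFS with encryption at rest; omitting KmsKeyId uses the default
# aws/elasticfilesystem KMS key.
fs = efs.create_file_system(
    CreationToken="encrypted-replacement-001",  # placeholder
    Encrypted=True,
    PerformanceMode="generalPurpose",
)
print("New file system:", fs["FileSystemId"])
```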
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy buecs
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createuser and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deleteuser and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateuser and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateusercapabilities and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updateuserstate) and actions.actions[*].topicId exists' as X; count(X) less than 1``` | OCI Event Rule and Notification does not exist for user changes
This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for IAM User changes. Monitoring and alerting on changes to IAM Users will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity and Access Management (IAM) Users.
NOTE:
1. Event Rules are compartment scoped and will detect events in child compartments, it is recommended to create the Event rule at the root compartment level.
2. This policy will not trigger an alert if at least one qualifying Event Rule and Notification exists, whether OCI has a single compartment or multiple compartments.
This is applicable to oci cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting: User – Create, User – Delete, User – Update, User Capabilities – Update, User State – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-registry' AND json.rule = skuName contains "Classic"``` | Azure Container Registry using the deprecated classic registry
This policy identifies an Azure Container Registry (ACR) that is using the classic SKU. The initial release of the Azure Container Registry (ACR) service that was offered as a classic SKU is being deprecated and will be unavailable after April 2019. As a best practice, upgrade your existing classic registry to a managed registry.
For more information, visit https://docs.microsoft.com/en-us/azure/container-registry/container-registry-upgrade
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Select 'All services' > 'Container Registries'\n3. Select the container registry you need to modify.\n4. Select 'Upgrade to managed registry'.\n5. Select 'OK' to confirm the upgrade. |
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or passwordReusePrevention equals null or passwordReusePrevention !isType Integer or passwordReusePrevention < 1'``` | AWS IAM password policy allows password reuse
This policy identifies IAM password policies that allow password reuse. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to the 'IAM' service.\n2. Click on 'Account Settings', check 'Prevent password reuse'. |
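A boto3 sketch of the fix; note that update_account_password_policy replaces the whole policy, so the sketch carries existing settings over before setting reuse prevention:

```python
import boto3

iam = boto3.client("iam")

current = iam.get_account_password_policy()["PasswordPolicy"]

# Re-apply the existing policy values and prevent reuse of the last 24 passwords.
iam.update_account_password_policy(
    MinimumPasswordLength=current.get("MinimumPasswordLength", 14),
    RequireSymbols=current.get("RequireSymbols", True),
    RequireNumbers=current.get("RequireNumbers", True),
    RequireUppercaseCharacters=current.get("RequireUppercaseCharacters", True),
    RequireLowercaseCharacters=current.get("RequireLowercaseCharacters", True),
    PasswordReusePrevention=24,
)
```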
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-acl' AND json.rule = rules[?any( source equals "0.0.0.0/0" and direction equals "inbound" and action equals "allow" and ( (protocol equals "tcp" and (( destination_port_max greater than 22 and destination_port_min less than 22 ) or ( destination_port_max equals 22 and destination_port_min equals 22 ))) or protocol equals "all" ))] exists``` | IBM Cloud VPC ACL allow ingress rule from 0.0.0.0/0 to SSH port
This policy identifies IBM Cloud VPC Access Control Lists that have an ingress rule allowing traffic from 0.0.0.0/0 to the SSH port. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. It is recommended to review VPC ACL rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the reported VPC ACL does indeed need to restrict this traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Access Control Lists'\n3. Select the 'Access Control Lists' reported in the alert\n4. Under 'Inbound rules'\n5. Click on three dots on the right corner of a row containing a rule that has a port range value of ALL or a port range that includes port 22 and has a Source of 0.0.0.0/0\n6. Click on 'Delete'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-policy' AND json.rule = lifecycleState equals ACTIVE and (statements[*] contains "to manage all-resources in tenancy" or statements[*] contains "to manage all-resources IN TENANCY") and name does not contain "Tenant Admin Policy"``` | OCI IAM policy with full administrative privileges across the tenancy to non Administrator
This policy identifies IAM policies that grant full administrative privileges across the tenancy to non-Administrators. IAM policies are the means by which privileges are granted to users, groups, or services. It is recommended to practice the principle of least privilege, which limits users' access rights to the minimum strictly required to do their jobs.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Identity -> Policies\n3. In the compartment dropdown, choose the root compartment. Open the reported policy to view the policy statement.\n4. Remove the policy statement that allows any group other than Administrators or any service access to manage all resources in the tenancy. |
```config from cloud.resource where cloud.type='azure' and api.name= 'azure-container-registry' as X; config from cloud.resource where api.name = 'azure-resource-group' as Y; filter ' $.X.resourceGroupName equals $.Y.name and $.Y.isDedicatedContainerRegistryGroup is false' ; show X;``` | Azure Container Registry does not use a dedicated resource group
Placing your Azure Container Registry (ACR) in a dedicated Azure resource group allows you to minimize the risk of accidentally deleting the collection of images in the registry when you delete the container instance resource group.
This policy identifies ACRs that reside in resource groups that contain non-ACR resources. For more information about ACR best practices, visit https://docs.microsoft.com/en-us/azure/container-registry/container-registry-best-practices
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To remediate this alert, move all non-ACR resources to another resource group. To move resources to another resource group, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and config.httpLoggingEnabled exists and config.httpLoggingEnabled is false``` | Azure App service HTTP logging is disabled
This policy identifies Azure App services that have HTTP logging disabled.
By enabling HTTP logging for your app service, you can collect log information and use it to monitor and troubleshoot your app, as well as identify any potential security issues or threats. This can help to ensure that your app is running smoothly and is secure from potential attacks.
As a best practice, it is recommended to enable HTTP logging on your app service.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to App Services dashboard\n3. Click on the reported App service\n4. Under the 'Monitoring' menu, click on 'App Service logs'\n5. Under 'Web server logging', select Storage to store logs on blob storage, or File System to store logs on the App Service file system.\n6. In Retention Period (Days), set the number of days the logs should be retained.\n7. Click on 'Save'. |
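A hedged azure-mgmt-web sketch of the same change; the `update_diagnostic_logs_config` operation and log-config models are assumed from recent SDK versions, and all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import (
    FileSystemHttpLogsConfig,
    HttpLogsConfig,
    SiteLogsConfig,
)

client = WebSiteManagementClient(
    DefaultAzureCredential(), "<subscription-id>"  # placeholder
)

# Enable file-system web server logging with a 7-day retention period.
client.web_apps.update_diagnostic_logs_config(
    resource_group_name="my-rg",  # placeholder
    name="my-app",                # placeholder
    site_logs_config=SiteLogsConfig(
        http_logs=HttpLogsConfig(
            file_system=FileSystemHttpLogsConfig(
                enabled=True, retention_in_days=7, retention_in_mb=35
            )
        )
    ),
)
```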
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-defender-for-cloud-security-contact' AND json.rule = properties.alertNotifications.state does not equal ignore case ON and properties.alertNotifications.minimalSeverity equal ignore case High``` | Azure 'Notify about alerts with the following severity' is Set to 'High'
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = 'encrypted is false'``` | Alibaba Cloud disk encryption is disabled
This policy identifies disks for which encryption is disabled. As a best practice enable disk encryption to improve data security without making changes to your business or applications. Snapshots created from encrypted disks and new disks created from these snapshots are automatically encrypted.
This is applicable to alibaba_cloud cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: Alibaba Cloud disk can only be encrypted at the time of disk creation. So to resolve this alert, create a new disk with encryption and then migrate all required disk data from the reported disk to this newly created disk.\n\nTo create an Alibaba Cloud disk with encryption:\n1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click on 'Disks' which is under 'Storage & Snapshots'\n4. Click on 'Create Disk'\n5. Check the 'Disk Encryption' box in the 'Disk' section\n6. Click on 'Preview Order' and make sure parameters are chosen correctly\n7. Click on 'Create'. After you create a disk, attach that disk to other resources per your requirements. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-virtual-machine-scale-set' AND json.rule = properties.virtualMachineProfile.diagnosticsProfile.bootDiagnostics.enabled is false``` | Azure Virtual Machine scale sets Boot Diagnostics Disabled
This policy identifies Azure Virtual Machine scale sets that have the Boot Diagnostics setting disabled. Boot Diagnostics, when enabled for a virtual machine, captures a screenshot and console output during virtual machine startup. This helps in troubleshooting the virtual machine when it enters a non-bootable state.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'All services' from the left pane\n3. Go to 'Compute' under 'Categories'\n4. Select 'Virtual Machine scale sets'\n5. Select the reported virtual machine scale sets\n6. Click on 'Boot Diagnostics' under 'Support + troubleshooting'\n7. Select 'On'\n8. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = (sku.tier equals GeneralPurpose or sku.tier equals MemoryOptimized) and properties.userVisibleState equals Ready and properties.infrastructureEncryption equals Disabled``` | Azure PostgreSQL database server Infrastructure double encryption is disabled
This policy identifies PostgreSQL database servers in which Infrastructure double encryption is disabled. Infrastructure double encryption adds a second layer of encryption using service-managed keys. It is recommended to enable infrastructure double encryption on PostgreSQL database servers so that encryption can be implemented at the layer closest to the storage device or network wires.
For more details:
https://docs.microsoft.com/en-us/azure/postgresql/concepts-infrastructure-double-encryption
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: Configuring Infrastructure double encryption for Azure Database for PostgreSQL is only allowed during server create. Once the server is provisioned, you cannot change the storage encryption.\n\nTo create an Azure Database for PostgreSQL server with Infrastructure double encryption, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/postgresql/howto-double-encryption\n\nNOTE: Using Infrastructure double encryption will have performance impact on the Azure Database for PostgreSQL server due to the additional encryption process. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = not (pricings[?any(properties.extensions[?any(name equal ignore case FileIntegrityMonitoring AND isEnabled is true)] exists AND properties.pricingTier equal ignore case Standard )] exists)``` | Azure Microsoft Defender for Cloud set to Off for File Integrity Monitoring
This policy identifies Azure Microsoft Defender for Cloud where the File Integrity Monitoring is set to Off.
File Integrity Monitoring tracks critical system files in Windows and Linux for unauthorized changes, helping to identify potential attacks. Disabling File Integrity Monitoring leaves your system vulnerable to unnoticed alterations, increasing the risk of data breaches or system failures. Enabling FIM enhances security by alerting you to suspicious changes, allowing for proactive threat detection and prevention of unauthorized modifications to system files.
As a security best practice, it is recommended to enable File Integrity Monitoring in Azure Microsoft Defender for Cloud.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Microsoft Defender for Cloud'\n3. Under 'Management', select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click on 'Settings & monitoring' at the top\n7. In the table, find 'File Integrity Monitoring' and select 'On' under Plan\n8. Click 'Continue' in the top left\n9. Click 'Save'. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains AuthorizeSecurityGroupIngress and $.X.filterPattern contains AuthorizeSecurityGroupEgress and $.X.filterPattern contains RevokeSecurityGroupIngress and $.X.filterPattern contains RevokeSecurityGroupEgress and $.X.filterPattern contains CreateSecurityGroup and $.X.filterPattern contains DeleteSecurityGroup) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for AWS Security group changes
This policy identifies the AWS regions that do not have a log metric filter and alarm for security group changes.
Security groups act as virtual firewalls that control inbound and outbound traffic to AWS resources. If changes to these groups go unmonitored, it could result in unauthorized access or expose sensitive data to the public internet.
It is recommended to create a metric filter and alarm for security group changes to promptly detect and respond to any unauthorized modifications, thereby maintaining the integrity and security of your AWS environment.
NOTE: This policy will trigger an alert if you have at least one CloudTrail trail that is multi-region enabled, logs all management events in your account, and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. In the 'Define Pattern' page, add the 'Filter pattern' value as\n\n{ ($.eventName = AuthorizeSecurityGroupIngress) ||\n($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName =\nRevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) ||\n($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\n\nand Click on 'NEXT'.\n\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review the details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html. |
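For illustration, a hedged boto3 sketch that creates the metric filter and alarm described in the remediation steps; the log group name and SNS topic ARN are placeholders:

```python
# A minimal sketch, assuming an existing CloudTrail log group and SNS topic
# (LOG_GROUP and TOPIC_ARN are placeholders), that creates the security group
# change metric filter and a matching alarm with boto3.
import boto3

LOG_GROUP = "CloudTrail/DefaultLogGroup"  # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # placeholder

FILTER_PATTERN = (
    "{ ($.eventName = AuthorizeSecurityGroupIngress) || "
    "($.eventName = AuthorizeSecurityGroupEgress) || "
    "($.eventName = RevokeSecurityGroupIngress) || "
    "($.eventName = RevokeSecurityGroupEgress) || "
    "($.eventName = CreateSecurityGroup) || "
    "($.eventName = DeleteSecurityGroup) }"
)

logs = boto3.client("logs")
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="SecurityGroupChanges",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[{
        "metricName": "SecurityGroupEventCount",
        "metricNamespace": "CISBenchmark",
        "metricValue": "1",
    }],
)

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="SecurityGroupChangesAlarm",
    MetricName="SecurityGroupEventCount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[TOPIC_ARN],
)
```

The alarm fires on one or more matching events within a five-minute window; tune `Period` and `Threshold` to your environment.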
```config from cloud.resource where api.name = 'azure-frontdoor-waf-policy' AND json.rule = properties.policySettings.enabledState equals Enabled and properties.managedRules.managedRuleSets is not empty and properties.managedRules.managedRuleSets[*].ruleGroupOverrides[*].rules[?any(action equals Block and ruleId equals 944240 and enabledState equals Disabled)] exists as X; config from cloud.resource where api.name = 'azure-frontdoor' AND json.rule = properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink exists and properties.provisioningState equals Succeeded as Y; filter '$.Y.properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink.id contains $.X.name'; show X;``` | Azure Front Door Web application firewall (WAF) policy rule for Remote Command Execution is disabled
This policy identifies Azure Front Door Web application firewall (WAF) policies that have the Remote Command Execution rule disabled. It is recommended to enable the managed rule 'Remote Command Execution (944240)' in the WAF policy to help detect and mitigate the Log4j vulnerability (CVE-2021-44228).
For details:
https://www.microsoft.com/security/blog/2021/12/11/guidance-for-preventing-detecting-and-hunting-for-cve-2021-44228-log4j-2-exploitation/
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Web Application Firewall policies (WAF)'\n3. Click on the reported Web Application Firewall policies (WAF) policy\n4. Click on the 'Managed rules' from the left panel\n5. Search '944240' rule from search bar and Select rule\n6. Click on the 'Enable' to enable rule. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.Owner))] exists``` | AWS SNS topic with cross-account access
This policy identifies AWS SNS topics that are configured with cross-account access. Allowing unknown cross-account access to your SNS topics can enable other accounts to gain control over them. To prevent unknown cross-account access, allow only trusted entities to access your Amazon SNS topics by implementing the appropriate SNS policies.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. In the Access Policy section, verify all ARN values in 'Principal' elements are from trusted entities; If not remove those ARN from the entry.\n9. Click on 'Save changes'. |
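For illustration, a minimal boto3 sketch that mirrors this check by flagging topic policy statements whose AWS principals fall outside the owning account:

```python
# A minimal sketch that flags SNS topic policy statements granting access to
# AWS principals outside the topic owner's account, mirroring the query above.
import json
import boto3

sns = boto3.client("sns")
account_id = boto3.client("sts").get_caller_identity()["Account"]

topics = sns.get_paginator("list_topics").paginate().build_full_result()["Topics"]
for topic in topics:
    attrs = sns.get_topic_attributes(TopicArn=topic["TopicArn"])["Attributes"]
    policy = json.loads(attrs.get("Policy", "{}"))
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal", {})
        aws_principals = principal.get("AWS", []) if isinstance(principal, dict) else []
        if isinstance(aws_principals, str):
            aws_principals = [aws_principals]
        foreign = [p for p in aws_principals
                   if p.startswith("arn:") and account_id not in p]
        if stmt.get("Effect") == "Allow" and foreign:
            print(f"{topic['TopicArn']} allows cross-account access: {foreign}")
```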
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equals Running and ((config.javaVersion exists and config.javaVersion does not equal 1.8 and config.javaVersion does not equal 11 and config.javaVersion does not equal 17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JAVA and (config.linuxFxVersion contains 8 or config.linuxFxVersion contains 11 or config.linuxFxVersion contains 17) and config.linuxFxVersion does not contain 8-jre8 and config.linuxFxVersion does not contain 11-java11 and config.linuxFxVersion does not contain 17-java17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JBOSSEAP and config.linuxFxVersion does not contain 7-java8 and config.linuxFxVersion does not contain 7-java11 and config.linuxFxVersion does not contain 7-java17) or (config.linuxFxVersion contains TOMCAT and config.linuxFxVersion does not end with 10.0-jre8 and config.linuxFxVersion does not end with 9.0-jre8 and config.linuxFxVersion does not end with 8.5-jre8 and config.linuxFxVersion does not end with 10.0-java11 and config.linuxFxVersion does not end with 9.0-java11 and config.linuxFxVersion does not end with 8.5-java11 and config.linuxFxVersion does not end with 10.0-java17 and config.linuxFxVersion does not end with 9.0-java17 and config.linuxFxVersion does not end with 8.5-java17))'``` | Azure App Service Web app doesn't use latest Java version
This policy identifies Azure web apps that don't use the latest Java version. Periodically, newer versions are released for Java software, either to fix security flaws or to include additional functionality. Using the latest Java version for web apps is recommended to take advantage of any security fixes.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure console\n2. Go to App Services\n3. Click on the reported App\n4. Under Settings section, Click on Configuration\n5. Select General settings\n6. In Stack settings section, ensure that Stack is set with the latest Java version.\n7. Click on Save. |
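For illustration, a minimal sketch assuming the azure-mgmt-web SDK that reads a web app's configured Java stack; the subscription, resource group, and app names are placeholders:

```python
# A minimal sketch, assuming the azure-mgmt-web SDK, that inspects a web app's
# configured Java stack (all names below are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
APP_NAME = "<web-app-name>"            # placeholder

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
config = client.web_apps.get_configuration(RESOURCE_GROUP, APP_NAME)

# Windows apps expose java_version; Linux apps encode the runtime stack
# (e.g. "TOMCAT|9.0-java17") in linux_fx_version.
print("java_version:", config.java_version)
print("linux_fx_version:", config.linux_fx_version)
```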
```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-storage-account-blob-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.id contains $.Y.properties.storageAccountId)'; show X;``` | Azure Storage logging is not Enabled for Blob Service for Read Write and Delete requests
This policy identifies Azure storage accounts that do not have storage logging enabled for the Blob service for read, write, and delete requests.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
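Since no remediation text is provided for this row, here is a hedged azure-storage-blob sketch that enables Storage Analytics logging for read, write, and delete requests on the Blob service; the account URL and retention period are assumptions:

```python
# A minimal sketch, assuming the azure-storage-blob SDK, that enables Storage
# Analytics logging for read, write, and delete requests on the Blob service.
# The account URL and 90-day retention are placeholders/assumptions.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import (BlobServiceClient, BlobAnalyticsLogging,
                                RetentionPolicy)

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)
service.set_service_properties(
    analytics_logging=BlobAnalyticsLogging(
        version="1.0",
        read=True,
        write=True,
        delete=True,
        retention_policy=RetentionPolicy(enabled=True, days=90),
    )
)
```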
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-disk' AND json.rule = status contains In_use and enableAutomatedSnapshotPolicy is false``` | Alibaba Cloud disk automatic snapshot policy is disabled
This policy identifies disks that have the automatic snapshot policy disabled. As a best practice, enable an automatic snapshot policy to prevent irreversible data loss caused by accidental or malicious operations.
This is applicable to alibaba_cloud cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To apply an automatic snapshot policy on the reported disk follow below URL:\nhttps://www.alibabacloud.com/help/doc-detail/25457.htm. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_create_hyperion_policy_multi_cloud_child_policies_ss_finding_1
Description-d6a7725e-0ded-439f-b5cb-740eaf1df571
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['SSH_BRUTE_FORCE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where resource.status = Active AND api.name = 'oci-compute-instance' AND json.rule = lifecycleState exists``` | Copy of OCI Hosts test - Ali
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and iamPolicy.bindings[?any(members[*] is member of ("allAuthenticatedUsers","allUsers"))] exists``` | GCP Cloud Function is publicly accessible
This policy identifies GCP Cloud Functions that are publicly accessible. Granting 'allUsers'/'allAuthenticatedUsers' access to cloud functions can lead to unauthorized invocations of the functions or unwanted access to sensitive information. It is recommended to follow the principle of least privilege and grant access restrictively.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allusers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to GCP console\n2. Navigate to service 'Cloud Functions'\n3. Click on the function on which the alert is generated\n4. Go to tab 'PERMISSIONS'\n5. Review the roles to see if 'allusers'/'allAuthenticatedUsers' is present\n6. Click on the delete icon to revoke access from 'allusers'/'allAuthenticatedUsers'\n7. On Pop-up select the check box to confirm \n8. Click on 'REMOVE'. |
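For illustration, a minimal sketch assuming the google-cloud-functions (v1) SDK that lists IAM bindings granting public access on a function; the resource name is a placeholder:

```python
# A minimal sketch, assuming the google-cloud-functions (v1) SDK, that lists
# IAM bindings granting public access on a function (resource is a placeholder).
from google.cloud import functions_v1

client = functions_v1.CloudFunctionsServiceClient()
resource = "projects/<project>/locations/<region>/functions/<function>"  # placeholder

policy = client.get_iam_policy(request={"resource": resource})
for binding in policy.bindings:
    public = [m for m in binding.members
              if m in ("allUsers", "allAuthenticatedUsers")]
    if public:
        print(f"role {binding.role} grants public access via {public}")
```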
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equal ignore case Arm and properties.pricingTier does not equal ignore case Standard)] exists``` | Azure Microsoft Defender for Cloud set to Off for Resource Manager
This policy identifies Azure Microsoft Defender for Cloud subscriptions in which the defender setting for Resource Manager (ARM) is set to Off. Enabling Azure Defender for ARM provides protection against issues such as suspicious resource management operations, use of exploitation toolkits, and lateral movement from the Azure management layer to the Azure resources data plane. It is highly recommended to enable Azure Defender for ARM.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Expand 'Select Defender plan' \n7. Select 'On' status for 'Resource Manager' \n8. Click on 'Save'. |
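For illustration, a hedged sketch that sets the Resource Manager ("Arm") plan to Standard through the Microsoft.Security pricings REST API; the subscription id is a placeholder and the API version is an assumption:

```python
# A minimal sketch, assuming azure-identity + requests, that sets the
# Microsoft Defender "Arm" (Resource Manager) plan to the Standard tier.
# SUBSCRIPTION_ID is a placeholder.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       "/providers/Microsoft.Security/pricings/Arm?api-version=2023-01-01")
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard"}},
)
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```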
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (destinationPortRange contains _Port.inRange(22,22) or destinationPortRanges[*] contains _Port.inRange(22,22) ))] exists``` | Azure Network Security Group allows all traffic on SSH port 22
This policy identifies Network security groups (NSG) that allow all traffic on SSH port 22. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
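For illustration, a minimal sketch assuming the azure-mgmt-network SDK that flags inbound rules open to the internet on port 22, mirroring the query above; the subscription id is a placeholder:

```python
# A minimal sketch, assuming the azure-mgmt-network SDK, that flags inbound
# NSG rules open to the internet on port 22 (SUBSCRIPTION_ID is a placeholder).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
OPEN_SOURCES = {"Internet", "*", "0.0.0.0/0", "::/0"}

def covers_port_22(port_range: str) -> bool:
    if port_range in ("*", "22"):
        return True
    if "-" in port_range:
        low, high = port_range.split("-")
        return int(low) <= 22 <= int(high)
    return False

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for nsg in client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        ranges = list(rule.destination_port_ranges or [])
        if rule.destination_port_range:
            ranges.append(rule.destination_port_range)
        if (rule.access == "Allow" and rule.direction == "Inbound"
                and rule.source_address_prefix in OPEN_SOURCES
                and any(covers_port_22(r) for r in ranges)):
            print(f"{nsg.name}: rule {rule.name} exposes SSH (22) to the internet")
```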
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = vulnerabilityAssessments[*].properties.storageContainerPath exists and vulnerabilityAssessments[*].properties.recurringScans.emails[*] is empty``` | Azure SQL Server ADS Vulnerability Assessment 'Send scan reports to' is not configured
This policy identifies Azure SQL Servers which have the ADS Vulnerability Assessment 'Send scan reports to' setting not configured. This setting sends ADS-VA scan reports to the email IDs configured in the 'Send scan reports to' field. It is recommended to update 'Send scan reports to' with email IDs, which helps reduce the time required to identify risks and take corrective measures.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n5. Specify one or more email ids to 'Send scan reports to' under 'VULNERABILITY ASSESSMENT SETTINGS'\n6. 'Save' your changes. |
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy ojnou
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-ecs-service' AND json.rule = launchType equals EC2 as X; config from cloud.resource where api.name = 'aws-ecs-cluster' AND json.rule = status equals ACTIVE and registeredContainerInstancesCount equals 0 as Y; filter '$.X.clusterArn equals $.Y.clusterArn'; show Y;``` | AWS ECS cluster not configured with a registered instance
This policy identifies ECS clusters that are not configured with a registered instance. An ECS container instance is an Amazon EC2 instance that is running the Amazon ECS container agent and has been registered into an Amazon ECS cluster. It is recommended to remove idle ECS clusters to reduce the container attack surface, or to register a new instance for the reported ECS cluster.
For details:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To delete the reported idle ECS Cluster follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/delete_cluster.html\n\nTo register a new instance for reported ECS Cluster follow below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html. |
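For illustration, a simplified boto3 sketch that lists ACTIVE clusters with zero registered container instances (unlike the query above, it does not join against EC2 launch-type services):

```python
# A minimal, simplified sketch: list ACTIVE ECS clusters that have no
# registered container instances.
import boto3

ecs = boto3.client("ecs")
arns = ecs.get_paginator("list_clusters").paginate().build_full_result()["clusterArns"]

for i in range(0, len(arns), 100):  # describe_clusters accepts up to 100 ARNs
    for cluster in ecs.describe_clusters(clusters=arns[i:i + 100])["clusters"]:
        if (cluster["status"] == "ACTIVE"
                and cluster["registeredContainerInstancesCount"] == 0):
            print("idle cluster:", cluster["clusterArn"])
```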
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy poumk
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-ec2-ebs-encryption' AND json.rule = ebsEncryptionByDefault is false as X; config from cloud.resource where api.name = 'aws-region' AND json.rule = optInStatus does not equal not-opted-in as Y; filter '$.X.region equals $.Y.regionName'; show X;``` | AWS EBS volume region with encryption is disabled
This policy identifies AWS regions in which new EBS volumes are created without encryption by default. Encrypting data at rest reduces unintentional exposure of data stored in EBS volumes. It is recommended to configure encryption at the regional level so that every new EBS volume created in that region is encrypted with the provided encryption key.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To enable encryption at region level by default, follow below URL:\n https://docs.aws.amazon.com/ebs/latest/userguide/work-with-ebs-encr.html#encryption-by-default\n\n Additional Information: \n\n To detect existing EBS volumes that are not encrypted ; refer Saved Search:\n AWS EBS volumes are not encrypted_RL\n\n To detect existing EBS volumes that are not encrypted with CMK, refer Saved Search:\n AWS EBS volume not encrypted using Customer Managed Key_RL. |
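For illustration, a minimal boto3 sketch that reports, and optionally enables, EBS encryption by default across the account's enabled regions:

```python
# A minimal sketch that reports, and optionally enables, EBS encryption by
# default in every region enabled for the account.
import boto3

for region in boto3.client("ec2").describe_regions()["Regions"]:
    ec2 = boto3.client("ec2", region_name=region["RegionName"])
    enabled = ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]
    print(f"{region['RegionName']}: encryption by default = {enabled}")
    if not enabled:
        # Uses the default aws/ebs KMS key; call modify_ebs_default_kms_key_id
        # separately if a customer-managed key is required.
        ec2.enable_ebs_encryption_by_default()
```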
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-nic-list' AND json.rule = ['properties.virtualMachine'].id is not empty and ['properties.enableIPForwarding'] exists and ['properties.enableIPForwarding'] is true``` | Azure Virtual machine NIC has IP forwarding enabled
This policy identifies Azure Virtual machine NICs that have IP forwarding enabled. IP forwarding on a virtual machine's NIC allows the machine to receive and forward traffic addressed to other destinations. As a best practice, before you enable IP forwarding on a Virtual Machine NIC, review the configuration with your network security team to ensure that it does not allow an attacker to exploit the setup to route packets through the host and compromise your network.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on 'All services' in the left navigation\n3. Click on 'Network interfaces' under 'Networking'\n4. Click on the reported resource\n5. Click on 'IP configurations' under Settings\n6. Select 'Disabled' for 'IP forwarding'\n7. Click on 'Save'. |
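For illustration, a minimal sketch assuming the azure-mgmt-network SDK that lists VM-attached NICs with IP forwarding enabled; the subscription id is a placeholder:

```python
# A minimal sketch, assuming the azure-mgmt-network SDK, that lists NICs
# attached to a VM with IP forwarding enabled (SUBSCRIPTION_ID is a placeholder).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for nic in client.network_interfaces.list_all():
    if nic.virtual_machine and nic.enable_ip_forwarding:
        print(f"{nic.name} (VM {nic.virtual_machine.id}) has IP forwarding enabled")
```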
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-definition' AND json.rule = properties.type equals "CustomRole" and properties.assignableScopes[*] contains "/" and properties.permissions[*].actions[*] starts with "*"``` | Azure subscriptions with custom roles are overly permissive
This policy identifies Azure subscriptions that contain overly permissive custom roles. The least privilege access principle should be followed; only the necessary privileges should be assigned instead of allowing full administrative access.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: Check the usage of the identified role and verify the impact of updating or deleting it. Then follow the below URL to update or delete the custom role:\nhttps://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles-portal#update-a-custom-role. |
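For illustration, a hedged sketch assuming the azure-mgmt-authorization SDK that flags custom roles whose actions start with '*'; note it lists at subscription scope, whereas the query above matches roles assignable at the root scope "/":

```python
# A minimal sketch, assuming the azure-mgmt-authorization SDK, that flags
# custom roles with wildcard actions (SUBSCRIPTION_ID is a placeholder).
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{SUBSCRIPTION_ID}"

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for role in client.role_definitions.list(scope, filter="type eq 'CustomRole'"):
    wildcard = any(action.startswith("*")
                   for perm in role.permissions for action in perm.actions)
    if wildcard:
        print(f"overly permissive custom role: {role.role_name} "
              f"(assignable scopes: {role.assignable_scopes})")
```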
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-action-trail' AND json.rule = 'status equals Disable and isLogging is false'``` | Alibaba Cloud ActionTrail logging is disabled
This policy identifies ActionTrails that have logging disabled. As a best security practice, it is recommended to enable logging, as ActionTrail logs can be used in scenarios such as security analysis, resource change tracking, and compliance auditing.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to ActionTrail\n3. In the left navigation pane, click on 'Trail List'\n4. Click on reported trail\n5. In the upper right corner of the configuration page, move the slider to the right to start logging for the trail.\n6. Click on 'Save changes'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = externalIdentifier contains null and (email does not exist or emailVerified is false)``` | OCI IAM local (non-federated) user account does not have a valid and current email address
This policy identifies OCI IAM local (non-federated) users that do not have a valid and current email address configured. It is recommended that OCI IAM local (non-federated) users are configured with a valid and current email address to tie the account to an identity in your organization. It also allows the user to reset their password if it is forgotten or lost.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login into OCI Console\n2. Select Identity from Services menu\n3. Select Users from Identity menu\n4. Click on the local (non-federated) user reported in the alert\n5. Click on Edit User\n6. Enter a valid and current email address in the EMAIL text box\n7. Click Save Changes. |
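For illustration, a minimal sketch assuming the oci Python SDK and a configured ~/.oci/config that lists local (non-federated) users lacking a verified email address:

```python
# A minimal sketch, assuming the oci Python SDK and a configured ~/.oci/config,
# that lists local (non-federated) users without a verified email address.
import oci

config = oci.config.from_file()  # assumes a standard OCI CLI config
identity = oci.identity.IdentityClient(config)

users = oci.pagination.list_call_get_all_results(
    identity.list_users, compartment_id=config["tenancy"]).data

for user in users:
    is_local = user.external_identifier is None  # federated users carry an external id
    if is_local and (not user.email or not user.email_verified):
        print("local user without a verified email:", user.name)
```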
```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-storage-buckets-list' AND json.rule = logging.logBucket equals $.name``` | GCP storage bucket is logging to itself
This policy identifies GCP storage buckets that are sending access logs to themselves. When a storage bucket uses itself as the destination for its access logs, a loop of logs is created, which is not a security best practice. It is recommended to use a separate, dedicated log bucket for storage bucket logging.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To resolve the alert, a new bucket should be created or an existing bucket other than the alerting bucket itself should be set for logging by following steps in the below-mentioned link.\n\nhttps://cloud.google.com/storage/docs/access-logs#delivery. |
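For illustration, a minimal sketch assuming the google-cloud-storage SDK that finds buckets whose access logs are delivered to the bucket itself:

```python
# A minimal sketch, assuming the google-cloud-storage SDK, that finds buckets
# whose access logs are delivered to the bucket itself.
from google.cloud import storage

client = storage.Client()
for bucket in client.list_buckets():
    logging_config = bucket.get_logging()  # None when logging is disabled
    if logging_config and logging_config.get("logBucket") == bucket.name:
        print(f"{bucket.name} logs to itself; point it at a separate log bucket")
```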
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-nsg-list' AND json.rule = flowLogsSettings does not exist or flowLogsSettings.enabled is false``` | Azure Network Watcher Network Security Group (NSG) flow logs are disabled
This policy identifies Azure Network Security Groups (NSG) for which flow logs are disabled. To perform this check, enable this action on the Azure Service Principal: 'Microsoft.Network/networkWatchers/queryFlowLogStatus/action'.
NSG flow logs, a feature of the Network Watcher app, enable you to view information about ingress and egress IP traffic through an NSG. The flow logs include information such as:
- Outbound and inbound flows on a per-rule basis.
- Network interface to which the flow applies.
- 5-tuple information about the flow (source/destination IP, source/destination port, protocol).
- Whether the traffic was allowed or denied.
As a best practice, enable NSG flow logs to improve network visibility.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure Network Watcher Network Security Group (NSG) flow log, follow below URL:\nhttps://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-portal#enable-nsg-flow-log. |
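For illustration, a hedged sketch assuming the azure-mgmt-network SDK that enables a flow log on an NSG through the regional Network Watcher; all resource names and IDs below are placeholders:

```python
# A minimal sketch, assuming the azure-mgmt-network SDK, that enables a flow
# log on an NSG via the regional Network Watcher (all names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import FlowLog

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.flow_logs.begin_create_or_update(
    resource_group_name="NetworkWatcherRG",         # placeholder
    network_watcher_name="NetworkWatcher_eastus",   # placeholder
    flow_log_name="nsg-flow-log",                   # placeholder
    parameters=FlowLog(
        location="eastus",                                              # placeholder
        target_resource_id="<resource-id-of-the-NSG>",                  # placeholder
        storage_id="<resource-id-of-the-destination-storage-account>",  # placeholder
        enabled=True,
    ),
)
print(poller.result().provisioning_state)
```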