```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-instances-container-group' AND json.rule = properties.provisioningState equals Succeeded and (identity.type does not exist or (identity.type exists and identity.type equal ignore case None))```
Azure Container Instance not configured with the managed identity This policy identifies Azure Container Instances (ACI) that are not configured with a managed identity. Because a managed identity is authenticated with Azure AD, developers don't have to store any credentials in code. It is therefore recommended to configure a managed identity on all your container instances. For more details: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable managed identity on your container instance, follow the URL below:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-managed-identity.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals "ACTIVE" and shieldedInstanceConfig.enableIntegrityMonitoring is false```
GCP Vertex AI Workbench user-managed notebook has Integrity monitoring disabled This policy identifies GCP Vertex AI Workbench user-managed notebooks that have Integrity monitoring disabled. Integrity Monitoring continuously monitors the boot integrity, kernel integrity, and persistent data integrity of the underlying VM of the shielded user-managed notebooks. It detects unauthorized modifications or tampering, enhancing security by verifying the trusted state of VM components throughout their lifecycle. It provides active alerting, allowing administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster. It is recommended to enable integrity monitoring for user-managed notebooks to detect and mitigate advanced threats like rootkits and bootkit malware. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service (Left Panel)\n3. Under 'Notebooks', go to 'Workbench'\n4. Open the 'USER-MANAGED NOTEBOOKS' tab\n5. Click on the alerting notebook\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Enable 'Turn on Integrity Monitoring'\n11. Click on 'Save'\n12. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipProtocol equals tcp or ipProtocol equals icmp or ipProtocol equals icmpv6 or ipProtocol equals udp) and (ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0))] exists)```
Copy of navnon-onboarding-policy navnon-onboarding-policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-policy-assignments' AND json.rule = '((name == SecurityCenterBuiltIn and properties.parameters.jitNetworkAccessMonitoringEffect.value equals Disabled) or (name == SecurityCenterBuiltIn and properties.parameters[*] is empty and properties.displayName does not start with "ASC Default"))'```
Azure Microsoft Defender for Cloud JIT network access monitoring is set to disabled This policy identifies the Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) policies which have JIT network access monitoring set to disabled. Enabling JIT Network Access will enhance the protection of VMs by creating a Just in Time VM. A JIT-enabled VM uses an NSG rule to restrict access to the ports used to connect to the VM to a pre-set time window, and only after checking the Role Based Access Control permissions of the user. This feature helps control brute force attacks on the VMs. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Choose the reported subscription\n5. Click on the 'Security policy' under 'Policy settings' section\n6. Click on 'SecurityCenterBuiltIn'\n7. Select 'Parameters' tab\n8. Set the 'Management ports of virtual machines should be protected with just-in-time network access control' to 'AuditIfNotExists'\n9. If no other changes are required, click on 'Review + save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```
Copy of Copy of build information This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles contains roles/editor or roles contains roles/owner and (user does not start with g-bootstrap-svcacct-terraform and user does not equal "[email protected]" and user does not equal "[email protected]" and user does not contain "iam.gserviceaccount.com") and (user does not contain "appspot" and user does not contain "cloud" and user does not contain "developer")```
GM-Mukhtar-AyawDaw This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-datastores' AND json.rule = (properties.datastoreType equal ignore case AzureFile or properties.datastoreType equal ignore case AzureBlob) and properties.credentials.credentialsType equal ignore case AccountKey```
Azure Machine Learning workspace Storage account Datastore using Account key based authentication This policy identifies Azure Machine Learning workspace datastores that use storage account keys for authentication. Account key-based authentication is a security risk because it grants full, unrestricted access to the storage account, including the ability to read, write, and delete all data. If compromised, attackers can control all data in the account. This method lacks permission granularity and time limits, increasing the risk of exposing sensitive information. Using SAS tokens provides more granular control, allowing you to limit access to specific resources and set time-bound access, which enhances security and reduces risks in production environments. As a security best practice, it is recommended to use SAS tokens for authenticating Azure Machine Learning datastores. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported Datastore is associated with\n4. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n5. A new tab will open for Azure ML Studio\n6. In the left panel, under 'Assets' section, click on the 'Data'\n7. Select the 'Datastores' tab at the top\n8. Click on the reported Datastore\n9. Click on the 'Update authentication' tab at the top\n10. A side panel will appear on the right, configure the 'Authentication type' as 'SAS token' and enter the token value\n11. Click 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = 'isLegacy is true and properties.isCapturingLogsForAllRegions is false'```
Azure log profile not capturing activity logs for all regions This policy identifies Azure log profiles that are not capturing activity logs for all regions. Exporting activity logs from all the Azure-supported regions/locations means that logs for potentially unexpected activities occurring in otherwise unused regions are stored and made available for incident response and investigations. Note: Since this type of logging is not deprecated from the Cloud service provider yet, we support it until it is removed. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Execute the command to check the number of regions present on the account: az account list-locations --query '[*].name' | grep -P '\w+' | wc -l\n2. Execute the command to check the number of regions added to the log profile: az monitor log-profiles list --query '[*].locations' | grep -P '\w+' | wc -l\n3. If there is a difference between the region counts from step 1 and step 2, execute the command to list all regions: az account list-locations --query '[*].name'\n4. Use the listed regions from step 3 and update the legacy log profile's activity logs for all regions by following the below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=cli#managing-legacy-log-profiles.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-data-factory-v2' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.publicNetworkAccess equal ignore case Enabled```
Azure Data Factory (V2) configured with overly permissive network access This policy identifies Data factories (V2) configured with overly permissive network access. A Data factory managed virtual network along with managed private endpoints protects against data exfiltration. It is recommended to configure the Data factory with a private endpoint; so that the Data factory is accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Data factories'\n3. Click on the reported Data factory\n4. Select 'Networking' under 'Settings' from left panel \n5. In 'Private endpoint connections' tab, Create a private endpoint as per your requirement.\n6. Once Private endpoint is created; In 'Network access' tab, Select the 'Private endpoint'\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service-environment' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.clusterSettings exists and properties.clusterSettings[?any(name equal ignore case FrontEndSSLCipherSuiteOrder)] does not exist```
Azure App Service Environment configured with weak TLS cipher suites This policy identifies Azure App Service Environments that are configured with weak TLS Cipher suites. Azure App Service Environments host web applications and APIs in a dedicated and isolated environment. When these environments are configured with weak TLS Cipher suites, they can expose sensitive data to potential security risks. Weak cipher suites may allow attackers to intercept and decrypt communication between clients and the App Service Environment, leading to unauthorized access, data breaches, and potential compliance violations. The recommended cipher suites are TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 and TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. As best practice, it is recommended to avoid using weak TLS Cipher suites to enhance security and protect sensitive data. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the documentation:\nhttps://learn.microsoft.com/en-us/azure/app-service/environment/app-service-app-service-environment-custom-settings.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equal ignore case Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and access equal ignore case Allow and direction equal ignore case Inbound and ((protocol equal ignore case Tcp and (destinationPortRange contains * or destinationPortRange contains _Port.inRange(80,80) or destinationPortRange contains _Port.inRange(443,443) or destinationPortRanges any equal * or destinationPortRanges[*] contains _Port.inRange(80,80) or destinationPortRanges contains _Port.inRange(443,443) )) or (protocol contains * and (destinationPortRange contains _Port.inRange(80,80) or destinationPortRange contains _Port.inRange(443,443) or destinationPortRanges[*] contains _Port.inRange(80,80) or destinationPortRanges contains _Port.inRange(443,443) ))) )] exists```
Azure Network Security Group having Inbound rule overly permissive to HTTP(S) traffic This policy identifies Network Security Groups (NSGs) that have inbound rules allowing overly permissive access to HTTP or HTTPS traffic. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. Overly permissive inbound rules for HTTP(S) traffic increase the risk of unauthorized access and potential attacks on your network resources. This can lead to data breaches, exposure of sensitive information, and other security incidents. As a best practice, it is recommended to configure NSGs to restrict HTTP(S) traffic to only necessary and trusted IP addresses. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
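For teams that prefer to audit this programmatically, the following Python sketch uses the Azure SDK to list inbound Allow rules whose source is open to the internet. It is illustrative only and simplified (it does not restrict the check to ports 80/443 the way the query does); it assumes the `azure-identity` and `azure-mgmt-network` packages are installed and that a `SUBSCRIPTION_ID` environment variable is set.

```python
# Sketch: flag NSG inbound Allow rules whose source is open to the internet.
# Assumes azure-identity and azure-mgmt-network are installed and SUBSCRIPTION_ID is set.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

OPEN_SOURCES = {"internet", "*", "0.0.0.0/0", "::/0"}

client = NetworkManagementClient(DefaultAzureCredential(), os.environ["SUBSCRIPTION_ID"])

for nsg in client.network_security_groups.list_all():
    for rule in nsg.security_rules or []:
        if (
            rule.direction == "Inbound"
            and rule.access == "Allow"
            and (rule.source_address_prefix or "").lower() in OPEN_SOURCES
        ):
            print(f"{nsg.name}: rule '{rule.name}' allows inbound traffic from {rule.source_address_prefix}")
```

A production check would also inspect `destination_port_range`/`destination_port_ranges` for 80 and 443 to match the query exactly.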
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-sync-service' AND json.rule = properties.provisioningState equals Succeeded and properties.incomingTrafficPolicy equals AllowAllTraffic```
Azure Storage Sync Service configured with overly permissive network access This policy identifies Storage Sync Services configured with overly permissive network access. A Storage Sync Service is a management construct that represents registered servers and sync groups. Allowing all traffic to the Sync Service may allow a bad actor to brute force their way into the system and potentially get access to the entire network. With a private endpoint, the network traffic path is secured on both ends and access is restricted to only defined authorized entities. It is recommended to configure the Storage Sync Service with private endpoints to minimize the access vector. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage Sync Services dashboard \n3. Click on the reported Storage Sync Service\n4. Under the 'Settings' menu, click on 'Network'\n5. Under 'Allow access from' select 'Private endpoints only'\n6. Click on 'Private endpoint' and Create a private endpoint with required parameters \n7. Click on 'Save'.
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-storage-account-blob-diagnostic-settings' AND json.rule = (properties.logs[?(@.categoryGroup)] exists and properties.logs[*].enabled any true) or (properties.logs[?(@.category)] exists and properties.logs[*].enabled all true) as Y; filter 'not($.X.name equal ignore case $.Y.StorageAccountName)'; show X;```
Azure Storage account diagnostic setting for blob is disabled This policy identifies Azure Storage account blobs that have diagnostic logging disabled. By enabling diagnostic settings, you can capture various types of activities and events occurring within these storage account blobs. These logs provide valuable insights into the operations, performance, and security of the storage account blobs. As a best practice, it is recommended to enable diagnostic logs on all storage account blobs. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the Storage Accounts dashboard\n3. Click on the reported Storage account\n4. Under the 'Monitoring' menu, click on 'Diagnostic settings'\n5. Select the blob resource\n6. Under 'Diagnostic settings', click on 'Add diagnostic setting'\n7. At the top, enter the 'Diagnostic setting name'\n8. Under 'Logs', select all the checkboxes under 'Categories'\n9. Under 'Destination details', select the destination for logging\n10. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Action anyStartWith * and Resource equals * and Effect equals Allow)] exists and (policyArn exists and policyArn does not contain iam::aws:policy/AdministratorAccess)```
AWS IAM policy allows full administrative privileges This policy identifies IAM policies with full administrative privileges. IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended, as standard security advice, to grant least privilege, i.e., grant only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the IAM dashboard\n3. In the navigation pane, click on Policies and then search for the policy name reported\n4. Select the policy, click on the 'Policy actions', select 'Detach'\n5. Select all Users, Groups, Roles that have this policy attached, Click on 'Detach policy'.
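For ad-hoc auditing, a boto3 sketch along the following lines can surface attached customer-managed policies that grant `Action: "*"` on `Resource: "*"`. It is an illustrative approximation of the check (the query also matches actions that merely start with `*`), not the policy engine itself, and assumes AWS credentials with IAM read permissions are configured.

```python
# Sketch: find attached customer-managed IAM policies granting Action "*" on Resource "*".
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_policies").paginate(Scope="Local", OnlyAttached=True):
    for policy in page["Policies"]:
        document = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )["PolicyVersion"]["Document"]
        statements = document["Statement"]
        if isinstance(statements, dict):  # a single statement may not be wrapped in a list
            statements = [statements]
        for stmt in statements:
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            resources = stmt.get("Resource", [])
            resources = [resources] if isinstance(resources, str) else resources
            if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
                print(f"Full administrative privileges: {policy['Arn']}")
```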
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = state.name contains "stopped" ```
bikram_test This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = cors exists and cors.allowOrigins[*] contains "*" and cors.allowMethods[*] contains "*"```
AWS Lambda function URL having overly permissive cross-origin resource sharing permissions This policy identifies AWS Lambda functions that have overly permissive cross-origin resource sharing (CORS) permissions. Overly permissive CORS settings (allowing wildcards) can potentially expose the Lambda function to unwarranted requests and cross-site scripting attacks. It is highly recommended to specify the exact domains (in 'allowOrigins') and HTTP methods (in 'allowMethods') that should be allowed to interact with your function to ensure a secure setup. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To properly configure CORS permissions, refer to the following URL:\nhttps://docs.aws.amazon.com/lambda/latest/dg/API_Cors.html.
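If you want to spot-check this with boto3, something like the sketch below (illustrative; assumes credentials and region are configured) lists functions whose function URL CORS configuration uses a wildcard for both origins and methods.

```python
# Sketch: flag Lambda function URLs whose CORS allows "*" for both origins and methods.
import boto3

lam = boto3.client("lambda")

for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        url_configs = lam.list_function_url_configs(FunctionName=fn["FunctionName"])
        for cfg in url_configs.get("FunctionUrlConfigs", []):
            cors = cfg.get("Cors", {})
            if "*" in cors.get("AllowOrigins", []) and "*" in cors.get("AllowMethods", []):
                print(f"Overly permissive CORS: {fn['FunctionName']} -> {cfg['FunctionUrl']}")
```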
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.publicNetworkAccess'] equal ignore case Enabled and firewallRules[?any(startIpAddress equals "0.0.0.0" and endIpAddress equals "0.0.0.0")] exists```
Azure SQL Server allow access to any Azure internal resources This policy identifies SQL Servers that are configured to allow access to any Azure internal resources. A firewall rule with both the start IP and end IP set to '0.0.0.0' represents access to the entire Azure internal network. When this setting is enabled, the SQL server will accept connections from all Azure resources, including resources in other subscriptions. It is recommended to use firewall rules or VNET rules to allow access from specific network ranges or virtual networks. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the 'SQL servers' dashboard\n3. Click on the reported SQL server\n4. Click on 'Networking' under Security\n5. Unselect 'Allow Azure services and resources to access this server' under Exceptions if selected.\n6. Remove any firewall rule which allows access to 0.0.0.0 in startIpAddress and endIpAddress if any.\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isUppercaseCharactersRequired isFalse'```
OCI IAM password policy for local (non-federated) users does not have an uppercase character This policy identifies Oracle Cloud Infrastructure (OCI) accounts that do not have an uppercase character in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 UPPERCASE CHARACTER.\n\nNote: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equal ignore case "RUNNING" and (machineType contains "machineTypes/n2d-" or machineType contains "machineTypes/c2d-") and (confidentialInstanceConfig.enableConfidentialCompute does not exist or confidentialInstanceConfig.enableConfidentialCompute is false)```
GCP Compute instances with confidential computing disabled This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-nsg-list' AND json.rule = ' $.flowLogsSettings does not exist or $.flowLogsSettings.enabled is false or ($.flowLogsSettings.retentionPolicy.days does not equal 0 and $.flowLogsSettings.retentionPolicy.days less than 90) '```
Azure Network Watcher Network Security Group (NSG) flow logs retention is less than 90 days This policy identifies Azure Network Security Groups (NSG) for which the flow log retention period is less than 90 days. To perform this check, enable this action on the Azure Service Principal: 'Microsoft.Network/networkWatchers/queryFlowLogStatus/action'. NSG flow logs, a feature of the Network Watcher app, enable you to view information about ingress and egress IP traffic through an NSG. The flow logs include information such as: - Outbound and inbound flows on a per-rule basis. - Network interface to which the flow applies. - 5-tuple information about the flow (source/destination IP, source/destination port, protocol). - Whether the traffic was allowed or denied. As a best practice, enable NSG flow logs and set the log retention period to at least 90 days. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Flow Logs:\n\n1. Log in to the Azure portal.\n2. Select 'Network Watcher'.\n3. Select 'NSG flow logs'.\n4. Select the NSG for which you need to modify the flow log settings.\n5. Set the Flow logs 'Status' to 'On'.\n6. Select the destination 'Storage account'.\n7. Set the 'Retention (days)' to 90 days or greater.\n8. 'Save' your changes.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-access-analyzer' AND json.rule = status equals ACTIVE as X; config from cloud.resource where api.name = 'aws-region' AND json.rule = optInStatus does not equal not-opted-in as Y; filter '$.X.arn contains $.Y.regionName'; show X; count(X) less than 1```
AWS IAM Access analyzer is not configured This policy identifies AWS regions in which the IAM Access analyzer is not configured. AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity and identify unintended access to your resources and data. It is therefore recommended to configure the Access analyzer in all regions in your account. NOTE: Access Analyzer analyzes only policies that are applied to resources in the same AWS Region that it's enabled in. To monitor all resources in your AWS environment, you must create an analyzer to enable Access Analyzer in each Region where you're using supported AWS resources. For more details: https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the IAM dashboard \n4. Go to 'Access analyzer', from the left panel\n5. Click on the 'Create analyzer' button\n6. On the Create analyzer page, enter the parameters as per your requirements.\n7. Click on the 'Create analyzer'.
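To see which opted-in regions currently lack an active analyzer, a hedged boto3 sketch could look like the following (assumes credentials with `access-analyzer:ListAnalyzers` and `ec2:DescribeRegions` permissions):

```python
# Sketch: report opted-in regions that have no ACTIVE IAM Access Analyzer.
import boto3

regions = [
    r["RegionName"]
    for r in boto3.client("ec2").describe_regions()["Regions"]
    if r["OptInStatus"] != "not-opted-in"
]

for region in regions:
    analyzers = boto3.client("accessanalyzer", region_name=region).list_analyzers()["analyzers"]
    if not any(a["status"] == "ACTIVE" for a in analyzers):
        print(f"No active Access Analyzer in {region}")
```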
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = kind starts with app and config.minTlsVersion is member of ('1.0', '1.1')```
Azure App Service Web app doesn't use latest TLS version This policy identifies Azure web apps that are not configured with the latest version of TLS encryption. Azure Web Apps provide a platform to host and manage web applications securely. Using the latest TLS version is crucial for maintaining secure connections. Older versions of TLS, such as 1.0 and 1.1, have known vulnerabilities that can be exploited by attackers. Upgrading to newer versions like TLS 1.2 or 1.3 ensures that the web app is better protected against modern security threats. It is highly recommended to use the latest TLS version (greater than 1.1) for secure web app connections. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under "Settings" section, Click on "Configuration"\n5. In "Platform Settings", Set "Minimum Inbound TLS Version" to "1.2" or "1.3"\n6. Click on "Save" icon at the top\n7. Click "Continue" to save the changes.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.createpolicy and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deletepolicy and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updatepolicy) and actions.actions[*].topicId exists' as X; count(X) less than 1```
OCI Event Rule and Notification does not exist for IAM policy changes This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for IAM policy changes. Monitoring and alerting on changes to IAM policies will help in identifying changes to the security posture. It is recommended that an Event Rule and Notification be configured to catch changes made to Identity and Access Management (IAM) policies. NOTE: 1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event rule at the root compartment level. 2. This policy will not trigger an alert if you have at least one Event Rule and Notification, even if OCI has single or multiple compartments. This is applicable to oci cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Policy – Change Compartment, Policy – Create, Policy - Delete and Policy – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-kubernetes-cluster' AND json.rule = type equal ignore case openshift and state equal ignore case normal and serviceEndpoints.publicServiceEndpointEnabled is true```
IBM Cloud OpenShift cluster is accessible by using public endpoint This policy identifies IBM Cloud OpenShift clusters which have the public service endpoint enabled. If a cluster has the public service endpoint enabled, the cluster will be accessible from an Internet-routable IP address. It is highly recommended to use a private service endpoint instead of a public service endpoint. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: An OpenShift cluster can be made private only at the time of creation. To create a private \nOpenShift cluster follow the URL below:\nhttps://cloud.ibm.com/docs/openshift?topic=openshift-cluster-create-vpc-gen2&interface=ui#clusters_vpcg2_ui Please make sure to select 'Private endpoint only' in the 'Master service endpoint' section.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = '(resources.applicationLoadBalancer[*] exists or resources.apiGateway[*] exists or resources.other[*] exists) and loggingConfiguration.resourceArn does not exist'```
AWS Web Application Firewall v2 (AWS WAFv2) logging is disabled This policy identifies Web Application Firewall v2s (AWS WAFv2) for which logging is disabled. Enabling WAFv2 logging logs all web requests inspected by the service, which can be used for debugging and additional forensics. The logs will help to understand why certain rules are triggered and why certain web requests are blocked. You can also integrate the logs with any SIEM and log analysis tools for further analysis. It is recommended to enable logging on your Web Application Firewall v2s (WAFv2). For details: https://docs.aws.amazon.com/waf/latest/developerguide/logging.html#logging-management NOTE: Global (CloudFront) WAFv2 resources are out of scope for this policy. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging on your reported WAFv2s, follow the URL below:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/logging.html#logging-management\n\nNOTE: No additional cost to enable logging on AWS WAFv2 (minus Kinesis Firehose and any storage cost).\nFor Kinesis Firehose or any additional storage charges, refer to https://aws.amazon.com/cloudwatch/pricing/.
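A boto3 sketch for checking regional web ACLs for a missing logging configuration might look like this (illustrative only; CloudFront-scoped ACLs are left out, matching the policy's note, and pagination is omitted for brevity):

```python
# Sketch: list regional WAFv2 web ACLs that have no logging configuration.
import boto3
from botocore.exceptions import ClientError

wafv2 = boto3.client("wafv2")

for acl in wafv2.list_web_acls(Scope="REGIONAL")["WebACLs"]:
    try:
        wafv2.get_logging_configuration(ResourceArn=acl["ARN"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "WAFNonexistentItemException":
            print(f"Logging disabled for web ACL: {acl['Name']}")
        else:
            raise
```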
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(138,138) or destinationPortRanges[*] contains _Port.inRange(138,138) ))] exists```
Azure Network Security Group allows all traffic on NetBIOS (UDP Port 138) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on NetBIOS UDP port 138. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict NetBIOS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and ingressSettings equals ALLOW_ALL```
GCP Cloud Function configured with overly permissive Ingress setting This policy identifies GCP Cloud Functions that are configured with overly permissive Ingress setting. With overly permissive Ingress setting, all inbound requests to the function are allowed, from both the public and resources within the same project. It is recommended to restrict the traffic from the public and other resources, to get better network-based access control and allow traffic from VPC networks in the same project or traffic through the Cloud Load Balancer. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service (Left Panel)\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings' drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. In 'Ingress settings', select either 'Allow internal traffic only' or 'Allow internal traffic and traffic from Cloud Load Balancing'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'.
```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' as Y; filter '$.X.description.securityGroups[*] contains $.Y.groupId and $.Y.ipPermissionsEgress[*] is empty'; show X;```
AWS Elastic Load Balancer (ELB) has security group with no outbound rules This policy identifies Elastic Load Balancers (ELB) that have a security group with no outbound rules. A security group with no outbound rule will deny all outgoing requests. ELB security groups should have at least one outbound rule; an ELB with no outbound permissions will deny all traffic going to the EC2 instances or resources configured behind it. In other words, the ELB is useless without outbound permissions. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on the 'Description' tab, click on the security group, it will open Security Group properties in a new tab in your browser\n6. Click on the 'Outbound Rules'\n7. If there are no rules, click on 'Edit rules', add an outbound rule according to your ELB functional requirement\n8. Click on 'Save'.
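The same check can be scripted with boto3; the sketch below (illustrative only, single-region) cross-references each Classic ELB's security groups against groups whose egress rule list is empty.

```python
# Sketch: find Classic ELBs attached to security groups that have no outbound rules.
import boto3

elb = boto3.client("elb")
ec2 = boto3.client("ec2")

# Security groups with an empty egress rule set.
no_egress = {
    sg["GroupId"]
    for sg in ec2.describe_security_groups()["SecurityGroups"]
    if not sg.get("IpPermissionsEgress")
}

for page in elb.get_paginator("describe_load_balancers").paginate():
    for lb in page["LoadBalancerDescriptions"]:
        flagged = set(lb.get("SecurityGroups", [])) & no_egress
        if flagged:
            print(f"{lb['LoadBalancerName']} uses security group(s) with no outbound rules: {flagged}")
```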
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-target-https-proxies' AND json.rule = 'quicOverride does not contain ENABLE'```
GCP Load balancer HTTPS target proxy is not configured with QUIC protocol This policy identifies Load Balancer HTTPS target proxies that are not configured with the QUIC protocol. Enabling the QUIC protocol on load balancer HTTPS target proxies provides faster connection establishment, stream-based multiplexing, improved loss recovery, and eliminates head-of-line blocking. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'advanced menu' hyperlink to view target proxies\n5. Click on 'Target proxies' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'URL map'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. Select 'Enabled' from the dropdown for 'QUIC negotiation'\n11. Click on 'Done'\n12. Click on 'Update'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' and json.rule = state.name contains "running"```
Khalid Test Policy This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = status equals ACTIVE and containerDefinitions[*].privileged exists and containerDefinitions[*].privileged is true```
AWS ECS task definition elevated privileges enabled This policy identifies ECS containers that have elevated privileges on the host container instance. When the Privileged parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). Note: This parameter is not supported for Windows containers or tasks using the Fargate launch type. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: Create a task definition revision.\n\n1. Open the Amazon ECS console.\n2. From the navigation bar, choose the region that contains your task definition.\n3. In the navigation pane, choose Task Definitions.\n4. On the Task Definitions page, select the box to the left of the task definition to revise and choose Create new revision.\n5. On the Create new revision of Task Definition page, change the existing Container Definitions.\n6. Under Security, uncheck the Privileged box.\n7. Verify the information and choose Update, then Create.\n8. If your task definition is used in a service, update your service with the updated task definition.\n9. Deactivate previous task definition.
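A boto3 sketch (illustrative; it can be slow in accounts with many task definition revisions) that flags active task definitions containing a privileged container:

```python
# Sketch: flag ACTIVE ECS task definitions that include a privileged container.
import boto3

ecs = boto3.client("ecs")

for page in ecs.get_paginator("list_task_definitions").paginate(status="ACTIVE"):
    for td_arn in page["taskDefinitionArns"]:
        td = ecs.describe_task_definition(taskDefinition=td_arn)["taskDefinition"]
        if any(c.get("privileged") for c in td["containerDefinitions"]):
            print(f"Privileged container in task definition: {td_arn}")
```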
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-neptune-db-cluster' AND json.rule = Status contains available and IAMDatabaseAuthenticationEnabled is false```
AWS Neptune Cluster not configured with IAM authentication This policy identifies AWS Neptune clusters that are not configured with IAM authentication. If you enable IAM authentication you don't need to store user credentials in the database, because authentication is managed externally using IAM. IAM database authentication ensures the network traffic to and from database clusters is encrypted using Secure Sockets Layer (SSL), provides central access management to your database resources and enforces use of profile credentials instead of a password, for greater security. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable IAM authentication for AWS Neptune cluster follow the steps mentioned in below URL:\n\nhttps://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-enable.html.
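With boto3, detection (and, if appropriate for your change process, remediation) can be sketched roughly as follows. This is illustrative only; the engine filter and the commented `modify_db_cluster` parameters are assumptions to validate against your environment and maintenance windows.

```python
# Sketch: find Neptune clusters without IAM database authentication enabled.
import boto3

neptune = boto3.client("neptune")

for cluster in neptune.describe_db_clusters()["DBClusters"]:
    if cluster.get("Engine") != "neptune":
        continue
    if cluster["Status"] == "available" and not cluster.get("IAMDatabaseAuthenticationEnabled"):
        print(f"IAM auth disabled: {cluster['DBClusterIdentifier']}")
        # Possible remediation (validate change-window requirements first):
        # neptune.modify_db_cluster(
        #     DBClusterIdentifier=cluster["DBClusterIdentifier"],
        #     EnableIAMDatabaseAuthentication=True,
        #     ApplyImmediately=True,
        # )
```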
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = type equals application and listeners[?any(protocol equals HTTPS and sslPolicy exists and sslPolicy is not member of ('ELBSecurityPolicy-TLS13-1-2-2021-06','ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04'))] exists```
AWS Application Load Balancer (ALB) is not using the latest predefined security policy This policy identifies Application Load Balancers (ALBs) that are not using the latest predefined security policy. A security policy is a combination of protocols and ciphers. The protocol establishes a secure connection between a client and a server and ensures that all data passed between the client and your load balancer is private. A cipher is an encryption algorithm that uses encryption keys to create a coded message. So it is recommended to use the latest predefined security policy which uses only secured protocol and ciphers. We recommend using either non-FIPS security policy ELBSecurityPolicy-TLS13-1-2-2021-06 or FIPS security policy ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04 to meet compliance and security standards that require disabling certain TLS protocol versions or to support legacy clients that require deprecated ciphers. For more details: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html#describe-ssl-policies This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n\n4. Click on the reported Load Balancer\n\n5. On the 'Listeners' tab, Choose the 'HTTPS' or 'SSL' rule\n\n6. Click on 'Edit Listener' in the 'Manage listener' dropdown, Change 'Security policy' to 'ELBSecurityPolicy-TLS13-1-2-2021-06' (non-FIPS) or 'ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04' (FIPS) to meet compliance and security standards that require disabling certain TLS protocol versions or to support legacy clients that require deprecated ciphers.\n\n7. Click on 'Update' to save your changes.
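The following boto3 sketch (illustrative, single-region) lists HTTPS listeners whose security policy is not one of the two recommended policies and shows where `modify_listener` would apply the update:

```python
# Sketch: report ALB HTTPS listeners not using the recommended TLS 1.3 security policies.
import boto3

RECOMMENDED = {"ELBSecurityPolicy-TLS13-1-2-2021-06", "ELBSecurityPolicy-TLS13-1-2-FIPS-2023-04"}

elbv2 = boto3.client("elbv2")

for page in elbv2.get_paginator("describe_load_balancers").paginate():
    for lb in page["LoadBalancers"]:
        if lb["Type"] != "application":
            continue
        listeners = elbv2.describe_listeners(LoadBalancerArn=lb["LoadBalancerArn"])["Listeners"]
        for listener in listeners:
            if listener["Protocol"] == "HTTPS" and listener.get("SslPolicy") not in RECOMMENDED:
                print(f"{lb['LoadBalancerName']}: {listener['ListenerArn']} uses {listener.get('SslPolicy')}")
                # Possible remediation (validate client compatibility first):
                # elbv2.modify_listener(ListenerArn=listener["ListenerArn"],
                #                       SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06")
```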
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = zone exists and locations[*] size less than 3```
GCP Kubernetes cluster not in redundant zones Putting resources in different zones in a region provides isolation from many types of infrastructure, hardware, and software failures. This policy alerts if your cluster is not located in at least 3 zones. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Add zones to your zonal cluster.\n\n1. Visit the Google Kubernetes Engine menu in GCP Console.\n2. Click the cluster's Edit button, which looks like a pencil.\n3. From the Additional zones section, select the desired zones.\n4. Click Save.
```config from cloud.resource where api.name = 'aws-iam-list-roles' AND json.rule = role.assumeRolePolicyDocument.Statement[*].Action contains "sts:AssumeRoleWithWebIdentity" and role.assumeRolePolicyDocument.Statement[*].Principal.Federated contains "cognito-identity.amazonaws.com" and role.assumeRolePolicyDocument.Statement[*].Effect contains "Allow" and role.assumeRolePolicyDocument.Statement[*].Condition contains "cognito-identity.amazonaws.com:amr" and role.assumeRolePolicyDocument.Statement[*].Condition contains "unauthenticated" as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any(Effect equals Allow and Action contains :* and Resource equals * )] exists as Y; filter "($.X.inlinePolicies[*].policyDocument.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action contains :* ) or ($.X.attachedPolicies[*].policyArn intersects $.Y.policyArn)"; show X;```
AWS Cognito service role with wide privileges does not validate authentication This policy identifies the AWS Cognito service role that has wide privileges and does not validate user authentication. AWS Cognito is an identity and access management service for web and mobile apps. AWS Cognito service roles define permissions for AWS services accessing resources. The 'amr' field in the service role represents how the user was authenticated. If the user was authenticated using any of the supported providers, the 'amr' will contain 'authenticated' and the name of the provider. Not validating the 'amr' field can allow an unauthenticated user (guest access) with a valid token signed by the identity-pool to assume the Cognito role. If this Cognito role has a '*' wildcard in the action and resource, it could lead to lateral movement or unauthorized access. Limiting privileges according to business requirements can help restrict unauthorized access and misuse of resources. It is recommended to limit the Cognito service role used for guest access to not have a '*' wildcard in the action or resource. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: To remove the policy that has excessive permissions from the guest access role:\n1. Log in to the AWS console.\n2. Navigate to the IAM service.\n3. Click on Roles.\n4. Click on the reported IAM role.\n5. Under 'Permissions policies' section, remove the policy having excessive permissions and assign a limited permission policy as required for a particular role.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = 'notebookInstanceStatus equals InService and kmsKeyId does not exist'```
AWS SageMaker notebook instance not configured with data encryption at rest using KMS key This policy identifies SageMaker notebook instances that are not configured with data encryption at rest using the AWS managed KMS key. It is recommended to implement encryption at rest in order to protect data from unauthorized entities. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS SageMaker notebook instance cannot be configured with data encryption at rest once it is created. You need to create a new notebook instance with encryption at rest using the KMS key; migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a New AWS SageMaker notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, From the 'Permissions and encryption' section, \nselect the KMS key from the 'Encryption key - optional' dropdown list. If no KMS key exists already, you have to create one first.\n5. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and Choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu and, select the 'Stop' option, and when instance stops, select the 'Delete' option.\n5. Within Delete <notebook-instance-name> dialog box, click the Delete button to confirm the action.
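A boto3 sketch for finding in-service notebook instances without a KMS key (illustrative; the per-instance describe call is used because the list response does not include the key):

```python
# Sketch: list InService SageMaker notebook instances that have no KMS key configured.
import boto3

sm = boto3.client("sagemaker")

for page in sm.get_paginator("list_notebook_instances").paginate(StatusEquals="InService"):
    for nb in page["NotebookInstances"]:
        detail = sm.describe_notebook_instance(NotebookInstanceName=nb["NotebookInstanceName"])
        if not detail.get("KmsKeyId"):
            print(f"No KMS key on notebook instance: {nb['NotebookInstanceName']}")
```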
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and settings.databaseFlags[?(@.name=='log_min_messages')] does not exist"```
GCP PostgreSQL instance database flag log_min_messages is not set This policy identifies PostgreSQL database instances in which database flag log_min_messages is not set. The log_min_messages flag controls which message levels are written to the server log, valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each level includes all the levels that follow it. log_min_messages flag value changes should only be made in accordance with the organization's logging policy. Auditing helps in troubleshooting operational problems and also permits forensic analysis. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, you need to START the instance first to edit the configurations, then click on EDIT.\n5. Under 'Configuration options', click on 'Add item' in 'Flags' section, choose the flag 'log_min_messages' from the drop-down menu and set the value in accordance with your organization's logging policy.\n6. Click Save.
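If you prefer to audit this programmatically, a sketch using the Cloud SQL Admin API via `googleapiclient` might look like the following. `PROJECT_ID` is a placeholder, and the client library plus application-default credentials are assumptions.

```python
# Sketch: list Cloud SQL PostgreSQL instances where the log_min_messages flag is not set.
# Assumes google-api-python-client is installed and application-default credentials are configured.
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder

sqladmin = discovery.build("sqladmin", "v1beta4")
instances = sqladmin.instances().list(project=PROJECT_ID).execute().get("items", [])

for inst in instances:
    if not inst.get("databaseVersion", "").startswith("POSTGRES"):
        continue
    flags = inst.get("settings", {}).get("databaseFlags", [])
    if not any(f.get("name") == "log_min_messages" for f in flags):
        print(f"log_min_messages not set on instance: {inst['name']}")
```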
```config from cloud.resource where api.name = 'oci-cloudguard-configuration' AND json.rule = status does not equal ignore case ENABLED```
OCI Cloud Guard is not enabled in the root compartment of the tenancy This policy identifies the absence of OCI Cloud Guard enablement in the root compartment of the tenancy. OCI Cloud Guard is a vital service that detects misconfigured resources and insecure activities within an OCI tenancy. It offers security administrators visibility to identify and resolve these issues promptly. Cloud Guard not only detects but also suggests, assists, or takes corrective actions to mitigate security risks. By enabling Cloud Guard in the root compartment of the tenancy with default configuration, activity detectors, and responders, administrators can proactively monitor and secure their OCI resources against potential security threats. As a best practice, it is recommended to have Cloud Guard enabled in the root compartment of your tenancy. This is applicable to oci cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the OCI Cloud Guard setting, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/cloud-guard/using/part-start.htm#cg-access-enable.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = "(policies[*].policyAttributeDescriptions[?(@.attributeValue=='true')].attributeName equals Protocol-TLSv1) or (policies[*].policyAttributeDescriptions[?(@.attributeValue=='true')].attributeName equals Protocol-SSLv3) or (policies[*].policyAttributeDescriptions[?(@.attributeValue=='true')].attributeName equals Protocol-TLSv1.1)"```
AWS Elastic Load Balancer (Classic) SSL negotiation policy configured with vulnerable SSL protocol This policy identifies Elastic Load Balancers (Classic) that are configured with an SSL negotiation policy containing a vulnerable SSL protocol. The SSL protocol establishes a secure connection between a client and a server and ensures that all the data passed between the client and your load balancer is private. As a security best practice, it is recommended to use the latest SSL/TLS protocol version. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 Dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Click on the reported Load Balancer\n6. On 'Listeners' tab, Click on 'Edit' button\n7. On 'Edit Listeners' popup for rule 'HTTPS/SSL',\n- If your cipher is 'Predefined Security Policy', change 'Cipher' to 'ELBSecurityPolicy-TLS-1-2-2017-01 or latest'\nOR\n- If your cipher is 'Custom Security Policy', Choose 'Protocol-TLSv1.2' only on 'SSL Protocols' section\n8. Click on 'Save'.
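A boto3 sketch (illustrative, single-region) that inspects each Classic ELB's SSL negotiation policies for the deprecated protocols named in the query:

```python
# Sketch: flag Classic ELB policies that still enable SSLv3, TLSv1, or TLSv1.1.
import boto3

WEAK_PROTOCOLS = {"Protocol-SSLv3", "Protocol-TLSv1", "Protocol-TLSv1.1"}

elb = boto3.client("elb")

for page in elb.get_paginator("describe_load_balancers").paginate():
    for lb in page["LoadBalancerDescriptions"]:
        policies = elb.describe_load_balancer_policies(
            LoadBalancerName=lb["LoadBalancerName"]
        )["PolicyDescriptions"]
        for policy in policies:
            enabled = {
                attr["AttributeName"]
                for attr in policy.get("PolicyAttributeDescriptions", [])
                if attr.get("AttributeValue") == "true"
            }
            weak = enabled & WEAK_PROTOCOLS
            if weak:
                print(f"{lb['LoadBalancerName']}: policy '{policy['PolicyName']}' enables {weak}")
```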
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals CosmosDbs and properties.pricingTier does not equal Standard)] exists```
Azure Microsoft Defender for Cloud set to Off for Cosmos DB This policy identifies Azure Microsoft Defender for Cloud which has the Defender setting for Cosmos DB set to Off. Enabling Azure Defender for the cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behaviour analytics. Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities, or malicious insiders. It is highly recommended to enable Azure Defender for Cosmos DB. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click 'Select types >' in the row for 'Databases'\n7. Set the radio button next to 'Azure Cosmos DB' to 'On'\n8. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' and api.name= 'aws-es-describe-elasticsearch-domain' AND json.rule = serviceSoftwareOptions.updateAvailable exists and serviceSoftwareOptions.updateAvailable is true```
AWS OpenSearch domain does not have the latest service software version This policy identifies Amazon OpenSearch Service domains that have service software updates available but not installed for the domain. Amazon OpenSearch Service is a managed solution for deploying, managing, and scaling OpenSearch clusters. Service software updates deliver the most recent platform fixes, enhancements, and features for the environment, ensuring domain security and availability. To minimize service disruption, it's advisable to schedule updates during periods of low domain traffic. It is recommended to keep OpenSearch regularly updated to maintain system security, while also accessing the latest features and improvements. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To request a service software update for an Amazon OpenSearch Service, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the region from the dropdown in the top right corner where the alert is generated\n3. In the Navigation Panel on the left, under 'Analytics', select 'Amazon OpenSearch Service'\n4. Select the reported domain name\n5. Under 'Actions', under 'Service software update', click on 'Update' and select one of the following options:\n\na. Apply update now - Immediately schedules the action to happen in the current hour if there's capacity available. If capacity isn't available, we provide other available time slots to choose from\n\nb. Schedule it in off-peak window - Only available if the off-peak window is enabled for the domain. Schedules the update to take place during the domain's configured off-peak window. There's no guarantee that the update will happen during the next immediate window. Depending on capacity, it might happen in subsequent days\n\nc. Schedule for specific date and time - Schedules the update to take place at a specific date and time. If the time that you specify is unavailable for capacity reasons, you can select a different time slot\n\n6. Choose 'Confirm'.
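Where automation is preferred, a minimal boto3 sketch is shown below; the domain name is a hypothetical placeholder, and the update request follows the same semantics as the console 'Apply update now' option.

```python
import boto3

DOMAIN_NAME = "my-opensearch-domain"  # hypothetical; use the reported domain name

client = boto3.client("opensearch")

# Check whether a service software update is pending for the domain
status = client.describe_domain(DomainName=DOMAIN_NAME)
options = status["DomainStatus"]["ServiceSoftwareOptions"]

if options.get("UpdateAvailable"):
    # Request the service software update for the domain
    client.start_service_software_update(DomainName=DOMAIN_NAME)
```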
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-docdb-db-cluster' AND json.rule = Status equals available and ( BackupRetentionPeriod does not exist or BackupRetentionPeriod less than 7 )```
AWS DocumentDB clusters have backup retention period less than 7 days This policy identifies Amazon DocumentDB clusters lacking sufficient backup retention tenure. Amazon DocumentDB clusters are managed database services on AWS, compatible with MongoDB. They handle tasks like provisioning and backup. With features like automated backups and read replicas, they offer a reliable solution for MongoDB workloads in the cloud. The backup retention period denotes the duration for storing automated backups of the DocumentDB cluster. Inadequate retention periods heighten the risk of data loss, compliance issues, and hinder effective recovery in security breaches or system failures. It is recommended to ensure a backup retention period of at least 7 days or according to your business and compliance requirement. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To modify an Amazon DocumentDB cluster's backup retention period:\n1. Sign in to the AWS Management Console.\n2. In the console, select the specific region from the region dropdown in the top right corner where the alert is generated.\n3. Navigate to the Amazon DocumentDB console by either searching for 'Amazon DocumentDB' in the AWS services search bar or directly accessing the Amazon DocumentDB service.\n4. In the navigation pane, choose 'Clusters' and select the cluster name that is reported.\n5. Click 'Actions' in the right corner, and then select 'Modify' from the drop-down menu.\n6. On the Modify cluster page, under the 'Backup' section, select the desired backup retention period in days from the 'Backup retention period' drop-down menu based on your business or compliance requirements.\n7. Click 'Continue' to review a summary of your changes.\n8. Choose either 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your scheduling preference for modifications.\n9. Click on 'Modify Cluster' to implement the changes..
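A minimal boto3 sketch of the same change is shown below, assuming a hypothetical cluster identifier; adjust the retention period to your business or compliance requirements.

```python
import boto3

CLUSTER_ID = "my-docdb-cluster"  # hypothetical; use the reported cluster identifier

docdb = boto3.client("docdb")

# Raise the automated backup retention period to at least 7 days
docdb.modify_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    BackupRetentionPeriod=7,
    ApplyImmediately=True,  # set to False to apply during the next maintenance window
)
```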
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = "((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false)))) and websiteConfiguration does not exist"```
AWS S3 buckets are accessible to public via ACL This policy identifies S3 buckets which are publicly accessible via ACL. Amazon S3 is often used to store highly sensitive enterprise data, and allowing public access to such S3 buckets through an ACL could result in sensitive data being compromised. It is highly recommended to disable ACL configuration for all S3 buckets and use resource-based policies to allow access to S3 buckets. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If 'Access Control List' is set to 'Public', follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save.
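For scripted remediation, the sketch below (boto3, hypothetical bucket name) resets the bucket ACL to private and additionally enables the S3 Block Public Access settings; review bucket policies and application access patterns before applying.

```python
import boto3

BUCKET = "my-bucket"  # hypothetical; use the reported bucket name

s3 = boto3.client("s3")

# Remove public ACL grants by resetting the bucket ACL to private
s3.put_bucket_acl(Bucket=BUCKET, ACL="private")

# Additionally block future public ACLs and public policies at the bucket level
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```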
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-compute' AND json.rule = properties.provisioningState equal ignore case Succeeded AND properties.properties.connectivityEndpoints.publicIpAddress exists AND properties.properties.connectivityEndpoints.publicIpAddress does not equal ignore case "null"```
Azure Machine learning compute instance configured with public IP This policy identifies Azure Machine Learning compute instances which are configured with public IP. Configuring an Azure Machine Learning compute instance with a public IP exposes it to significant security risks, including unauthorized access and cyber-attacks. This setup increases the likelihood of data breaches, where sensitive information and intellectual property could be accessed by unauthorized individuals, leading to potential data leakage and loss. As a best practice, it is recommended not to configure Azure Machine Learning instances with public IP. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Disabling a public IP address on an existing Azure Machine Learning compute instance is not supported without deleting and recreating the instance. To secure your instance, it’s recommended to configure it without a public IP from the start. Additionally, to update an existing Azure Machine Learning workspace to use a managed virtual network, all compute resources (including compute instances, compute clusters, and managed online endpoints) must first be deleted.\n\nTo create a new compute instance with no public IP:\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the Azure Machine Learning Workspace that the reported compute instance is associated with\n4. Under 'Settings' go to 'Networking' section\n5. At the top, select the 'Workspace managed outbound access' tab\n6. Select either 'Allow Internet Outbound' or 'Allow Only Approved Outbound' based on your requirements, if one hasn't been chosen already\n7. Click on 'Save'\n8. On the 'Overview' page, click the 'Studio web URL' link to log in to Azure ML Studio\n9. A new tab will open for Azure ML Studio\n10. In the left panel, under 'Manage' section, click on the 'Compute'\n11. Click 'New' to create a new compute instance\n12. In the 'Security' tab, under the 'Virtual network' section, enable the 'No public IP' option to disable the public IP\n13. Select 'Review + Create' to create the compute instance.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ownershipControls.rules[*] does not contain BucketOwnerEnforced```
AWS S3 bucket access control lists (ACLs) in use This policy identifies AWS S3 buckets which are using access control lists (ACLs). ACLs are a legacy way to control access to S3 buckets. It is recommended to disable bucket ACLs and instead use IAM policies or S3 bucket policies to manage access to your S3 buckets. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions' tab\n5. Under 'Object Ownership' click 'Edit'\n6. Select 'ACLs disabled (recommended)'\n7. Click on 'Save changes'.
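The same change can be made programmatically; the boto3 sketch below assumes a hypothetical bucket name and sets object ownership to 'BucketOwnerEnforced', which disables ACLs.

```python
import boto3

BUCKET = "my-bucket"  # hypothetical; use the reported bucket name

s3 = boto3.client("s3")

# Disable ACLs by enforcing bucket-owner ownership of all objects
s3.put_bucket_ownership_controls(
    Bucket=BUCKET,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)
```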
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING and masterAuthorizedNetworksConfig.enabled does not equal "true"```
GCP Kubernetes Engine Clusters have Master authorized networks disabled This policy identifies Kubernetes Engine Clusters which have disabled Master authorized networks. Enabling Master authorized networks will let the Kubernetes Engine block untrusted non-GCP source IPs from accessing the Kubernetes master through HTTPS. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Follow the below link for enabling Master authorized networks feature on kubernetes clusters,\nLink: https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks#add.
```config from cloud.resource where api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = associatePublicIpAddress exists and associatePublicIpAddress is true```
AWS Auto Scaling group launch configuration has public IP address assignment enabled This policy identifies the autoscaling group launch configuration that is configured to assign a public IP address. Auto Scaling groups assign a public IP address to the group's ec2 instances if its associated launch configuration is configured to assign a public IP address. Amazon EC2 instances should only be accessible from behind a load balancer instead of being directly exposed to the internet. It is recommended that the Amazon EC2 instances in an autoscaling group launch configuration do not have an associated public IP address except for limited edge cases. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: An Auto Scaling group is associated with one launch configuration at a time. You cannot modify a launch configuration after you have created it. To change the launch configuration for an Auto Scaling group, You need to use an existing launch configuration as the basis for a new launch configuration first. Then, update the Auto Scaling group to use the new launch configuration before you delete the reported Auto Scaling group configuration.\n\nTo update the Auto Scaling group to use the new launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Copy launch configuration'. This sets up a new launch configuration with the same options as the original, but with 'Copy' added to the name.\n4. On the 'Create launch configuration' page, expand 'Advanced details' under 'Additional Configuration - optional'.\n5. Under the IP address type, choose 'Do not assign a public IP address to any instances'.\n6. When you have finished, click on the 'Create launch configuration' button at the bottom of the page.\n7. On the navigation pane, under Auto Scaling, choose Auto Scaling Groups.\n8. Select the check box next to the Auto Scaling group.\n9. A split pane opens up at the bottom part of the page, showing information about the group that's selected.\n10. On the Details tab, click on the 'Edit' button adjacent to the 'Launch configuration' option.\n11. Under the 'Launch configuration' dropdown, select the newly created launch configuration.\n12. When you have finished changing your launch configuration, click on the 'Update' button at the bottom of the page.\n\nAfter you change the launch configuration for an Auto Scaling group, any new instances are launched with the new configuration options. Existing instances are not affected. To update existing instances, either terminate them so that they are replaced by your Auto Scaling group or allow automatic scaling to gradually replace older instances with newer instances based on your termination policies.\n\nTo delete the reported Auto Scaling group launch configuration follow the steps below:\n\n1. Open the Amazon EC2 console.\n2. On the left navigation pane, under 'Auto Scaling', choose 'Auto Scaling Groups' and Choose 'Launch configurations' near the top of the page.\n3. Select the reported launch configuration and choose Actions, then click 'Delete launch configuration'.\n4. Click on the 'Delete' button to delete the autoscaling group launch configuration..
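The console workflow above can be approximated with boto3 as sketched below; the launch configuration and Auto Scaling group names are hypothetical, and only a few settings are copied for brevity — a real migration should copy all relevant fields (key pair, user data, IAM instance profile, block device mappings, and so on).

```python
import boto3

OLD_LC = "my-launch-config"                 # hypothetical reported launch configuration
NEW_LC = "my-launch-config-no-public-ip"    # hypothetical replacement name
ASG_NAME = "my-auto-scaling-group"          # hypothetical Auto Scaling group name

autoscaling = boto3.client("autoscaling")

# Read the existing launch configuration to reuse its core settings
old = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=[OLD_LC]
)["LaunchConfigurations"][0]

# Create a copy that does not assign a public IP address
autoscaling.create_launch_configuration(
    LaunchConfigurationName=NEW_LC,
    ImageId=old["ImageId"],
    InstanceType=old["InstanceType"],
    SecurityGroups=old["SecurityGroups"],
    AssociatePublicIpAddress=False,
)

# Point the Auto Scaling group at the new launch configuration
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName=ASG_NAME,
    LaunchConfigurationName=NEW_LC,
)

# The old launch configuration can then be deleted
autoscaling.delete_launch_configuration(LaunchConfigurationName=OLD_LC)
```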
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-virtual-mfa-devices' AND json.rule = 'serialNumber contains root-account-mfa-device and user.arn contains root'```
AWS root account configured with Virtual MFA This policy identifies AWS root accounts which are configured with Virtual MFA. Root is an important role in your account and root accounts must be configured with hardware MFA. Hardware MFA adds extra security because it requires users to type a unique authentication code from an approved authentication device when they access AWS websites or services. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: To manage MFA devices for your AWS account, you must use your root user credentials to sign in to AWS. You cannot manage MFA devices for the root user while signed in with other credentials.\n\n1. Sign in to the AWS Management Console with your root user credentials\n2. Go to IAM\n3. Do one of the following:\nOption 1: Choose Dashboard, and under Security Status, expand Activate MFA on your root account.\nOption 2: On the right side of the navigation bar, select your account name, and then choose My Security Credentials. If necessary, choose Continue to Security Credentials. Then expand the Multi-Factor Authentication (MFA) section on the page.\n4. Choose Manage MFA or Activate MFA, depending on which option you chose in the preceding step.\n5. In the wizard, choose A hardware MFA device and then choose Next Step.\n6. If you have U2F security key as hardware MFA device, choose U2F security key and click on Continue. Next plug the USB U2F security key, when setup is complete click on Close.\nIf you have any other hardware MFA device, choose Other hardware MFA device option\na. In the Serial Number box, type the serial number that is found on the back of the MFA device.\nb. In the Authentication Code 1 box, type the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number.\nc. Wait 30 seconds while the device refreshes the code, and then type the next six-digit number into the Authentication Code 2 box. You might need to press the button on the front of the device again to display the second number.\nd. Choose Next Step. The MFA device is now associated with the AWS account.\n\nImportant:\nSubmit your request immediately after generating the authentication codes. If you generate the codes and then wait too long to submit the request, the MFA device successfully associates with the user but the MFA device becomes out of sync. This happens because time-based one-time passwords (TOTP) expire after a short period of time. If this happens, you can resync the device..
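To check programmatically whether the root user is using a virtual MFA device, a minimal boto3 detection sketch (mirroring the query above) could look like this:

```python
import boto3

iam = boto3.client("iam")

# List assigned virtual MFA devices and flag any device bound to the root user
for device in iam.list_virtual_mfa_devices(AssignmentStatus="Assigned")["VirtualMFADevices"]:
    serial = device["SerialNumber"]
    user_arn = device.get("User", {}).get("Arn", "")
    if "root-account-mfa-device" in serial and user_arn.endswith(":root"):
        print(f"Root account uses a virtual MFA device: {serial}")
```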
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case "Running" AND kind contains "functionapp" AND kind does not contain "workflowapp" AND kind does not equal "app" AND config.minTlsVersion does not equal "1.2"```
Azure Function App doesn't use latest TLS version This policy identifies Azure Function Apps that are not set with the latest version of TLS encryption. Azure currently allows a Function App to set TLS versions 1.0, 1.1 and 1.2. It is highly recommended to use the latest TLS 1.2 version for Function App secure connections. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'TLS/SSL settings'\n5. In 'Protocol Settings', Set 'Minimum TLS Version' to '1.2'.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-nsg' AND json.rule = (securityRules[?any((((*.destinationPortRange.min == 22 or *.destinationPortRange.max == 22) or (*.destinationPortRange.min < 22 and *.destinationPortRange.max > 22)) or (protocol equals "all") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))) and (source equals 0.0.0.0/0 and direction equals INGRESS))] exists)```
OCI security group allows unrestricted ingress access to port 22 This policy identifies OCI Security groups that allow unrestricted ingress access to port 22. It is recommended that no security group allows unrestricted ingress access to port 22. As a best practice, remove unfettered connectivity to remote console services, such as Secure Shell (SSH), to reduce server's exposure to risk. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Security Rules\n5. If you want to add a rule, click Add Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```
sailesh of liron's policy #4 This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty```
Azure PostgreSQL servers not configured with private endpoint This policy identifies Azure PostgreSQL database servers that are not configured with a private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configuring a private endpoint enables access to traffic coming from only known networks and prevents access from malicious or unknown IP addresses, which includes IP addresses within Azure. It is recommended to create a private endpoint for secure communication for your Azure PostgreSQL database. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for Postgres servers'\n3. Click on the reported Postgres server instance you want to modify\n4. Select 'Networking' under 'Settings' from left panel\n5. Under 'Private endpoint', click on 'Add private endpoint' to add a private endpoint\n\nRefer to below link for step by step process:\nhttps://learn.microsoft.com/en-us/azure/postgresql/single-server/how-to-configure-privatelink-portal.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and iamPolicy.bindings[?any(members[*] is member of ("allAuthenticatedUsers","allUsers"))] exists```
mkurter clone of GCP Cloud Function is publicly accessible This policy identifies GCP Cloud Functions that are publicly accessible. Allowing 'allUsers'/'allAuthenticatedUsers' access to cloud functions can lead to unauthorized invocations of the functions or unwanted access to sensitive information. It is recommended to follow the least-privileged access policy and grant access restrictively. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allUsers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to GCP console\n2. Navigate to service 'Cloud Functions'\n3. Click on the function on which the alert is generated\n4. Go to tab 'PERMISSIONS'\n5. Review the roles to see if 'allUsers'/'allAuthenticatedUsers' is present\n6. Click on the delete icon to revoke access from 'allUsers'/'allAuthenticatedUsers'\n7. On Pop-up select the check box to confirm\n8. Click on 'REMOVE'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-emr-describe-cluster' AND json.rule = 'status.state does not contain TERMINATING and securityConfiguration does not exist'```
AWS EMR cluster is not configured with security configuration This policy identifies EMR clusters which are not configured with a security configuration. With Amazon EMR release version 4.8.0 or later, you can use security configurations to configure data encryption, Kerberos authentication, and Amazon S3 authorization for EMRFS. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown\n4. Go to 'Security configurations', click 'Create'\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration\n7. Follow below link to configure a security configuration\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-create-security-configuration.html\n8. Click on 'Create' button\n9. On the left menu of EMR dashboard Click 'Clusters'\n10. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n11. In the Cloning popup, choose 'Yes' and Click 'Clone'\n12. On the Create Cluster page, in the Security Options section, click on 'security configuration'\n13. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n14. Once the new cluster is set up, verify it is working and terminate the source cluster in order to stop incurring charges for it\n15. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted\n16. Click on the 'Terminate' button from the top menu\n17. On the 'Terminate clusters' pop-up, click 'Terminate'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = '(lastAccessedDate does not exist and _DateTime.ageInDays(createdDate) > 90) or (lastAccessedDate exists and _DateTime.ageInDays(lastAccessedDate) > 90)'```
AWS Secrets Manager secret not used for more than 90 days This policy identifies the AWS Secrets Manager secret not accessed within 90 days. AWS Secrets Manager securely stores and manages sensitive information like API keys, passwords, and certificates. Leaving unused secrets in AWS Secrets Manager increases the risk of security breaches by providing unnecessary access points for attackers, potentially leading to unauthorized data access or leaks. It is recommended to routinely review and delete unused secrets to reduce the attack surface and potential for unauthorized access. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To delete an unused AWS Secrets Manager secret, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the region from the dropdown in the top right corner where the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Security, Identity, & Compliance', select 'Secrets Manager'\n4. Select the reported Secrets Manager secret\n5. In the Secret details section, choose 'Actions', and then choose 'Delete secret'\n6. In the Disable secret and schedule deletion dialog box, in Waiting period, enter the number of days to wait before the deletion becomes permanent.\n7. Choose 'Schedule deletion'.
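A minimal boto3 sketch for finding (and, after review, scheduling deletion of) secrets unused for more than 90 days is shown below; the deletion call is commented out and should only be run once you have confirmed the secret is safe to remove.

```python
import boto3
from datetime import datetime, timezone, timedelta

secrets = boto3.client("secretsmanager")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

# Find secrets whose last access (or creation, if never accessed) is older than 90 days
paginator = secrets.get_paginator("list_secrets")
for page in paginator.paginate():
    for secret in page["SecretList"]:
        last_used = secret.get("LastAccessedDate") or secret.get("CreatedDate")
        if last_used and last_used < cutoff:
            print(f"Unused secret: {secret['Name']}")
            # After review, schedule deletion with a recovery window (example only):
            # secrets.delete_secret(SecretId=secret["ARN"], RecoveryWindowInDays=30)
```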
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(1434,1434)"```
Alibaba Cloud Security group allow internet traffic to MS SQL Monitor port (1434) This policy identifies Security groups that allow inbound traffic on MS SQL Monitor port (1434) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 1434, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-resource-group' AND json.rule = locks.* size equals 0```
Azure Resource Group does not have a resource lock Azure Resource Manager locks provide a way to lock down Azure resources from being deleted or modified. The lock level can be set to either 'CanNotDelete' or 'ReadOnly'. When you apply a lock at a parent scope, all resources within the scope inherit the same lock, and the most restrictive lock takes precedence. This policy identifies Azure Resource Groups that do not have a lock set. As a best practice, place a lock on important resources to prevent accidental or malicious modification or deletion by unauthorized users. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Resource groups' dashboard\n3. Select the resource group that you want to lock\n4. Select 'Locks' under 'Settings' from left panel, then click on 'Add'\n5. Specify the lock name and type\n6. Select on 'OK' to save your changes.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-account-password-policy' AND json.rule = 'requireNumbers contains false and requireSymbols contains false and expirePasswords contains false and allowUsersToChangePassword contains false and requireLowercaseCharacters contains false and requireUppercaseCharacters contains false and maxPasswordAge does not exist and passwordReusePrevention does not exist and minimumPasswordLength==6'```
AWS IAM Password policy is insecure Checks to ensure that an IAM password policy is in place for the cloud accounts. As a security best practice, customers must have strong password policies in place. This policy ensures password policies are set with all of the following options: - Minimum Password Length - At least one Uppercase letter - At least one Lowercase letter - At least one Number - At least one Symbol/non-alphanumeric character - Users have permission to change their own password - Password expiration period - Password reuse - Password expiration requires administrator reset This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to AWS Console and navigate to the 'IAM' Service\n2. Click on 'Account Settings'\n3. Under 'Password Policy', select and set all the options\n4. Click on 'Apply password policy'.
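The same settings can be applied with boto3 as sketched below; the specific values are an example baseline, not a mandated standard, and should be aligned with your organization's password policy.

```python
import boto3

iam = boto3.client("iam")

# Apply a strong account password policy (example values; align with your own standards)
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=24,
    HardExpiry=False,
)
```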
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-load-balancer' AND json.rule = 'listenerPortsAndProtocal[*].listenerProtocal equals http'```
Alibaba Cloud SLB listeners that allow connection requests over HTTP This policy identifies Server Load Balancer (SLB) listeners that are configured to accept connection requests over HTTP instead of HTTPS. As a best practice, use the HTTPS protocol to encrypt the communication between the application clients and the server load balancer. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Once a load balancer listener is created, its protocol cannot be modified. To resolve this alert, delete the existing HTTP listener and create a new listener with the HTTPS protocol.\n\nTo create a new HTTPS listener, follow these steps:\n1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Click on the reported load balancer\n4. In the 'Listeners' tab, click on 'Add Listener'\n5. Select 'Select Listener Protocol' as 'HTTPS' and other parameters as per your requirement.\n6. Click on 'Next'\n7. Choose 'SSL Certificates', 'Backend Servers' and 'Health Check' sections parameters accordingly and Click on 'Submit'\n\nTo delete the existing HTTP listener, follow these steps:\n1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Click on the reported load balancer\n4. In the 'Listeners' tab, Choose HTTP Listener, Click on 'More' and select 'Remove'\n5. Click on 'OK'.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Icmp or protocol equals *) and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists```
Azure Network Security Group allows all traffic on ICMP (Ping) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on ICMP (Ping) protocol. ICMP is used by devices to communicate error messages and status. While ICMP is useful for diagnostics and troubleshooting, it can also be used to exploit or disrupt systems. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict ICMP (Ping) solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-instances-container-group' AND json.rule = properties.provisioningState equals Succeeded and properties.ipAddress.type exists and properties.ipAddress.type equals Public```
Azure Container Instance is not configured with virtual network This policy identifies Azure Container Instances (ACI) that are not configured with a virtual network. Making container instances public exposes them on an internet-routable network. By deploying container instances into an Azure virtual network, your containers can communicate securely with other resources in the virtual network. It is therefore recommended to configure all your container instances within a virtual network. For more details: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-vnet This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: A virtual network can only be configured at the time of container instance creation. Hence, it is suggested to delete an existing container instance that is not configured with a virtual network and create a new container instance with the virtual network configured with secure values.\nNote: Backup or migrate data from the container instance before deleting it.\n\nTo create a Container Instance within a virtual network; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-vnet\n\nTo delete a reported Container instance; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal#clean-up-resources.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-file-storage-file-system' as X; config from cloud.resource where api.name = 'oci-file-storage-export' AND json.rule = (exportOptions[?any(source equals 0.0.0.0/0 and requirePrivilegedSourcePort is false and access equals READ_WRITE and identitySquash equals NONE)] exists) as Y; filter '($.X.id equals $.Y.fileSystemId)';show X;```
OCI File Storage File System Export is publicly accessible This policy identifies the OCI File Storage File Systems Exports that are publicly accessible. Monitoring and alerting on publicly accessible file systems exports will help in identifying changes to the security posture and thus reduces risk for sensitive data being leaked. It is recommended that no File System exports be publicly accessible. FMI : https://docs.cloud.oracle.com/en-us/iaas/Content/File/Tasks/exportoptions.htm#scenarios This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on the alerted Export Path from the list of Exports\n5. Click on the Edit NFS Export Options\n6. Edit the export options to make it more restrictive\n7. Click Update.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(23,23) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
GCP Firewall rule allows all traffic on Telnet port (23) This policy identifies GCP Firewall rules which allow all inbound traffic on Telnet port (23). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the Telnet port (23) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = lifecyclePolicy does not exist```
AWS ECR Repository not configured with a lifecycle policy This policy identifies AWS ECR Repositories that are not configured with a lifecycle policy. Amazon ECR lifecycle policies enable you to specify the lifecycle management of images in a repository. This helps to automate the cleanup of unused images and the expiration of images based on age or count. As best practice, it is recommended to configure ECR repository with lifecycle policy which helps to avoid unintentionally using outdated images in your repository. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure AWS ECR Repository with a lifecycle policy follow the steps mentioned in below URL:\n\nhttps://docs.aws.amazon.com/AmazonECR/latest/userguide/lpp_creation.html.
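A minimal boto3 sketch is shown below; the repository name is hypothetical and the rule (expiring untagged images after 14 days) is only an example — tailor the lifecycle rules to how your images are tagged and consumed.

```python
import boto3
import json

REPOSITORY = "my-repository"  # hypothetical; use the reported repository name

ecr = boto3.client("ecr")

# Example lifecycle policy: expire untagged images older than 14 days
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images older than 14 days",
            "selection": {
                "tagStatus": "untagged",
                "countType": "sinceImagePushed",
                "countUnit": "days",
                "countNumber": 14,
            },
            "action": {"type": "expire"},
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName=REPOSITORY,
    lifecyclePolicyText=json.dumps(lifecycle_policy),
)
```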
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = clientToken is not empty AND monitoring.state contains "running"```
vv15_2 This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'oci-analytics-instance' AND json.rule = lifecycleState equal ignore case ACTIVE AND networkEndpointDetails.networkEndpointType equal ignore case PUBLIC AND (networkEndpointDetails.whitelistedServices is empty AND networkEndpointDetails.whitelistedIps is empty AND networkEndpointDetails.whitelistedVcns is empty)```
OCI Oracle Analytics Cloud (OAC) access is not restricted to allowed sources or deployed within a Virtual Cloud Network This policy identifies Oracle Analytics Cloud (OAC) instances that are not restricted to specific sources or not deployed within a Virtual Cloud Network (VCN). OAC is a scalable service for enterprise analytics, and restricting its access to corporate IP addresses or VCNs enhances security by reducing exposure to unauthorized access. Deploying OAC instances within a VCN and implementing access control rules is essential for protecting sensitive data. This ensures that only authorized sources can connect to OAC, mitigating risks and maintaining data integrity. As best practice, it is recommended to have new OAC instances deployed within a VCN, and existing instances should have access control rules configured to allow only approved sources. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To configure the OCI Oracle Analytics Cloud (OAC) access, refer to the following documentation:\nhttps://docs.oracle.com/en-us/iaas/analytics-cloud/doc/manage-service-access-and-security.html#ACOCI-GUID-08739F8B-13EC-4194-8EEF-58664F2C1178.
```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-sql-instances-list' AND json.rule = state equals "RUNNABLE" and diskEncryptionConfiguration.kmsKeyName does not exist```
GCP SQL Instance not encrypted with CMEK This policy identifies GCP SQL Instances that are not encrypted with Customer Managed Encryption Keys (CMEK). Using CMEK for SQL Instances provides greater control over data at rest encryption by allowing key rotation and revocation, which enhances security and helps meet compliance requirements. Encrypting SQL Instances with CMEK ensures better data privacy management. It is recommended to use CMEK for SQL Instance encryption. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP SQL Instance encryption cannot be changed after creation. To make use of CMEK a new SQL Instance can be created.\n\nTo create a new SQL Instance with CMEK, please follow the steps below:\n1. Login to the GCP console\n2. Navigate to the 'SQL' service\n3. Click 'CREATE INSTANCE'\n4. Select the database engine\n5. Under 'Customize your instance', expand 'SHOW CONFIGURATION OPTIONS'\n6. Expand 'STORAGE'\n7. Expand 'ADVANCED ENCRYPTION OPTIONS'\n8. Select 'Cloud KMS key'\n9. Select the appropriate 'Key type' and then select the required CMEK\n10. Configure the rest of the SQL instance as required\n11. Click 'CREATE INSTANCE' at the bottom of the page.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = "acl.grantsAsList[?(@.grantee=='AllUsers')].permission contains ReadAcp or acl.grantsAsList[?(@.grantee=='AllUsers')].permission contains FullControl"```
AWS S3 bucket has global view ACL permissions enabled This policy determines if any S3 bucket(s) has Global View ACL permissions enabled for the All Users group. These permissions allow external resources to see the permission settings associated to the object. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Go to the AWS console S3 dashboard.\n2. Select your bucket by clicking on the bucket name.\n3. Select the Permissions tab and 'Access Control List.'\n4. Under Public Access, select Everyone.\n5. In the popup window, under Access to this bucket's ACL, uncheck 'Read bucket permissions' and Save..
```config from cloud.resource where api.name = 'aws-rds-describe-db-instances' as X; config from cloud.resource where api.name = 'aws-ec2-describe-route-tables' AND json.rule = associations[*].subnetId exists and routes[?any( state equals active and gatewayId starts with igw- and (destinationCidrBlock equals "0.0.0.0/0" or destinationIpv6CidrBlock equals "::/0"))] exists as Y; filter '$.X.dbsubnetGroup.subnets[*].subnetIdentifier intersects $.Y.associations[*].subnetId'; show X;```
AWS RDS instance not in private subnet This policy identifies AWS RDS instances which are not in a private subnet. RDS should not be deployed in a public subnet; in most scenarios, production databases should be located behind a DMZ in a private subnet with limited access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To resolve this alert, you should redeploy RDS into a private RDS subnet group.\n\nNote: You cannot move an existing RDS instance from one subnet to another.\n\nCreate an RDS subnet group:\n\nA DB subnet group is a collection of subnets (typically private) that you create for a VPC and that you then designate for your DB instances.\n\n1. Open the Amazon RDS console\n2. In the navigation pane, choose 'Subnet groups'\n3. Choose 'Create DB Subnet Group'\n4. Type the 'Name' of your DB subnet group\n5. Add a 'Description' for your DB subnet group\n6. Choose your 'VPC'\n7. Choose 'Availability Zones'\n8. In the Add subnets section, add your Private subnets related to this VPC\n9. Choose Create\n\nWhen creating your RDS DB, under Configure advanced settings, choose the Subnet group created above.
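The DB subnet group creation described above can also be done with boto3, as in the sketch below; the subnet IDs and group name are hypothetical placeholders for your private subnets.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical private subnet IDs spanning at least two Availability Zones
PRIVATE_SUBNET_IDS = ["subnet-0aaa1111", "subnet-0bbb2222"]

# Create a DB subnet group made up of private subnets only; new RDS instances
# can then be launched into this group (existing instances must be recreated)
rds.create_db_subnet_group(
    DBSubnetGroupName="private-db-subnet-group",
    DBSubnetGroupDescription="Private subnets for RDS instances",
    SubnetIds=PRIVATE_SUBNET_IDS,
)
```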
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = status equal ignore case "available" and snapshotRetentionLimit does not exist or snapshotRetentionLimit < 1```
AWS ElastiCache Redis cluster is not configured with automatic backup This policy identifies Amazon ElastiCache Redis clusters where automatic backup is disabled by checking if SnapshotRetentionLimit is less than 1. Amazon ElastiCache for Redis clusters can back up their data. Automatic backups in ElastiCache Redis clusters ensure data durability and enable point-in-time recovery, protecting against data loss or corruption. Without backups, data loss from breaches or corruption could be irreversible, compromising data integrity and availability. It is recommended to enable automatic backups to adhere to compliance requirements and enhance security measures, ensuring data integrity and resilience against potential threats. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on 'Redis caches' under the 'Resources' section\n5. Select reported Redis cluster\n6. Click on 'Modify' button\n7. In the 'Modify Cluster' dialog box, Under the 'Backup' section \na. Select 'Enable automatic backups'\nb. Select the 'Backup node ID' that is used as the daily backup source for the cluster\nc. Select the 'Backup retention period' number of days according to your business requirements for which automated backups are retained before they're automatically deleted\nd. Select the 'Backup start time' and 'Backup duration' according to your requirements\n\n8. Click on 'Preview Changes'\n9. Select the Yes checkbox under 'Apply Immediately', to apply the configuration changes immediately. If Apply Immediately is not selected, the changes will be processed during the next maintenance window.\n10. Click on 'Modify'.
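For automation, a minimal boto3 sketch is shown below, assuming a hypothetical replication group ID; the retention limit and backup window are examples and should follow your business requirements.

```python
import boto3

REPLICATION_GROUP_ID = "my-redis-cluster"  # hypothetical; use the reported cluster

elasticache = boto3.client("elasticache")

# Enable automatic backups by setting a snapshot retention of at least 1 day
# (7 days and a sample window shown here; pick values that match your requirements)
elasticache.modify_replication_group(
    ReplicationGroupId=REPLICATION_GROUP_ID,
    SnapshotRetentionLimit=7,
    SnapshotWindow="03:00-05:00",
    ApplyImmediately=True,
)
```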
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = listeners[?any(sslPolicy contains ELBSecurityPolicy-TLS-1-0-2015-04)] exists```
AWS Elastic Load Balancer v2 (ELBv2) SSL negotiation policy configured with weak ciphers This policy identifies Elastic Load Balancers v2 (ELBv2) which are configured with SSL negotiation policy containing weak ciphers. An SSL cipher is an encryption algorithm that uses encryption keys to create a coded message. SSL protocols use several SSL ciphers to encrypt data over the Internet. As many of the other ciphers are not secure/weak, it is recommended to use only the ciphers recommended in the following AWS link: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the EC2 Dashboard, and select 'Load Balancers'\n4. Click on the reported Load Balancer\n5. On the 'Listeners' tab, Choose the 'HTTPS' or 'SSL' rule; Click on 'Edit', Change 'Security policy' to other than 'ELBSecurityPolicy-TLS-1-0-2015-04' as it contains DES-CBC3-SHA cipher, which is a weak cipher.\n6. Click on 'Update' to save your changes..
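A minimal boto3 sketch of the listener change is shown below; the listener ARN is a placeholder and the security policy name is an example of a stronger predefined policy — confirm the appropriate policy against the AWS documentation linked above.

```python
import boto3

LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account-id:listener/..."  # hypothetical ARN

elbv2 = boto3.client("elbv2")

# Replace the weak negotiation policy on the HTTPS listener with a stronger one
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",  # example policy; verify before use
)
```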
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```
liron's policy #4 This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kinesis-list-streams' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.Y.keyMetadata.keyManager == AWS and $.Y.key.keyArn == $.X.keyId and $.X.encryptionType equals KMS'; show X;```
AWS Kinesis streams encryption using default KMS keys instead of Customer's Managed Master Keys This policy identifies the AWS Kinesis streams which are encrypted with default KMS keys and not with Master Keys managed by Customer. It is a best practice to use customer managed Master Keys to encrypt your Amazon Kinesis streams data. It gives you full control over the encrypted data. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to Kinesis Service\n3. Select the reported Kinesis data stream for the corresponding region\n4. Under Server-side encryption, Click on Edit\n5. Choose Enabled\n6. Under KMS master key, You can choose any KMS other than the default (Default) aws/kinesis\n7. Click Save.
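Switching a stream from the AWS managed key to a customer managed key can be scripted as sketched below with boto3; the stream name and key alias are hypothetical placeholders.

```python
import boto3

STREAM_NAME = "my-stream"            # hypothetical stream name
CMK_KEY_ID = "alias/my-kinesis-cmk"  # hypothetical customer managed key alias

kinesis = boto3.client("kinesis")

# Switch server-side encryption from the AWS managed key to a customer managed key
kinesis.start_stream_encryption(
    StreamName=STREAM_NAME,
    EncryptionType="KMS",
    KeyId=CMK_KEY_ID,
)
```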
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'loggingService does not exist or loggingService equals none'```
GCP Kubernetes Engine Clusters have Cloud Logging disabled This policy identifies Kubernetes Engine Clusters that have Cloud Logging disabled. Enabling Cloud Logging lets Kubernetes Engine collect, process, and store your container and system logs in a dedicated persistent data store. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to 'Kubernetes Engine' (Left Panel)\n3. Select 'Clusters'\n4. From the list of clusters, click on the reported cluster\n5. Under 'Features', click on the edit button (pencil icon) in front of 'Cloud Logging'\n6. In the 'Edit Cloud Logging' dialog, enable the 'Enable Cloud Logging' checkbox\n7. Select components to be logged\n8. Click on 'Save Changes'.
```config from cloud.resource where api.name = 'aws-rds-db-cluster' AND json.rule = engine equals "aurora-mysql" and status equals "available" as X; config from cloud.resource where api.name = 'aws-rds-db-cluster-parameter-group' AND json.rule = DBParameterGroupFamily contains "aurora-mysql" as Y; filter '$.X.dBclusterParameterGroupArn equals $.Y.DBClusterParameterGroupArn and (($.Y.parameters.server_audit_logging.ParameterValue does not exist or $.Y.parameters.server_audit_logging.ParameterValue equals 0) or ($.X.enabledCloudwatchLogsExports does not contain "audit" and $.Y.parameters.server_audit_logs_upload.ParameterValue equals 0))' ; show X;```
AWS Aurora MySQL DB cluster does not publish audit logs to CloudWatch Logs This policy identifies AWS Aurora MySQL DB cluster where audit logging is disabled or audit logs are not published to Amazon CloudWatch Logs. Aurora MySQL DB cluster integrates with Amazon CloudWatch for performance metric gathering and analysis, supporting CloudWatch Alarms. While the Aurora MySQL DB cluster provides customizable audit logs for monitoring database operations, these logs are not automatically sent to CloudWatch Logs, limiting centralized monitoring and analysis of database activities. It is recommended to configure the Aurora MySQL DB cluster to enable audit logs and publish audit logs to CloudWatch This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a custom parameter group if the cluster has only the default parameter group use the following steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Parameter groups'\n5. Choose 'Create parameter group'\n6. The Create parameter group window appears\n\n 6a. In the 'Parameter group name' box, enter the name of the new DB cluster parameter group.\n 6b. In the 'Description' box, enter a description for the new DB cluster parameter group.\n 6c. In the 'Engine type' drop-down, select the engine type (Aurora MySQL)\n 6d. In the 'Parameter group family' list, select a DB parameter group family\n 6e. In the Type list, select 'DB cluster Parameter Group'.\n\n7. Choose 'Create'\n\nTo modify the custom DB cluster parameter group to enable audit logging, follow the below steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Parameter groups'\n5. In the list, choose the above-created parameter group or the reported resource custom parameter group that you want to modify.\n6. Choose 'Actions', and then choose 'Edit' to modify your Parameter group. \n7. Change the value of the 'server_audit_logging' parameter to '1' in the value drop-down and click 'Save Changes' for enabling audit logs.\n\nTo modify an AWS Aurora MySQL DB Cluster to use the custom parameter group, follow the below steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Databases'\n5. Choose the reported cluster that you want to associate your parameter group with. Choose 'Modify' to modify your cluster \n6. Under 'Additional configuration', select the above-created cluster parameter group from the 'DB cluster parameter group' dropdown\n7. Choose 'Continue' and check the summary of modifications\n8. Under the 'Schedule modifications' section, select 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your requirements for when to apply modifications\n9. 
Choose 'Modify cluster' to save your changes\n\nTo modify an AWS Aurora MySQL DB Cluster for enabling export logs to cloudwatch, follow the below steps: \n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. In the Navigation Panel on the left, Select 'All services' and under 'Database', select 'RDS'\n4. In the navigation pane, choose 'Databases'\n5. Choose the reported cluster that you want to associate your parameter group with. Choose 'Modify' to modify your cluster\n6. In the 'Log exports' section, choose the 'Audit log' to start publishing to CloudWatch Logs\n7. Choose 'Continue' and check the summary of modifications\n8. Under the 'Schedule modifications' section, select 'Apply during the next scheduled maintenance window' or 'Apply immediately' based on your requirements for when to apply modifications\n9. Choose 'Modify cluster' to save your changes.
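For teams that remediate programmatically, a minimal boto3 sketch along these lines can check and enable the audit log export; the cluster identifier and region are placeholders, and the server_audit_logging parameter must still be set to 1 in the cluster parameter group as described above.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

cluster_id = "my-aurora-cluster"  # hypothetical cluster identifier

# Inspect which log types the cluster currently exports to CloudWatch Logs
cluster = rds.describe_db_clusters(DBClusterIdentifier=cluster_id)["DBClusters"][0]
exported = cluster.get("EnabledCloudwatchLogsExports", [])
print("Currently exported log types:", exported)

# Enable export of the audit log if it is missing
if "audit" not in exported:
    rds.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
        ApplyImmediately=True,
    )
```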
```config from cloud.resource where api.name = 'gcloud-bigquery-table' AND json.rule = encryptionConfiguration.kmsKeyName does not exist```
GCP BigQuery Table not encrypted with CMEK This policy identifies GCP BigQuery tables that are not encrypted with Customer Managed Encryption Keys (CMEK). CMEK for BigQuery tables provides control over the encryption of data at rest. Encrypting BigQuery tables with CMEK enhances security by giving you full control over encryption keys. This ensures data protection, especially for sensitive data. CMEK allows key rotation and revocation, aligning with compliance requirements and offering better data privacy management. It is recommended to use CMEK for BigQuery table encryption. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure a Customer-managed encryption key (CMEK) for a BigQuery table, use the following command for the "bq" utility:\nbq cp -f --destination_kms_key <CMEK> <DATASET_ID.TABLE_ID> <DATASET_ID.TABLE_ID>\n\nPlease refer to the URL mentioned below for more details on how to change a table from default encryption to CMEK encryption:\nhttps://cloud.google.com/bigquery/docs/customer-managed-encryption#change_to_kms\n\nPlease refer to the URL mentioned below for more details on the bq cp command:\nhttps://cloud.google.com/bigquery/docs/reference/bq-cli-reference#bq_cp.
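A rough equivalent of the bq command using the google-cloud-bigquery Python client is sketched below; the project, dataset, table, and key names are placeholders, not values from the policy.

```python
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my-project.my_dataset.my_table"              # hypothetical table
kms_key = ("projects/my-project/locations/us/keyRings/"
           "my-ring/cryptoKeys/my-key")                   # hypothetical CMEK

# Report whether the table already uses a CMEK
table = client.get_table(table_id)
if table.encryption_configuration is None:
    print("Table uses Google-managed encryption (no CMEK).")

# Re-encrypt in place by copying the table onto itself with a destination CMEK,
# mirroring the `bq cp -f --destination_kms_key` command above.
job_config = bigquery.CopyJobConfig(
    destination_encryption_configuration=bigquery.EncryptionConfiguration(
        kms_key_name=kms_key
    ),
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.copy_table(table_id, table_id, job_config=job_config).result()
```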
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case Ready and sqlEncryptionProtectors[*].kind does not exist```
Azure SQL server Transparent Data Encryption (TDE) encryption disabled This policy identifies SQL servers in which Transparent Data Encryption (TDE) is disabled. TDE performs real-time encryption and decryption of the database, associated backups, and transaction log files without requiring any changes to the application. It is recommended to have TDE encryption enabled on your SQL servers to protect data from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'SQL servers'\n3. Select the reported SQL server instance you want to modify\n4. Select 'Transparent data encryption' under 'Security'\n5. Select 'Select a key'\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(21,21)"```
Alibaba Cloud Security group allow internet traffic to FTP port (21) This policy identifies Security groups that allow inbound traffic on FTP port (21) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 21, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kinesis-list-streams' AND json.rule = 'encryptionType equals NONE or encryptionType does not exist'```
AWS Kinesis streams are not encrypted using Server Side Encryption This policy identifies AWS Kinesis streams that are not encrypted using Server-Side Encryption. Server-Side Encryption encrypts your sensitive data before it is written to the Kinesis stream storage layer and decrypts it after it is retrieved from storage. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to the Kinesis service\n3. Select the reported Kinesis data stream for the corresponding region\n4. Under Server-side encryption, click on Edit\n5. Choose Enabled\n6. Under KMS master key, you can choose any KMS key other than the default aws/kinesis key\n7. Click Save.
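A minimal boto3 sketch, assuming a placeholder stream name and region, that checks the current setting and enables server-side encryption:

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

stream_name = "my-data-stream"  # hypothetical stream name

# Check the current encryption setting on the stream
summary = kinesis.describe_stream_summary(StreamName=stream_name)[
    "StreamDescriptionSummary"
]
print("EncryptionType:", summary.get("EncryptionType", "NONE"))

# Enable server-side encryption; a customer managed KMS key ARN or alias can be
# supplied instead of the AWS managed alias used here.
if summary.get("EncryptionType", "NONE") == "NONE":
    kinesis.start_stream_encryption(
        StreamName=stream_name,
        EncryptionType="KMS",
        KeyId="alias/aws/kinesis",
    )
```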
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and (acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists or acl.grantsAsList[?any(grantee equals AuthenticatedUsers and permission is member of (ReadAcp,Read,FullControl))] exists)) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```
AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable through Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace; if ACLs and the bucket policy are not handled properly, you may be at risk of exposing critical data by leaving the S3 bucket public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If the Access Control List is set to 'Public', follow the below steps\na. Under 'Access Control List', click on 'Everyone' and uncheck all items\nb. Under 'Access Control List', click on 'Authenticated users group' and uncheck all items\nc. Click on Save changes\n6. If the 'Bucket Policy' is set to public, follow the below steps\na. Under 'Bucket Policy', select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wildcard.\nIf a 'Bucket Policy' is not required, delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.
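A minimal boto3 sketch of the same remediation, with a placeholder bucket name; verify beforehand that the bucket is not intentionally serving public content:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-reported-bucket"  # hypothetical bucket name

# Remove public/authenticated-users grants by resetting the ACL to private
s3.put_bucket_acl(Bucket=bucket, ACL="private")

# Block public ACLs and public bucket policies at the bucket level
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```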
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = networkInterfaces[*].association.publicIp exists```
AWS EC2 instance is assigned with public IP This policy identifies the AWS EC2 instance having a public IP address assigned. AWS EC2 instances with public IPs are virtual servers hosted in the Amazon Web Services (AWS) cloud that can be accessed over the internet. Public IPs increase an EC2 instance's attack surface, necessitating robust security configurations to prevent unauthorized access and attacks. It is recommended to use private IPv4 addresses for communication between EC2 instances and disassociate the public IP address from an instance or disable auto-assign public IP addresses in the subnet. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: In a default VPC, instances get a public IP address. In a non-default VPC, the subnet configuration determines this.\n\nYou can't manually change an automatically-assigned public IP. To control public IP assignment:\n\nTo unassign the IP addresses associated with a network interface, follow the instructions here: \n\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#managing-network-interface-ip-addresses\n\nNote: If you specify an existing network interface for eth0 (the primary network interface), you can't change its public IP address settings using the auto-assign public IP feature; the subnet settings will take precedence.\n\nModify the subnet's public IP addressing attribute by following these actions: \n\n https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-public-ip\n\nIf you are using an Elastic IP, the instance is internet-reachable. To disassociate an Elastic IP, follow these actions: \n\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#using-instance-addressing-eips-associating-different.
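A small boto3 sketch, with a placeholder region and subnet ID, that lists running instances holding public IPs and disables auto-assignment of public IPs on a subnet:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List running instances that currently have a public IP associated
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for res in reservations:
    for inst in res["Instances"]:
        if "PublicIpAddress" in inst:
            print(inst["InstanceId"], inst["PublicIpAddress"])

# Disable auto-assignment of public IPs for new instances launched in a subnet
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",   # hypothetical subnet ID
    MapPublicIpOnLaunch={"Value": False},
)
```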
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-service-accounts-keys-list' AND json.rule = 'disabled is false and name contains iam.gserviceaccount.com and (_DateTime.ageInDays($.validAfterTime) > 90) and keyType equals USER_MANAGED'```
GCP User managed service account keys are not rotated for 90 days This policy identifies user-managed service account keys that have not been rotated in the last 90 days or more. Rotating Service Account keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Service Account keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen. It is recommended that all user-managed service account keys are regularly rotated. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: To fix this alert, delete the old key which is older than 90 days or more, and create a new key for that particular service account. To delete a user-managed Service Account Key older than 90 days:\n\n1. Login to GCP Portal\n2. Go to APIs & Services (Left Panel)\n3. Select 'Credentials' and under the section 'Service Accounts', select the service account for which we need to delete the key\n4. On the page 'Service account details' select the tab 'KEYS'\n5. Click on the delete icon for the listed key after confirming the creation date is older than 90 days\n\nTo create a new user-managed Service Account Key for a Service Account:\n1. Login to GCP Portal\n2. Go to APIs & Services (Left Panel)\n3. Select 'Credentials' and under the section 'Service Accounts', select the service account for which we need a key\n4. On the page 'Service account details' select the tab 'KEYS'\n5. Under the 'ADD KEY' dropdown, select 'Create new key'\n6. Select the desired key type format, JSON or P12\n7. Click on the CREATE button. It will download the private key. Keep it safe.\n8. Click on CLOSE if prompted. It will redirect to the APIs & Services Credentials page. Make a note of the new ID displayed in the section Service account keys with the new creation date.\n\nNOTE: Rotating the service account key might break communication for dependent applications. Dependent applications need to be configured manually with the new key ID.
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "sysdig-monitor" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","resourceType","serviceInstance","sysdigTeam"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```
IBM Cloud Service ID with IAM policies provide administrative privileges for Cloud Monitoring Service This policy identifies IBM Cloud Service IDs that have a policy with administrator role permission for the IBM Cloud Monitoring service. If a Service ID that has a policy with admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the reported Service ID whose access you want to edit.\n4. Under the 'Access' tab, go to the 'Access policies' section, and click on the three dots on the right corner of the row for the policy that has Administrator permission on the 'IBM Cloud Monitoring' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = ( ( publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist ) or ( publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false ) or ( publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false ) or ( publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist ) )AND policy.Statement[?any(Effect equals Allow and Action anyStartWith s3: and (Principal.AWS contains * or Principal equals *) and (Condition does not exist or Condition[*] is empty) )] exists```
AWS S3 bucket policy overly permissive to any principal This policy identifies the S3 buckets that have a bucket policy overly permissive to any principal and do not have 'Block public and cross-account access to buckets and objects through any public bucket or access point policies' enabled. It is recommended to follow the principle of least privilege, ensuring that only the required entities have permission to perform S3 operations rather than anonymous access. For more details: https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-bucket-user-policy-specifying-principal-intro.html This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to the S3 dashboard\n3. Choose the reported S3 bucket\n4. In the 'Permissions' tab, click on the 'Bucket Policy'\n5. Update the S3 bucket policy by changing the Principal containing a wildcard (*) to specific accounts, services, or IAM entities. Also restrict S3 actions to the specific operations required instead of using a wildcard (*).\n6. In the 'Permissions' tab, click on 'Block public access' and enable 'Block public and cross-account access to buckets and objects through any public bucket or access point policies'.
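A boto3 sketch, with a placeholder bucket name, that flags overly permissive statements and turns on the public access block while the policy is being rewritten:

```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "my-reported-bucket"  # hypothetical bucket name

# Flag statements that allow any principal without a Condition
policy = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
for stmt in policy.get("Statement", []):
    principal = stmt.get("Principal")
    if (
        stmt.get("Effect") == "Allow"
        and principal in ("*", {"AWS": "*"})
        and "Condition" not in stmt
    ):
        print("Overly permissive statement:", stmt.get("Sid", "<no Sid>"))

# Ensure public bucket policies cannot grant access, pending a policy rewrite
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```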
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-containers-artifacts-kubernetes-cluster-nodepool' AND json.rule = lifecycleState equal ignore case ACTIVE and (nodeConfigDetails.isPvEncryptionInTransitEnabled equal ignore case "null" or nodeConfigDetails.isPvEncryptionInTransitEnabled does not exist)```
OCI Kubernetes Engine Cluster boot volume is not configured with in-transit data encryption This policy identifies Kubernetes Engine Clusters that are not configured with in-transit data encryption. Configuring in-transit encryption on cluster boot volumes encrypts data in transit between the instance, the boot volume, and the block volumes. All the data moving between the instance and the block volume is transferred over an internal and highly secure network. It is recommended that cluster boot volumes be configured with in-transit data encryption to minimize the risk of sensitive data being leaked. For more details: https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/overview.htm#BlockVolumeEncryption This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Developer Services -> Kubernetes Clusters (OKE)\n3. Click on the Kubernetes cluster you want to modify\n4. Click on 'Node pools'\n5. Click on the reported node pool\n6. On the 'Node pool details' page, click on the 'Edit' button.\n7. On the 'Edit node pool' dialog, under the 'Boot volume' section, select the 'Use in-transit encryption' option\n8. Click on the 'Save Changes' button.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-global-forwarding-rule' AND json.rule = globalForwardingRules[?any( target contains "/targetHttpProxies/" and loadBalancingScheme contains "EXTERNAL" )] exists```
GCP public-facing (external) global load balancer using HTTP protocol This policy identifies GCP public-facing (external) global load balancers that are using HTTP protocol. Using the HTTP protocol with a GCP external load balancer transmits data in plaintext, making it vulnerable to eavesdropping, interception, and modification by malicious actors. This lack of encryption exposes sensitive information, increases the risk of man-in-the-middle attacks, and compromises the overall security and privacy of the data exchanged between clients and servers. It is recommended to use HTTPS protocol with external-facing load balancers. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to 'Network Service' and then 'Load Balancing'\n3. Click on the 'FRONTENDS' tab\n4. Identify the frontend that is using the reported forwarding rule.\n5. Click on the load balancer name associated with the frontend identified above\n6. Click 'Edit'\n7. Go to 'Frontend configuration'\n8. Delete the frontend rule that allows HTTP protocol.\n9. Add new frontend rule(s) as required. Make sure to use HTTPS protocol instead of HTTP for new rules.\n10. Click 'Update'\n11. Click 'UPDATE LOAD BALANCER' in the pop-up..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```
dnd_test_create_hyperion_policy_ss_update_child_policy_finding_2 Description-30540d9e-e2ce-4d22-a7df-a5b42c08f155 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = arn contains "E2PTZRGF0OBZQJ" and tags[*].key contains "test"```
eai_test_policy_demo EAI Demo policy This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' and json.rule = keys[?any(attributes.exp equals -1 and attributes.enabled contains true)] exists and properties.enableRbacAuthorization is false```
Azure Key Vault Key has no expiration date (Non-RBAC Key vault) This policy identifies Azure Key Vault keys that do not have an expiration date for the Non-RBAC Key vaults. As a best practice, set an expiration date for each key and rotate your keys regularly. Before you activate this policy, ensure that you have added the Prisma Cloud Service Principal to each Key Vault: https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin/connect-your-cloud-platform-to-prisma-cloud/onboard-your-azure-account/azure-onboarding-checklist Alternatively, run the following command on the Azure cloud shell: az keyvault list | jq '.[].name' | xargs -I {} az keyvault set-policy --name {} --certificate-permissions list listissuers --key-permissions list --secret-permissions list --spn <prismacloud_app_id> This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Key vaults'\n3. Select the Key vault where the key is stored\n4. Select 'Keys', and select the key that you need to modify\n5. Select the current version\n6. Set the expiration date\n7. 'Save' your changes.
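A hedged sketch using the azure-identity and azure-keyvault-keys Python packages; the vault URL is a placeholder and the one-year expiry is chosen arbitrarily for illustration:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

vault_url = "https://my-keyvault.vault.azure.net"  # hypothetical vault URL
client = KeyClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Give every enabled key that has no expiration date a one-year expiry
expiry = datetime.now(timezone.utc) + timedelta(days=365)
for props in client.list_properties_of_keys():
    if props.enabled and props.expires_on is None:
        client.update_key_properties(props.name, expires_on=expiry)
        print("Set expiration on key:", props.name)
```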
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-database-maria-db-server' AND json.rule = properties.userVisibleState equals Ready and properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```
Azure MariaDB database server not using latest TLS version This policy identifies Azure MariaDB database servers that are not using the latest TLS version for SSL enforcement. Azure Database for MariaDB uses Transport Layer Security (TLS) for communication with client applications. As a security best practice, use the newest TLS version as the minimum TLS version for the MariaDB database server. Currently, Azure MariaDB supports TLS 1.2, which resolves the security gaps in its preceding versions. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure an SSL connection with the latest TLS version on an existing Azure Database for MariaDB, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/mariadb/howto-tls-configurations\n\nNOTE: Once you enforce a minimum TLS version, you cannot later disable minimum version enforcement.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-batch-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and identity does not exist or identity.type equal ignore case "None"```
Azure Batch account is not configured with managed identity This policy identifies Batch accounts that are not configured with managed identity. Managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to configure a managed identity on your Batch account. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Batch accounts'\n3. Click on the reported Batch account\n4. Select 'Identity' under 'Settings' from left panel \n5. Configure either 'System assigned' or 'User assigned' identity\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(5432,5432) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
GCP Firewall rule allows all traffic on PostgreSQL port (5432) This policy identifies GCP Firewall rules which allow all inbound traffic on PostgreSQL port (5432). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access to the PostgreSQL port (5432) be restricted to specific IP addresses. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the reported firewall rule should indeed restrict traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify 'Source IP ranges' to specific IP addresses\n7. Click on 'SAVE'.
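A read-only sketch using the google-cloud-compute Python client, with a placeholder project ID, that flags world-open ingress rules covering port 5432; this only reports rules and does not modify them:

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID
firewalls = compute_v1.FirewallsClient()


def covers_5432(ports):
    """True if the allowed entry covers port 5432 (empty list means all ports)."""
    if not ports:
        return True
    for p in ports:
        low, _, high = p.partition("-")
        if int(low) <= 5432 <= int(high or low):
            return True
    return False


# Flag enabled ingress rules open to the world that cover port 5432
for rule in firewalls.list(project=project):
    if rule.disabled or rule.direction != "INGRESS":
        continue
    if "0.0.0.0/0" not in rule.source_ranges and "::/0" not in rule.source_ranges:
        continue
    if any(covers_5432(list(allowed.ports)) for allowed in rule.allowed):
        print("Review firewall rule:", rule.name)
```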
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'hardExpiry is false'```
Alibaba Cloud RAM password policy configured to allow login after the password expires This policy identifies Alibaba Cloud accounts that are configured to allow login after the password has expired. As a best practice, denying login after the password expires allows you to ensure that RAM users reset their password before they can access the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Action After Password Expires' field, select 'Deny Logon' radio button\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'transitEncryptionEnabled is false or transitEncryptionEnabled does not exist'```
AWS ElastiCache Redis cluster with in-transit encryption disabled (Replication group) This policy identifies ElastiCache Redis clusters that are replication groups and have in-transit encryption disabled. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and cache servers. Enabling data encryption in-transit helps prevent unauthorized users from reading sensitive data between your Redis clusters and their associated cache storage systems. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS ElastiCache Redis cluster in-transit encryption can be set, only at the time of creation of the cluster. So to resolve this alert, create a new cluster with in-transit encryption enabled, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with In-transit encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption in-transit' checkbox to enable encryption along with other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'..
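A boto3 sketch, with placeholder names, node type, and region, that reports replication groups without in-transit encryption and creates a TLS-enabled replacement group; data migration and deletion of the old cluster still follow the steps above:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Report replication groups without in-transit encryption
groups = elasticache.describe_replication_groups()["ReplicationGroups"]
for group in groups:
    if not group.get("TransitEncryptionEnabled", False):
        print("In-transit encryption disabled:", group["ReplicationGroupId"])

# Create a replacement replication group with in-transit encryption enabled
elasticache.create_replication_group(
    ReplicationGroupId="my-redis-encrypted",            # hypothetical name
    ReplicationGroupDescription="Redis with TLS in transit",
    Engine="redis",
    CacheNodeType="cache.t3.micro",                     # hypothetical node type
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,
    AtRestEncryptionEnabled=True,
)
```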
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = ingressSecurityRules[*] size equals 0```
OCI VCN has no inbound security list This policy identifies the OCI Virtual Cloud Networks (VCN) that lack ingress rules configured in their security lists. It is recommended that Virtual Cloud Networks (VCN) security lists are configured with ingress rules which provide stateful and stateless firewall capability to control network access to your instances. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Ingress rules\n5. Click on Add Ingress Rules (To add ingress rules appropriately in the pop up)\n6. Click on Add Ingress Rules.
```config from cloud.resource where api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = status equals available and atRestEncryptionEnabled is true as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '($.X.kmsKeyId does not exist) or ($.X.kmsKeyId exists and $.Y.keyMetadata.keyState equals Disabled) and $.X.kmsKeyId equals $.Y.keyMetadata.arn'; show X;```
AWS ElastiCache Redis cluster encryption not configured with CMK key This policy identifies ElastiCache Redis clusters that are encrypted using the default KMS key instead of a Customer Managed Key (CMK), or where the CMK used for encryption is disabled. As a security best practice, an enabled CMK should be used instead of the default KMS key for encryption to gain the ability to rotate the key according to your own policies, delete the key, and control access to the key via KMS policies and IAM policies. For details: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html#using-customer-managed-keys-for-elasticache-security This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To encrypt your ElastiCache Redis cluster with a CMK, follow the below-mentioned URL:\nhttps://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/at-rest-encryption.html#at-reset-encryption-enable-existing-cluster.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-vnet-list' AND json.rule = ['properties.virtualNetworkPeerings'][*].['properties.peeringState'] equals "Disconnected"```
Azure virtual network peer is disconnected Virtual network peering enables you to connect two Azure virtual networks so that the resources in these networks are directly connected. This policy identifies Azure virtual network peers that are disconnected. Typically, the disconnection happens when a peering configuration is deleted on one virtual network, and the other virtual network reports the peering status as disconnected. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To reconnect the virtual network peers, you need to delete the 'Disconnected' peering connection and re-configure the peering connection.\n\nTo re-configure the peering connection:\n1. Log in to the Azure Portal.\n2. Select 'Virtual Networks', and select the virtual network on which has 'Disconnected' peering.\n3. Select 'Peerings'.\n4. Delete the peering with 'Disconnected' status.\n5. Select 'Add' to re-initiate the peering configuration.\n6. Specify the 'Name' and target 'Virtual Network'.\n7. Select 'OK'\n8. Verify that peering state is 'Initiated'.\n9. Repeat step 5-7 on the target/other vnet.\n10. Verify that the peering state is 'Connected'.
```config from cloud.resource where api.name = 'aws-docdb-db-cluster' AND json.rule = Status equals "available" as X; config from cloud.resource where api.name = 'aws-docdb-db-cluster-parameter-group' AND json.rule = parameters.audit_logs.ParameterValue is member of ( 'disabled','none') as Y; filter '($.X.EnabledCloudwatchLogsExports.member does not contain "audit") or $.X.DBClusterParameterGroup equals $.Y.DBClusterParameterGroupName' ; show X;```
AWS DocumentDB cluster does not publish audit logs to CloudWatch Logs This policy identifies the Amazon DocumentDB cluster where audit logging is disabled or audit logs are not published to Amazon CloudWatch Logs. DocumentDB integrates with Amazon CloudWatch for performance metric gathering and analysis, supporting CloudWatch Alarms. While DocumentDB provides customizable audit logs for monitoring database operations, these logs are not automatically sent to CloudWatch Logs, limiting centralized monitoring and analysis of database activities. It is recommended to configure the DocumentDB cluster to enable audit logs and publish audit logs to CloudWatch logs. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a custom parameter group if the cluster has only the default parameter group use the following steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. Choose 'Create'. The 'Create cluster parameter group' window appears. \n4. In the 'New cluster parameter group name', enter the name of the new DB cluster parameter group. \n5. In the 'Family' list, select a 'DB parameter group family'. \n6. In the Description box, enter a description for the new DB cluster parameter group. \n7. Click 'Create'. \n\nTo modify the custom DB cluster parameter group to enable query logging, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. In the list, choose the above-created parameter group or the reported resource custom parameter group that you want to modify. \n4. Click on the 'audit_logs' parameter and click 'Edit'. \n5. Change the value of the 'audit_logs' parameter to any value (ddl,dml_read,dml_write, all) other than 'disabled' or 'none' you want to modify according to your requirements. \n6. Choose 'Apply immediately' to apply the changes immediately or 'Apply during the next scheduled maintenance window' according to your requirements. \n7. Choose 'Modify cluster parameter' to modify the values. \n\nTo modify an AWS DocumentDB cluster to use the custom parameter group, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster. \n4. Scroll down to 'Cluster options', select the above-created cluster parameter group from the DB parameter group dropdown. \n5. Choose 'Continue' and check the summary of modifications. \n6. Choose 'Apply immediately' to apply the changes immediately or 'Apply during the next scheduled maintenance window' according to your requirements. \n7. On the confirmation page, review your changes. If they are correct, choose 'Modify cluster' to save your changes. \n\nWhen the value of the audit_logs cluster parameter is enabled, ddl, dml_read, or dml_write, you must also enable Amazon DocumentDB to export logs to Amazon CloudWatch. If you omit either of these steps, audit logs will not be sent to CloudWatch. 
\n\nTo modify an Amazon DocumentDB cluster for enabling export logs to cloudwatch, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon DocumentDB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster.\n4. Scroll down to the Log exports section, and choose 'Enable' for the 'Audit logs'.\n5. Choose 'Continue'.\n6. Choose 'Apply immediately' to apply the changes immediately or 'Apply during the next scheduled maintenance window' according to your requirements.\n7. Choose 'Modify cluster'..
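A minimal boto3 sketch, with a placeholder cluster identifier and region, that enables the audit log export on a DocumentDB cluster; the audit_logs cluster parameter must still be set to a value other than 'disabled'/'none' as described above:

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

cluster_id = "my-docdb-cluster"  # hypothetical cluster identifier

# Check whether audit logs are currently exported to CloudWatch Logs
cluster = docdb.describe_db_clusters(DBClusterIdentifier=cluster_id)["DBClusters"][0]
exports = cluster.get("EnabledCloudwatchLogsExports", [])
print("Currently exported log types:", exports)

# Enable export of the audit log if it is missing
if "audit" not in exports:
    docdb.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
        ApplyImmediately=True,
    )
```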
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = status equals "ISSUED" and keyAlgorithm starts with "RSA-" and keyAlgorithm equals RSA-1024```
AWS Certificate Manager (ACM) RSA certificate key length less than 2048 This policy identifies the RSA certificates managed by AWS Certificate Manager with a key length of less than 2048 bits. AWS Certificate Manager (ACM) is a service for managing SSL/TLS certificates. RSA certificates are cryptographic keys used for securing communications over networks. Shorter key lengths may be susceptible to attacks such as brute force or factorization, where an attacker could potentially decrypt the encrypted data by finding the prime factors of the key. It is recommended that the RSA certificates imported on ACM utilise a minimum key length of 2048 bits or greater to ensure a sufficient level of security. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: You can't change the key length after importing a certificate. Instead, you must delete certificates with a key length smaller than 2,048 bits, and then the new RSA certificate should be imported with the desired key length.\n\nTo import the new certificate, Please refer to the below url\nhttps://docs.aws.amazon.com/acm/latest/userguide/import-certificate-api-cli.html#import-certificate-api\n\nTo delete the reported ACM RSA certificate, Please refer to the below url\n\nhttps://docs.aws.amazon.com/acm/latest/userguide/gs-acm-delete.html.
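A boto3 sketch that lists issued certificates with 1024-bit RSA keys so they can be replaced and deleted per the steps above; the region is a placeholder:

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# List issued certificates whose RSA key length is below 2048 bits
paginator = acm.get_paginator("list_certificates")
pages = paginator.paginate(
    CertificateStatuses=["ISSUED"],
    Includes={"keyTypes": ["RSA_1024"]},
)
for page in pages:
    for summary in page["CertificateSummaryList"]:
        print("Weak RSA key:", summary["CertificateArn"], summary["DomainName"])
```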
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(22,22) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
PCSUP-22411 - policy This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.