SOC Prime Platform Product Release Notes 5.13.2

Written by Sergey Bayrachny

September 4, 2024

© 2024 SOC Prime Inc.

All rights reserved. This product and related documentation are protected by copyright and distributed under licenses restricting their use, copying, distribution, and decompilation. No part of this product or related documentation may be reproduced in any form or by any means without the prior written authorization of SOC Prime. While every precaution has been taken in the preparation of this document, SOC Prime assumes no responsibility for errors or omissions. This publication and the features described herein are subject to change without notice.

Support for CEF


We've added the CEF data schema:

  • In Threat Detection Marketplace as an alternative translation option for the following platforms:

    • Elastic Stack SavedSearch

    • Elastic Stack Watcher

    • Elastic Stack Query (EQL)

    • Elastic Stack Query (Lucene)

    • Elastic Stack Detection Rule (EQL)

    • Elastic Stack Detection Rule (Lucene)

    • ElastAlert Alert

  • In Uncoder AI as an option for translating Sigma rules into the following target platforms:

    • Elastic Stack Kibana SavedSearch (JSON)

    • Elastic Stack Kibana SavedSearch (NDJSON)

    • Elastic Stack Rule (Watcher)

    • Elastic Stack Query (DSL)

    • Elastic Stack Query (EQL)

    • Elastic Stack Query (Lucene)

    • Elastic Stack Detection Rule (EQL)

    • Elastic Stack Detection Rule (Lucene)

    • ElastAlert Alert (Lucene)

    • ElastAlert Alert (DSL)

    • LogRhythm LR7 Query (Lucene)

    • NVISO EE-Outliers Query

    • AWS OpenSearch Rule (JSON)
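For reference, a CEF event is a pipe-delimited header followed by space-separated key=value extensions. A minimal Python sketch of parsing one (illustrative only; this is unrelated to the platform's internal CEF mapping):

```python
import re

# Header order per the CEF specification:
# CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|[Extension]
HEADER_KEYS = ["version", "device_vendor", "device_product",
               "device_version", "signature_id", "name", "severity"]

def parse_cef(line: str) -> dict:
    _, _, rest = line.partition("CEF:")
    fields = rest.split("|", 7)
    event = dict(zip(HEADER_KEYS, fields))
    # Extension values may contain spaces, so match each value
    # up to the next "key=" token instead of splitting on whitespace
    ext = fields[7] if len(fields) > 7 else ""
    event["extensions"] = dict(re.findall(r"(\w+)=(.*?)(?=\s+\w+=|$)", ext))
    return event
```

For example, `parse_cef("CEF:0|SOC|Demo|1.0|100|Port scan|5|src=10.0.0.1 msg=hello world")` yields the seven header fields plus an `extensions` dict with `src` and `msg`.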

Threat Detection Marketplace


Azure DevOps Integration

We've added integration with Azure DevOps so that users can push detection content to their Azure DevOps repository instead of deploying it directly into their SIEM, integrating SOC Prime Continuous Content Delivery into their CI/CD flow.

You can push content to your repository via SOC Prime Automation capabilities by creating a Job to deploy selected content to your repository.

For an Azure DevOps Integration to become available in the Data Plane field within the Job settings, first select the Platform and Content Types that match the values configured in the Content Platform field during the Integration setup.

The Azure DevOps integration supports the following content formats:

  • Microsoft Sentinel Rule

  • Microsoft Sentinel Query

  • Elastic Detection Rule

  • Elastic Watcher

  • Elastic Saved Search

  • Chronicle Security Rule

  • Falcon LogScale Alert

  • Splunk Alert

  • Sumo Logic Query

  • LimaCharlie

To configure integration, follow these steps:

  1. Click Add Integration on the Account > Platform Settings > Integrations page.

  2. Name your profile, select Azure DevOps as your platform, and choose if you want to share the profile with your teammates. A shared Integration will be available to use, view, and edit for all users in your organization.

  3. Ensure that the checkbox next to Automation and direct deployment from a Sigma rule page is set.

  4. Fill in the fields in the Configuration section:

    • Repository: Provide the path to your repository that includes the name of your organization and the name of your repository

    • Personal Access Token: Provide your personal access token; see the Azure DevOps documentation for how to create one. Note: full access to Code is required, while all other permissions can be set to Read

    • Source Branch: The name of the branch to pull content from (default: main)

    • New Branch: The name of the branch to push content to. Leave this field empty to commit directly to the source branch

    • Content Platform: Content formats you're going to work with in Automation

  5. Set the Show Advanced checkbox if you want to make optional advanced settings:

    • Assignee: The name of the Azure DevOps user pull requests are assigned to (default: SOCPrime)

    • Tag: Add an Azure DevOps tag that will be attached to pull requests

    • Auto Merge: Choose whether you want to merge pull requests automatically. If this is not selected, you will manually manage merging the SOC Prime content into your repository

    • Auto Delete Branch: Choose whether you want to automatically delete the branch after the pull request is merged (when Auto Merge is enabled)

    • Commit Message Template: Provide a template for a commit message

    • Path to Upload: Provide the path to the folder the content should be uploaded to. If no value is entered, the root folder indicated in the New Branch field is used

    • Download Path: Provide the path to the folder the content should be downloaded from. If no value is entered, the root folder indicated in the Source Branch field is used

    • File Formats: Choose file formats of the content you're going to push to your repository

  6. Click Save Changes.
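Conceptually, pushing a file to a repository maps onto Azure DevOps's public Git Pushes REST API. A sketch of building such a request body is shown below; the endpoint and JSON shape come from the public Azure DevOps API, and this is an illustration, not SOC Prime's actual implementation:

```python
def build_push_payload(branch: str, old_object_id: str,
                       path: str, content: str, message: str) -> dict:
    """Body for POST .../_apis/git/repositories/{repo}/pushes (api-version=7.0)."""
    return {
        "refUpdates": [{
            "name": f"refs/heads/{branch}",
            # Tip commit of the branch; all zeros when creating a new branch
            "oldObjectId": old_object_id,
        }],
        "commits": [{
            "comment": message,
            "changes": [{
                "changeType": "add",  # use "edit" to update an existing file
                "item": {"path": path},
                "newContent": {"content": content, "contentType": "rawtext"},
            }],
        }],
    }
```

The payload would then be sent with the personal access token configured above as the Basic auth credential.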

MITRE ATT&CK and Log Source Coverages Updated

We've updated the MITRE ATT&CK Coverage and Log Source Coverage pages by adding the possibility to filter the data by Tenant and Data Plane. The filters only narrow down the statistics for the deployed content.

How to use the new filters:

  1. Do not apply the Tenant and Data Plane filters if you want to view all the data without any filtering. This is the default state.

  2. Select a Tenant:

    • If a Tenant is selected, the Data Plane dropdown contains only those Data Planes that belong to the selected Tenant

    • The None option corresponds to Data Planes not added to any Tenant

    • The All option means that data for all Data Planes added to a Tenant is displayed

    • The Not Associated option displays the deployment statistics only for the content where no Data Plane was selected when marking it as deployed

  3. Select a Data Plane

    • The All option displays the deployment statistics for all Data Planes in the selected Tenant (excluding deployed content not related to any Data Plane)

    • If you've selected Not Associated in the Tenants dropdown, the Data Planes dropdown automatically defaults to the Not Associated value
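One way to model the dropdown semantics above: each deployment record carries an optional Tenant and Data Plane, and the two filters narrow the set as follows. This is an illustrative reading, not the platform's actual query logic:

```python
def filter_deployments(deployments, tenant=None, data_plane=None):
    """deployments: [{"tenant": str | None, "data_plane": str | None}, ...]
    A None argument means the filter is not applied (the default state).
    The strings "All", "None", and "Not Associated" model the dropdown options."""
    if tenant == "Not Associated" or data_plane == "Not Associated":
        # Content marked as deployed without any Data Plane selected
        return [d for d in deployments if d["data_plane"] is None]
    if tenant == "None":  # Data Planes not added to any Tenant
        deployments = [d for d in deployments if d["tenant"] is None]
    elif tenant not in (None, "All"):
        deployments = [d for d in deployments if d["tenant"] == tenant]
    if data_plane == "All":  # excludes content with no Data Plane
        deployments = [d for d in deployments if d["data_plane"] is not None]
    elif data_plane is not None:
        deployments = [d for d in deployments if d["data_plane"] == data_plane]
    return deployments
```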

Support for Elastic Stack ES|QL

We've added support for Elastic Stack ES|QL Queries and Rules in both Threat Detection Marketplace and Uncoder AI.

Extended Intelligence

We've extended the False Positives section of a rule's intelligence with false positives information from the Sigma rule (if any), displayed in the SPECIFIC (Sigma Field Based) subsection.

Presets for Microsoft Sentinel Updated

We've added a new parameter to the Preset for Microsoft Sentinel rules: Create incidents from alerts triggered by this analytics rule. This further expands the rule customization capabilities of the SOC Prime Platform. The parameter is implemented as a switch that is enabled by default.

Uncoder AI


Customizations for Reverse Translations

We've added the possibility to apply the following customizations to the reverse translations:

  • Custom Field Mapping

  • Presets

  • Filters

This ensures that reverse-translated content can also be tailored to the user's infrastructure and workflows.
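As an illustration of what Custom Field Mapping means in practice, here is a naive sketch that rewrites field names in a translated query. Real mappings in Uncoder AI are platform-aware, not plain text substitution; the field names below are invented for the example:

```python
import re

def apply_field_mapping(query: str, mapping: dict) -> str:
    """Replace each field name in `mapping` with its custom counterpart."""
    # \b word boundaries keep e.g. "user" from matching inside "username"
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], query)
```

For instance, mapping `process.name` to a custom `proc_path` field rewrites every occurrence of it in the query while leaving values untouched.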

Support for New Formats

We've added support for new formats of already supported platforms:

  • Elastic Stack Detection Rule in TOML

  • Splunk Alert in YML

  • Microsoft Sentinel Rule in YML

Now, you can translate content in these formats into all supported languages for cross-platform translations.

Notes

  • The new formats are supported only as the source for translations

  • The new formats are not supported in Threat Detection Marketplace and thus cannot be saved to a custom repository

Attack Detective


Content Audit

We've released the fourth use case, Content Audit. It's intended to improve threat visibility by auto-mapping the user's rules and queries to MITRE ATT&CK with AI while keeping the detection code private.

The general flow is as follows:

  1. Detection content deployed in your SIEM is downloaded to the selected Repository.

  2. Our private AI model analyzes the downloaded content without leaking any data. The AI suggests MITRE ATT&CK techniques potentially covered by the content.

  3. The user reviews the audit results.

  4. If needed, the user updates the MITRE ATT&CK techniques the detection content is mapped to.

To run a Content Audit, follow these steps:

  1. To start a Content Audit, select the corresponding item on the homepage or in the header navigation of Attack Detective.

  2. Set up the audit:

    1. Give the audit a name. You can keep the default name formed after the following pattern: Audit {date and time}.

    2. Select the repository to save content to.

      Note that all content in the Repository will be analyzed. We recommend choosing an empty Repository because otherwise the Content Audit results will be affected by the pre-existing content.

    3. Select the Data Plane from which the content will be pulled to the chosen Repository.

      Notes:

      • Content Audit is currently supported for the following platforms: Microsoft Sentinel, Elastic, Sumo Logic, Splunk

      • Only one Data Plane can be added to a Content Audit.

      • If Tenants are enabled for your organization, first select the Tenant to which the desired Data Plane belongs.

      • If you haven't configured a Data Plane for your SIEM, first configure it and then start the Content Audit use case again.

    4. Ensure your Data Plane connection works:

      • Connected – the Data Plane connection is OK, you can continue.

      • Disconnected – the connection is not operational. Check the error message to fix the issue and try again.

    5. Click Next.

  3. Wait until the Content Audit is finished.

  4. Review the results of the audit:

Each rule from the repository is listed in the audit results and mapped to MITRE ATT&CK techniques. The techniques are displayed as tags:

  • Gray dot – techniques in the rule metadata

  • Blue dot – techniques predicted by AI and not present in the rule metadata

  • Green dot – techniques predicted by AI and present in the rule metadata

To see all the tags, click the number with the plus icon on the right.
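The legend above amounts to a simple set comparison between the techniques in the rule metadata and those predicted by AI, sketched here for clarity:

```python
def classify_technique(technique: str, in_metadata: set, predicted_by_ai: set) -> str:
    """Dot color per the tag legend (techniques outside both sets aren't shown)."""
    if technique in in_metadata and technique in predicted_by_ai:
        return "green"  # predicted by AI and already in the metadata
    if technique in predicted_by_ai:
        return "blue"   # predicted by AI, missing from the metadata
    return "gray"       # present only in the rule metadata
```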

The spider chart on the left shows:

  • Content Coverage – the percentage of techniques in each tactic addressed by your content according to content metadata

  • Assessed by AI – the percentage of techniques in each tactic potentially addressed by your content as assessed by our AI engine
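Both percentages reduce to the same calculation over different technique sets (metadata-declared vs. AI-assessed); a minimal sketch:

```python
def coverage_percent(tactic_techniques: set, covered: set) -> float:
    """Share of a tactic's techniques that appear in the covered set."""
    if not tactic_techniques:
        return 0.0
    return 100 * len(tactic_techniques & covered) / len(tactic_techniques)
```

Calling it once with the metadata techniques gives Content Coverage, and once with the AI-predicted techniques gives the Assessed by AI value for that tactic.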

If the techniques predicted by AI are relevant, add them to the metadata of the rule:

  1. Open the rule's page in Threat Detection Marketplace by clicking the Open icon

  2. Go to the Code tab, ensure the right platform is selected, and click Open in Uncoder AI.

  3. Update the techniques and save the updated rule.

You can review the audit results right away once they're ready or later by navigating to the Audits page and choosing the relevant audit.

Search, Sorting, and Filtering

We've added the search, sorting, and filtering capabilities to the Scan Results page:

  • Search. Find specific queries by name, author, technique, actor, tool, or custom tag

  • Filters. You can filter the queries with hits by:

    • Log Sources

    • Indexes

    • Severity

    • Action Loop mark

    • Authors

  • Sorting. Available options include:

    • Default

    • Hit counts

    • Affected accounts

    • Affected assets

    • Severity

Key Bug Fixes & Improvements


  • Resolved the issue where emails from the SOC Prime Platform under certain conditions could be sent with a delay

  • Implemented minor style and layout improvements on multiple pages

  • Improved styles of some UI elements

  • Fixed a bug where under certain conditions a blank page was displayed for Uncoder AI
