Overview

Written by Sergey Bayrachny

Uncoder AI is a detection engineering AI copilot and IDE that supercharges the everyday work of detection engineers and threat hunters. It combines multiple capabilities in one convenient and smart UI, so you can create and convert detection content of different types without switching between tools.

Modes


Uncoder AI has three modes, each focused on a different set of tasks. You can switch between modes at any time without losing your work in progress:

Generate

Use our native Uncoder engine and AI to perform the following tasks:

Translate

Use our native translation engine and AI to perform the following tasks:

  • Translate across 21 native platform languages

  • Translate from Sigma into 48 native platform languages
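For context, Sigma is a generic, YAML-based detection rule format. A minimal, illustrative Sigma rule that could serve as translation input might look like the following (the rule itself is a generic example, not content from the SOC Prime Platform):

```yaml
title: Suspicious PowerShell Encoded Command
status: experimental
description: Detects PowerShell launched with an encoded command line
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
level: medium
```

Translating a rule like this produces an equivalent query in the target platform's native language, such as a Splunk SPL search or a Microsoft Sentinel KQL query.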

Improve

Use our native Uncoder engine and AI to perform the following tasks:

More Features


In addition, Uncoder AI supports the full use case management life cycle:

Use of AI


  • For AI tasks, we use a customized Llama 3.3 70B model, with no token charges for our users

  • Input size can be up to 5,000 tokens

  • Rate limits via UI:

    • Users under a Free plan can run only one AI task at a time, with the request processed in the general queue

    • Users under a Solo plan can run up to 5 AI tasks at a time, with prioritized request processing that reduces waiting time

    • For users with Enterprise-level plans, the number of simultaneous AI tasks and request priority depend on the agreed terms

Privacy and Security


Uncoder AI is dedicated to providing users with a secure and private environment. By hosting our LLM locally, avoiding interactions with third-party services, and implementing robust security measures like the LLM Firewall, we safeguard user data and ensure the highest level of privacy and security.

  • Content Generation, Translation, and Processing: All content-related operations are conducted within our secure Platform infrastructure. This infrastructure is split into separate microservices communicating via REST API over HTTPS, ensuring encrypted data transmission.

  • No Third-Party Interactions: Uncoder AI does not rely on third-party services for its core functionalities, minimizing the risk of data breaches and ensuring that user data remains within our controlled environment.

  • Secure Hosting: Our local LLM is hosted within the SOC Prime infrastructure, which is designed with security and privacy in mind.

  • LLM Firewall: To protect against potential malicious activities, the LLM Firewall analyzes all inputs (prompts) and outputs (responses) from the LLM. This proactive measure ensures the security and integrity of the AI-driven processes.

  • No User Data for LLM Training: We emphasize that no user data is utilized for our LLM training. This policy is fundamental to our commitment to user privacy and ensures that user information is not exploited for any purpose beyond the intended use within our platform.

  • Compliance and Standards: Our data processing and security practices are aligned with international standards and regulations, such as NIST-AI-600-1, the NIST AI Risk Management Framework (AI RMF 1.0), and MITRE ATLAS™, ensuring that we meet and exceed data privacy and security expectations.

API


Under certain subscription plans, API access to Uncoder AI is provided. Learn more about Uncoder AI endpoints and their parameters here.

Support and Troubleshooting


Join our Discord community to:

  • Receive community support

  • Share your experience of using Uncoder AI

  • Ask the broader community for expert advice

  • Report bugs and suggest improvements

If you have any questions, encounter an issue, or need help, reach out to us at support@socprime.com or in the live chat on the SOC Prime Platform.
