Distributional is designed with enterprise security needs in mind. Our goal has always been to create a testing platform that is secure and compliant—one that protects user privacy and adheres to regulatory requirements without compromise—while still enabling teams to gain insights from their data.
To achieve this, we’ve architected our product to be private and secure by design. Several aspects of our product make it possible for us to work with the most security-conscious teams in the world. Here are a few of the key elements.
Distributional’s platform can be seamlessly deployed within a customer’s environment—whether on-premises or within a VPC—ensuring they maintain full control over data access and security. For teams that prefer even greater autonomy, Distributional also offers a lightweight version of the platform that can be run locally.
To further ensure that sensitive information remains under a customer’s control, Distributional enforces a strict no call-home policy. Once installed in a customer’s environment, the platform operates entirely within their secure infrastructure; no data is sent or received outside of their system, aside from fully configurable automated alerting.
Distributional is deployed with a namespace architecture, which provides secure and flexible access management. Namespaces serve as isolated partitions within the platform, allowing administrators to control who can access the specific Distributional projects aligned with individual AI apps and any relevant data.
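As a rough illustration of the namespace idea, here is a minimal sketch, not Distributional’s actual implementation: each namespace isolates the projects (and data) for one AI app, and access is checked at the namespace boundary. All class, project, and user names are hypothetical.

```python
class Namespace:
    """An isolated partition holding the projects for one AI app."""

    def __init__(self, name):
        self.name = name
        self.members = set()   # user ids granted access by an administrator
        self.projects = {}     # project name -> project data

    def grant(self, user_id):
        self.members.add(user_id)

    def get_project(self, user_id, project_name):
        # Access is enforced at the namespace boundary, so a user with
        # rights in one namespace learns nothing about the others.
        if user_id not in self.members:
            raise PermissionError(f"{user_id} has no access to {self.name}")
        return self.projects[project_name]


# Usage: two apps isolated in separate namespaces.
fraud = Namespace("fraud-detection")
fraud.projects["v1-eval"] = {"tests": 42}
fraud.grant("alice")

support = Namespace("support-bot")
support.grant("bob")

print(fraud.get_project("alice", "v1-eval"))  # alice is a member
# fraud.get_project("bob", "v1-eval") would raise PermissionError
```

The design choice to scope access checks to the namespace, rather than to individual objects, is what lets administrators align permissions with individual AI apps.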
Distributional uses LLM-as-Judge to evaluate whether AI outputs meet the intended task. Teams can connect their own securely hosted model or use Distributional’s default. This approach ensures that even judgment and evaluation maintain the same high standard of privacy and security as core applications. Data never leaves the secure environment, and all models operate under existing security protocols.
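A minimal sketch of the LLM-as-Judge pattern, assuming the judge model is exposed as a plain callable: the callable could be backed by a customer’s securely hosted model or a default one, and nothing in the evaluation loop requires data to leave the environment. The function names and prompt format here are illustrative, not Distributional’s API.

```python
def judge_output(call_model, task, output):
    """Ask the judge model whether `output` fulfils `task`; return a bool."""
    prompt = (
        "You are an evaluator. Answer PASS or FAIL.\n"
        f"Task: {task}\nOutput: {output}\n"
    )
    verdict = call_model(prompt)  # the call stays inside the secure network
    return verdict.strip().upper().startswith("PASS")


# Usage with a stub standing in for a self-hosted judge model endpoint.
def stub_judge(prompt):
    return "PASS" if "Output: 4" in prompt else "FAIL"

print(judge_output(stub_judge, "Compute 2 + 2", "4"))  # True
print(judge_output(stub_judge, "Compute 2 + 2", "5"))  # False
```

Because the judge is just a callable, swapping between a customer-hosted model and a default one is a configuration change, not a code change.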
Since most customers already maintain their AI production logs in a centralized, secure location, Distributional’s platform is designed to integrate directly with these data stores and automatically fetch the AI production logs. Only the data explicitly approved will be accessed or processed by Distributional. Customers retain full control over which logs leave their storage environment, ensuring that sensitive information remains secure and compliant with their data protection policies.
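The allow-list behavior described above can be sketched as follows, assuming log records are simple dictionaries and the approved field names are customer-configured; the field names and record shape are hypothetical.

```python
APPROVED_FIELDS = {"timestamp", "model", "latency_ms"}  # customer-configured

def fetch_approved(records, approved=APPROVED_FIELDS):
    """Project each log record down to the explicitly approved fields."""
    return [
        {k: v for k, v in record.items() if k in approved}
        for record in records
    ]

raw_logs = [
    {"timestamp": "2024-05-01T12:00:00Z", "model": "gpt-x",
     "latency_ms": 210, "user_prompt": "my SSN is ..."},  # sensitive field
]
filtered = fetch_approved(raw_logs)
print(filtered)
# The sensitive user_prompt field never leaves the customer's storage.
```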
Distributional offers a robust set of standardized APIs designed for secure, reliable, and scalable integration. All data exchanged with the platform is encrypted using HTTPS, and the APIs are optimized for high-throughput, low-latency operations suitable for production environments.
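A client-side sketch of the encryption-in-transit requirement, assuming a client that simply refuses any base URL that is not HTTPS; the endpoint and class names are invented for illustration.

```python
from urllib.parse import urlparse

class SecureClient:
    """Builds API request URLs, rejecting any non-HTTPS base URL."""

    def __init__(self, base_url):
        if urlparse(base_url).scheme != "https":
            raise ValueError("plaintext HTTP is not allowed")
        self.base_url = base_url.rstrip("/")

    def url_for(self, path):
        return f"{self.base_url}/{path.lstrip('/')}"


# Usage: a hypothetical internal deployment of the platform's API.
client = SecureClient("https://dbnl.example.internal/api/v1")
print(client.url_for("runs"))
# SecureClient("http://insecure.example") would raise ValueError
```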
Distributional’s platform takes a distinctive approach to testing: rather than operating on raw text inputs and outputs alone, it extracts measurable metrics derived from the text. These metrics cannot be used to reconstruct the original text, so any sensitive information in the raw data remains protected. Customers can choose to rely solely on these derived metrics for testing rather than uploading text data, letting teams gain actionable insights without risking exposure of sensitive user information.
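To make the idea concrete, here is a toy sketch of derived metrics, assuming simple lossy summaries such as counts and ratios; Distributional’s actual metric extraction is more advanced, but the privacy property is the same: the numbers below cannot be inverted back into the original text.

```python
import string

def derive_metrics(text):
    """Summarize text as lossy numeric metrics; the text is unrecoverable."""
    words = text.split()
    return {
        "char_count": len(text),
        "word_count": len(words),
        "avg_word_len": (sum(len(w) for w in words) / len(words)) if words else 0.0,
        "punct_ratio": sum(c in string.punctuation for c in text) / max(len(text), 1),
    }


metrics = derive_metrics("The refund was processed for account #4821.")
print(metrics)
# Only these numbers need to leave the raw text's storage; the account
# number itself cannot be reconstructed from them.
```

Shifts in such metric distributions across production runs can then be tested without the raw text ever being uploaded.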
Distributional’s platform is designed to give customers complete control and optionality over their data and integrations. The platform can adapt to specific security, privacy, and regulatory needs, while still providing the quality and insights necessary to test AI applications.
Learn more about Distributional’s security features by downloading the full tech paper.