
Distributional automates analysis of enriched production AI logs to surface interesting signals of AI product behavior: the interrelation of inputs, prompts, context, tools, model, and response. The product is composed of three components: the pipeline, the workflow, and the platform. The pipeline powers the workflow by automating data analysis at scale. The workflow empowers you to leverage those insights to continuously improve your AI products. And the platform enables you to deploy, administer, scale, and integrate our product seamlessly with your stack.
We built our platform with the widest possible variety of potential users in mind, with the goal of making it easy to adopt in your environment and with your stack. In this post, we'll walk through a few of the design choices that led to a robust, flexible, and scalable platform.

Our design partners have some of the most sensitive policies around data privacy and security, so we designed our platform to be available in your environment of choice (VPC, on premises, or wherever you store production AI logs) via Terraform, Helm, or directly on your Kubernetes cluster. No data ever leaves your environment.
Our product is available for free on the GitHub Container Registry, governed by this license. Analytics is a critical component of the production AI stack, so we make sure you can always access your version of our product for free. This de-risks lock-in, puts you in control of your stack, and makes it easier for your team to get started.
When you deploy our full service, our footprint has a minimal set of dependencies: Redis, Postgres, and Kubernetes, plus an object store and a load balancer. Networking requirements are minimal as well, with access needed to only a few URLs to stand up the service: an internal artifact registry, an object store, and OIDC.
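As an illustration of how small this footprint is, a deployment preflight can reduce to checking that an endpoint is configured for each dependency. The sketch below is purely hypothetical: the config keys and the `missing_dependencies` helper are illustrative, not part of the product.

```python
# Hypothetical preflight check: confirm every required dependency has a
# configured endpoint before standing up the service.
REQUIRED_DEPENDENCIES = [
    "redis",          # caching / job queues
    "postgres",       # relational storage
    "object_store",   # e.g. an S3-compatible bucket for artifacts
    "load_balancer",  # ingress traffic management
    "oidc",           # identity provider for authentication
]

def missing_dependencies(config: dict) -> list[str]:
    """Return the required dependencies with no endpoint configured."""
    return [dep for dep in REQUIRED_DEPENDENCIES if not config.get(dep)]

config = {
    "redis": "redis://redis.internal:6379",
    "postgres": "postgresql://db.internal:5432/dbnl",
    "object_store": "https://minio.internal:9000",
    "load_balancer": "https://dbnl.internal",
}
print(missing_dependencies(config))  # ['oidc']
```

A check like this is cheap to run in CI before an upgrade, which is one reason a short, explicit dependency list matters.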
We expect all of our customers to have existing AI product infrastructure, and we designed our product to fit seamlessly within this context. To enable this, Distributional ships with a few connectors. Our data connections support three methods of data ingestion: OTEL trace ingestion, SDK log ingestion, and SQL integration ingestion. Our semantic convention standardizes how you pass these production AI logs to us, so the Distributional platform can readily analyze them and produce insights on your AI product. Our model connections enable you to use your preferred LLM for evaluation and analysis, or to integrate our recommended endpoint. Finally, our notification connections enable you to publish our daily insights and alerts to Slack, email, PagerDuty, or your channel of choice.
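To make the ingestion paths concrete, here is a hedged sketch of what a log record following a semantic convention might look like before it is shipped via an SDK or trace exporter. The field names are illustrative assumptions that mirror the signals the platform analyzes; they are not the actual Distributional convention.

```python
from datetime import datetime, timezone

def make_log_record(input_, prompt, context, tools, model, response):
    """Assemble a production AI log record.

    The field names below are illustrative only: they mirror the signals
    analyzed by the platform (inputs, prompts, context, tools, model,
    response) rather than any real semantic convention.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": input_,
        "prompt": prompt,
        "context": context,
        "tools": tools,
        "model": model,
        "response": response,
    }

record = make_log_record(
    input_="What is our refund policy?",
    prompt="Answer using only the provided context.",
    context=["refund-policy.md"],
    tools=["search_docs"],
    model="gpt-4o",
    response="Refunds are accepted within 30 days.",
)
```

Whichever ingestion method is used, the point of a shared convention is that all three paths produce records with the same shape, so downstream analysis does not care how a log arrived.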
Unlike monitoring tools, our product is designed for efficient and scalable daily batch analysis of all of your production AI logs. We enable this workflow with a pipeline whose architecture includes a scheduler for job management, a load balancer to manage traffic, and scalable endpoints for LLM-as-judge and other computationally intensive tasks.
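Conceptually, the daily batch workflow amounts to grouping the day's logs and fanning each batch out to the computationally heavy analyses. The grouping helper and placeholder judge below are an illustrative sketch, not the product's actual pipeline.

```python
from collections import defaultdict

def group_by_day(logs):
    """Bucket log records by calendar day for daily batch analysis."""
    batches = defaultdict(list)
    for log in logs:
        day = log["timestamp"][:10]  # "YYYY-MM-DD" prefix of an ISO timestamp
        batches[day].append(log)
    return dict(batches)

def judge(log):
    """Placeholder for an LLM-as-judge call to a scalable endpoint."""
    return {"response_nonempty": bool(log["response"])}

logs = [
    {"timestamp": "2024-06-01T09:00:00Z", "response": "ok"},
    {"timestamp": "2024-06-01T17:30:00Z", "response": ""},
    {"timestamp": "2024-06-02T08:15:00Z", "response": "done"},
]
daily_scores = {
    day: [judge(log) for log in batch]
    for day, batch in group_by_day(logs).items()
}
```

Batching by day is what lets a scheduler amortize expensive judge calls across the whole day's traffic instead of scoring every request inline.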
Distributional is designed with a complete set of enterprise namespaces, permissions, and authorization controls to ensure your team can control who has access to what data. Namespaces are fully partitioned, so an administrator can separate access to data and analytics by AI product team. Our organization administration also allows for naming specific user roles and managing their permissions. Authentication is handled via OIDC with personal access tokens and configuration, and authorization is enforced at the API layer for access to databases and storage.
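Partitioned namespaces plus named roles reduce, conceptually, to a check like the one below. The role names, permission table, and `authorized` helper are hypothetical, shown only to illustrate how per-namespace grants keep one team's data separate from another's.

```python
# Hypothetical role -> allowed-actions table; the real product manages
# roles via organization administration, not a hard-coded dict.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "manage_users"},
    "analyst": {"read"},
}

# Each user is granted a role per namespace, so access to one AI product
# team's data does not imply access to another's.
USER_GRANTS = {
    "ana": {"checkout-assistant": "admin"},
    "raj": {"checkout-assistant": "analyst", "support-bot": "analyst"},
}

def authorized(user: str, namespace: str, action: str) -> bool:
    """Return True if the user's role in the namespace permits the action."""
    role = USER_GRANTS.get(user, {}).get(namespace)
    return action in ROLE_PERMISSIONS.get(role, set())
```

For example, "ana" can manage users in "checkout-assistant" but has no access at all in "support-bot", because her grant exists only in the first namespace.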
This platform is designed to fit with your existing stack. It is deployed in your environment, and because no data ever leaves it, your data stays secure. It comes with a full set of enterprise features for administration, authentication, and data security. And it is built to scale with minimal engineering overhead.
Distributional’s full service is open and free to use. Try it today to experience these enterprise features yourself. We are also always happy to learn more about your use case and enterprise needs, so reach out to nick-dbnl@distributional.com with any questions.
