Distributional’s adaptive testing platform is designed for the scale and processing efficiency AI teams need to continuously define, understand, and improve AI application behavior. With continuous adaptive testing, teams can quantifiably detect and understand deviations from the desired behavior of their AI applications. This in turn helps enterprises bridge the AI Confidence Gap, so they can productize higher-value applications, confidently keep them in production, and achieve the gains that GenAI promises.
To help teams get started quickly today and scale tomorrow, we designed Distributional’s platform to integrate easily within a customer’s environment. In a previous blog, we introduced Distributional’s platform architecture. In this article, we’ll cover how Distributional integrates with a customer’s existing environment.
The Distributional platform is designed to sit within a customer’s existing data infrastructure rather than become a separate silo to manage and sync. It can be deployed in the customer’s existing cloud environment and integrate with their existing storage system, so it fits naturally within any AI platform. Because it is built on industry-standard components, customers can also take advantage of the managed cloud service version of each component for even simpler operations.
In addition to being agnostic about where the data lives, the platform is also agnostic about what the data looks like. Any existing logs, traces, or other evaluation metrics can be used as inputs. The more data provided, the more context the platform has to develop a comprehensive understanding of behavior for testing.
Customers can then use their preferred orchestration tool to define how often new data is sent to Distributional. For example, a customer could use Airflow to schedule a daily job that feeds a fixed set of inputs into their AI app, collects the outputs as a dataset, and uploads that dataset to Distributional as a run to be tested for change, confirming their models haven’t drifted.
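As a minimal sketch, the daily ingest job described above might look like the following script, which an orchestrator such as Airflow could schedule. The `call_ai_app` and `upload_run` functions here are hypothetical placeholders, not the actual Distributional SDK; real client calls will differ.

```python
# Sketch of a daily ingest job that an orchestrator (e.g., Airflow) could
# schedule. `call_ai_app` and `upload_run` are hypothetical placeholders
# standing in for the AI application and the Distributional SDK.

FIXED_INPUTS = [
    "What is our refund policy?",
    "Summarize the latest support ticket",
]

def call_ai_app(prompt: str) -> dict:
    # Placeholder for invoking the AI application under test.
    return {"input": prompt, "output": f"response to: {prompt}"}

def build_run(inputs: list[str]) -> list[dict]:
    # Replay the same fixed inputs each day, so that any drift detected
    # downstream reflects behavior changes, not input changes.
    return [call_ai_app(prompt) for prompt in inputs]

def upload_run(rows: list[dict]) -> int:
    # Placeholder for uploading the dataset to Distributional as a run.
    print(f"uploading run with {len(rows)} rows")
    return len(rows)

if __name__ == "__main__":
    upload_run(build_run(FIXED_INPUTS))
```

Keeping the input set fixed is the key design choice: it turns each scheduled run into a controlled comparison against prior runs.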
For authentication, the Distributional platform uses OpenID Connect (OIDC), ensuring seamless integration with a customer’s preferred identity provider. In addition to authentication, the platform also has a native permission model using role-based access controls for fine-grained authorization.
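In practice, OIDC integration typically reduces to pointing the platform at the identity provider’s issuer and client credentials. The fragment below illustrates that general shape; the key names are hypothetical and do not reflect Distributional’s actual configuration schema.

```yaml
# Hypothetical OIDC configuration shape; key names are illustrative only,
# not Distributional's actual schema.
auth:
  oidc:
    issuer_url: https://login.example.com/realms/acme   # IdP discovery endpoint
    client_id: distributional-platform
    client_secret_env: OIDC_CLIENT_SECRET               # read from env, never checked in
    scopes: [openid, profile, email]
```

Because OIDC is a standard, the same handful of values works whether the identity provider is Okta, Entra ID, Keycloak, or another compliant IdP.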
Overall, the SDK is designed for broad extensibility. It’s built in Python, making it easy for customers to integrate with other preferred tools or existing processes. Distributional is also continuing to expand the platform’s native integrations to streamline this further. Customers can integrate Distributional with their AI platforms while retaining portability and adaptability as those platforms continue to mature.
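To illustrate the kind of integration a Python SDK makes possible, the sketch below reshapes existing JSON logs from any tool into rows for a run upload. `DistributionalClient` and its `create_run` method are hypothetical stand-ins; the real SDK’s class and method names will differ.

```python
# Sketch of adapting existing evaluation logs into a run upload.
# `DistributionalClient` is a hypothetical stand-in for the Python SDK.
import json

class DistributionalClient:
    """Hypothetical client illustrating how an SDK call might be wrapped."""

    def create_run(self, project: str, rows: list[dict]) -> dict:
        # In practice this would send data to the platform; here we echo
        # a summary to keep the sketch self-contained.
        return {"project": project, "row_count": len(rows)}

def rows_from_logs(log_lines: list[str]) -> list[dict]:
    # Existing JSON-lines logs from any tool can be reshaped into run rows.
    return [json.loads(line) for line in log_lines]

logs = ['{"input": "hi", "output": "hello", "latency_ms": 120}']
client = DistributionalClient()
run = client.create_run("support-bot", rows_from_logs(logs))
```

The point of the pattern is that the adapter layer (`rows_from_logs`) is plain Python, so any log or trace format a team already produces can be mapped into inputs for testing.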
Ultimately, this results in a platform that is easy to deploy and manage, integrates with an existing environment and preferred tooling, and is built for secure, enterprise-scale usage.
If you’re interested in learning more about Distributional’s platform and architecture, check out the full tech paper. If you’re interested in trying out Distributional’s adaptive testing platform, reach out to the team and we’d be happy to get you set up.