The relentless stream of new language models hitting the market presents both opportunities and challenges for enterprises leveraging AI. Technical teams face integration hurdles while compliance officers worry about the governance implications of model changes. At DiligentIQ, where private equity clients use our AI due diligence platform for critical analysis of acquisition targets, we've structured our architecture to shield clients from these disruptions while still delivering the benefits of emerging AI capabilities.
Model-Agnostic Architecture
From day one, we designed our platform to be model-agnostic, as it was unclear who the winners would be or even what winning would look like in the future. Locking into a single model, or even a single provider, would have introduced unnecessary risk, slowed our ability to experiment with new features, and made future integration more painful.
To avoid that, we built a robust abstraction layer (think of it as a universal translator) that sits between our application and any model provider. This allows our engineers to work with LLMs through a consistent interface, without worrying about provider-specific quirks or implementation details. It also lets us take advantage of provider-specific features (e.g., reasoning modes, temperature settings or specialized formatting).
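In practice, that layer can be as simple as a shared interface that every provider adapter implements. The sketch below is a simplified illustration rather than our production code, and the class and method names are invented for the example, but it shows the idea: application code talks to one interface, and each adapter handles its own provider's quirks.

```python
# Simplified sketch of a provider-agnostic abstraction layer.
# Class and method names are illustrative, not actual platform code.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class CompletionRequest:
    """Provider-neutral request that application code works with."""
    system_message: str
    user_message: str
    # Optional provider-specific features, passed through when supported.
    options: dict = field(default_factory=dict)  # e.g. {"reasoning": True, "temperature": 0.2}


class ModelProvider(ABC):
    """Consistent interface every provider adapter implements."""

    @abstractmethod
    def complete(self, request: CompletionRequest) -> str:
        ...


class OpenAIProvider(ModelProvider):
    def complete(self, request: CompletionRequest) -> str:
        # Translate the neutral request into this provider's API shape here.
        raise NotImplementedError


class AnthropicProvider(ModelProvider):
    def complete(self, request: CompletionRequest) -> str:
        # Each adapter absorbs its own quirks (message roles, option names, etc.).
        raise NotImplementedError


def run_query(provider: ModelProvider, question: str) -> str:
    """Application code depends only on the ModelProvider interface."""
    request = CompletionRequest(
        system_message="You are a due diligence analyst.",
        user_message=question,
        options={"temperature": 0.2},
    )
    return provider.complete(request)
```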
Flexible Configuration
We manage general instructions for the models (“system messages”) outside our application code. This allows our team to experiment with prompting strategies without deploying new code. These messages are modular by design, with components that can be overridden at multiple levels (user, firm, specific model, or model provider). This gives us granular control to tailor behavior based on each model’s strengths or a specific client’s preferences.
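Conceptually, the override logic looks something like the following. This is a simplified sketch: the precedence order (user over firm over model over provider) and the component names are assumptions made for illustration, not our actual schema, but it captures how narrower scopes override broader defaults without a code deploy.

```python
# Hypothetical layered system-message resolution.
# Precedence order and component names are assumptions for illustration.

DEFAULT_COMPONENTS = {
    "tone": "Answer concisely and cite the source document.",
    "format": "Return findings as a bulleted list.",
}


def resolve_system_message(overrides_by_level: dict[str, dict[str, str]]) -> str:
    """Merge modular components, letting narrower scopes override broader ones."""
    merged = dict(DEFAULT_COMPONENTS)
    for level in ("provider", "model", "firm", "user"):  # broad -> narrow
        merged.update(overrides_by_level.get(level, {}))
    return "\n".join(merged.values())


# Example: a firm-level override changes the output format, and a model-level
# override adjusts the tone, all without touching application code.
message = resolve_system_message({
    "model": {"tone": "Think step by step before answering."},
    "firm": {"format": "Return findings as a numbered memo."},
})
```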
Standardized Evaluation Framework
New model releases kick off a structured evaluation process that blends automation with expert review. Our internal framework scores model performance across key due diligence dimensions using both automated tests and subjective human feedback. While many industry benchmarks are helpful, they rarely focus on real-world business scenarios involving complex documents. Our hybrid approach helps us quickly assess not only whether a model should be adopted, but where it fits best within our platform.
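A stripped-down version of the scoring step might look like this. The dimensions, weights and numbers here are invented for illustration; the point is simply that automated scores and expert-review scores are blended per dimension, so a candidate model can be compared against the current default dimension by dimension rather than on a single headline number.

```python
# Illustrative evaluation harness; dimensions and weights are made up.
from statistics import mean

DIMENSIONS = ("extraction_accuracy", "reasoning_quality", "citation_fidelity", "speed")


def score_model(automated: dict[str, float], human: dict[str, float],
                human_weight: float = 0.5) -> dict[str, float]:
    """Blend automated test scores with expert review scores per dimension."""
    blended = {
        dim: (1 - human_weight) * automated.get(dim, 0.0) + human_weight * human.get(dim, 0.0)
        for dim in DIMENSIONS
    }
    blended["overall"] = mean(blended[d] for d in DIMENSIONS)
    return blended


# Comparing a new release against the incumbent, dimension by dimension,
# tells us not just whether to adopt it but where it fits best.
candidate = score_model(
    automated={"extraction_accuracy": 0.91, "reasoning_quality": 0.88,
               "citation_fidelity": 0.93, "speed": 0.70},
    human={"extraction_accuracy": 0.85, "reasoning_quality": 0.92,
           "citation_fidelity": 0.90, "speed": 0.75},
)
```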
Granular Model Availability
Because no single model excels at everything, our platform allows each feature to leverage the model best suited for the job. Every feature offers a curated set of models, along with a recommended default, to balance speed and accuracy based on the task.
For example, our Bulk Query feature (which allows users to quickly ask the same question of many documents in parallel) often favors models that are optimized for speed, while document analysis tasks benefit from models with strong reasoning capabilities.
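The mapping itself can be thought of as a small catalog keyed by feature. The snippet below is illustrative only, with placeholder model identifiers rather than the models we actually run, but it shows how each feature can carry a curated allowed list plus a recommended default.

```python
# Hypothetical per-feature model catalog; model identifiers are placeholders.
FEATURE_MODELS = {
    "bulk_query": {          # parallel Q&A across many documents: favor speed
        "default": "fast-model-v2",
        "allowed": ["fast-model-v2", "balanced-model-v1"],
    },
    "document_analysis": {   # deep single-document work: favor reasoning
        "default": "reasoning-model-v3",
        "allowed": ["reasoning-model-v3", "balanced-model-v1"],
    },
}


def select_model(feature: str, requested: str | None = None) -> str:
    """Use the recommended default unless the user picked an allowed model."""
    entry = FEATURE_MODELS[feature]
    if requested and requested in entry["allowed"]:
        return requested
    return entry["default"]
```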
With our deep understanding of each model's strengths, we're working toward automatic model selection, while still allowing advanced users to make their own selections as desired.
Client-Specific Model Deployment
DiligentIQ’s single-tenant architecture makes it possible for us to control model availability on a per-client basis. This has proven helpful as firms establish AI teams to govern the broader AI ecosystem across their enterprise. This configuration allows us to enable or disable specific models for individual clients at any time, in seconds, without code changes.
This flexibility is critical when new models show promise for specific use cases but aren't fully battle-tested. We can gradually roll out access, starting with clients whose workflows would benefit most from specific improvements that newer models offer.
While we only deploy models that meet our internal trust and security standards, we also respect each client’s right to set their own boundaries. Clients can whitelist or block specific models or providers based on their own risk assessments.
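Put together, a client's model policy can be expressed as a small piece of per-tenant configuration, as in the simplified sketch below (field and model names are placeholders, not our actual schema). Because it is configuration rather than code, changing it takes effect in seconds, with no deploy.

```python
# Illustrative per-client model availability policy; names are placeholders.
CLIENT_MODEL_POLICY = {
    "enabled": ["reasoning-model-v3", "fast-model-v2"],  # models rolled out to this client
    "blocked_providers": ["example-provider"],           # the client's own risk boundary
}


def is_model_available(model: str, provider: str, policy: dict) -> bool:
    """Check both our rollout list and the client's own allow/block rules."""
    if provider in policy.get("blocked_providers", []):
        return False
    return model in policy.get("enabled", [])


# Flipping a model on or off is a config change inside the client's tenant,
# not a code change, which is what makes gradual rollouts practical.
```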
In Summary
The LLM ecosystem is evolving at breakneck speed, but with the right architecture, that pace becomes a competitive advantage, not a liability. By staying model-agnostic, abstracting model interactions, and decoupling configuration from code, we've built a platform that adapts quickly without disruption. Our evaluation framework ensures that the models we deploy are commercially ready for complex private markets work.
As a result, our clients get the latest in AI performance and capability, on their terms, and at their pace.