📝 Trustwise Changelog
Below is a summary of major changes, improvements, and fixes. Each version section is referenceable via anchor links.
Version 3.3.0 (2025-04-08)
Data Updates
- Together AI Embedding Model Support - Added support for Together AI embedding models, expanding the list of supported embedding providers (a registration sketch follows below).
 
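As a rough illustration, the sketch below shows what registering a Together AI embedding model over a REST API could look like. The base URL, endpoint path, payload fields, and the example model name are assumptions for illustration, not the documented Trustwise interface.

```python
import requests

# Hypothetical sketch: registering a Together AI embedding model.
# The base URL, endpoint path, and payload fields are illustrative assumptions,
# not the documented Trustwise API.
BASE_URL = "https://api.example-trustwise-host.com"  # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

payload = {
    "provider": "together_ai",                                   # assumed provider identifier
    "model_name": "togethercomputer/m2-bert-80M-8k-retrieval",   # example Together AI embedding model
    "api_key": "YOUR_TOGETHER_AI_KEY",
}

resp = requests.post(
    f"{BASE_URL}/v1/embedding-models",  # assumed registration endpoint
    json=payload,
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())
```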
Features
- Custom Reranker Model Registration - Users can now register most reranker models available from supported providers (Cohere, Together AI, Azure).
 
- Reranker Model Support in Scans - Users can now apply registered reranker models in Optimize:ai scans to test and compare reranker performance for their specific use cases (a registration-and-scan sketch follows after this list).
 
- Tokenizer Upload - Users can now upload custom tokenizer files. This unlocks compatibility with LLMs that previously lacked open tokenizer access.
 
- Cost and Carbon Control Maps - Introduced new Cost and Carbon maps that let users configure scan cost and carbon estimates based on their own cost policies.
 
- Airgapped Support - Quickstart is now compatible with airgapped environments, allowing for fully offline deployments.
 
- Custom LLM Gateway Support (beta) - Added support for registering and routing through custom LLM gateways, enabling advanced enterprise configurations.
 
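For a sense of how the reranker features might fit together, here is a minimal sketch of registering a Cohere reranker and then referencing it in an Optimize:ai scan. The endpoint paths, field names, and the example model identifier are assumptions, not the documented API; the same flow would apply to Together AI or Azure rerankers by swapping the provider and model fields.

```python
import requests

# Hypothetical sketch: register a Cohere reranker, then reference it in a scan.
# Endpoint paths, payload fields, and identifiers are illustrative assumptions,
# not the documented Trustwise API.
BASE_URL = "https://api.example-trustwise-host.com"  # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. Register the reranker model (assumed endpoint and fields).
reranker = requests.post(
    f"{BASE_URL}/v1/reranker-models",
    json={
        "provider": "cohere",                 # assumed provider identifier
        "model_name": "rerank-english-v3.0",  # example Cohere reranker
        "api_key": "YOUR_COHERE_KEY",
    },
    headers=HEADERS,
)
reranker.raise_for_status()
reranker_id = reranker.json().get("id")

# 2. Reference the registered reranker when launching a scan (assumed fields).
scan = requests.post(
    f"{BASE_URL}/v1/scans",
    json={
        "name": "reranker-comparison",
        "reranker_model_id": reranker_id,
    },
    headers=HEADERS,
)
scan.raise_for_status()
print(scan.json())
```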
API Updates
- Cost Endpoint Release - Released an API endpoint that estimates usage costs based on the user's LLM and/or reranker configuration (illustrated in the sketch below).
 
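Here is a hedged sketch of what a call to the cost endpoint might look like; the path, request fields, and response shape are assumptions made for illustration.

```python
import requests

# Hypothetical sketch of calling the cost-estimation endpoint. The path,
# request fields, and response shape are assumptions for illustration only.
BASE_URL = "https://api.example-trustwise-host.com"  # placeholder host

resp = requests.post(
    f"{BASE_URL}/v1/metrics/cost",             # assumed endpoint path
    json={
        "model_provider": "openai",            # example LLM configuration
        "model_name": "gpt-4o",
        "total_prompt_tokens": 1200,           # assumed token accounting fields
        "total_completion_tokens": 300,
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
resp.raise_for_status()
print(resp.json())  # e.g. an estimated cost figure for the configured model
```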
Bug Fixes
- Various stability improvements and minor UI fixes across model registration and scan interfaces.
Version 3.2.0 (2025-01-25)
Data Updates
- Expanded Embedding Model Support - Users can now register their own embedding models from supported providers, including OpenAI, Hugging Face, and Azure. Previously, users were limited to a predefined list of embedding models.
- Azure LLM & Embedding Model Support - Added support for Azure LLMs and embedding models in addition to previously available providers.
Features
- Custom Embedding Model Registration - Users can now register most embedding models available from our supported providers (OpenAI, Hugging Face, Azure). This removes previous limitations on embedding model selection, allowing for greater flexibility and customization.
 
- Asynchronous Document Processing - Document uploads are now processed asynchronously, allowing users to leave the page without interrupting processing. A new status button provides real-time updates on the progress of uploaded documents, including success and error states (a polling sketch follows after this list).
 
- Support for Large Scans - The previous limit of 3 queries at a time has been removed, so users can now run large scans without query restrictions.
 
- Expanded Storage Provider Support - Optimize:ai now supports additional storage providers, including MinIO, offering greater flexibility for storage solutions.
 
- OAuth2 Support - Added OAuth2 support for standardized authentication and seamless integration.
 
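To illustrate the asynchronous processing flow, the sketch below polls a hypothetical document-status endpoint until processing reaches a terminal state. The endpoint path, field names, and status values are assumptions, not the documented interface.

```python
import time
import requests

# Hypothetical sketch: poll the status of an asynchronously processed document
# upload. The endpoint, fields, and status values are illustrative assumptions.
BASE_URL = "https://api.example-trustwise-host.com"  # placeholder host
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def wait_for_document(document_id: str, poll_seconds: int = 5) -> dict:
    """Poll an assumed status endpoint until processing succeeds or errors."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/v1/documents/{document_id}/status",  # assumed endpoint
            headers=HEADERS,
        )
        resp.raise_for_status()
        status = resp.json()
        if status.get("state") in ("succeeded", "error"):  # assumed terminal states
            return status
        time.sleep(poll_seconds)

# Usage: result = wait_for_document("doc_123")  # hypothetical document id
```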
API Updates
- Carbon Endpoint for SCI Calculations - Released a Carbon endpoint for Software Carbon Intensity (SCI) calculations (a request sketch follows below).
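As a hedged illustration of the Carbon endpoint, the sketch below posts a hardware-and-region description and reads back an SCI estimate. The endpoint path and request fields are assumptions for illustration only.

```python
import requests

# Hypothetical sketch of calling the Carbon endpoint for an SCI (Software
# Carbon Intensity) calculation. The path and request fields are illustrative
# assumptions, not the documented Trustwise API.
BASE_URL = "https://api.example-trustwise-host.com"  # placeholder host

resp = requests.post(
    f"{BASE_URL}/v1/metrics/carbon",          # assumed endpoint path
    json={
        "provider_name": "aws",               # example deployment description
        "provider_region": "us-east-1",
        "instance_type": "a1.metal",
        "processor_name": "AMD A10-9700E",
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
resp.raise_for_status()
print(resp.json())  # e.g. an SCI estimate for the described configuration
```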