I am new-ish to DVC and still getting oriented. Please could I ask: are there any specific tools or processes within DVC Studio for monitoring concept and data drift, or any plans in that area?
If you are referring to statistically comparing old and new data distributions, Studio does not have tools for this.
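For that kind of distribution comparison you would currently need to run your own check outside Studio. As a hedged illustration (not a Studio or DVC feature), here is a minimal two-sample Kolmogorov-Smirnov drift check in plain Python; the feature samples and the 0.05 significance level are assumptions for the example:

```python
import bisect
import math

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    d = 0.0
    for x in a + b:
        fa = bisect.bisect_right(a, x) / na  # empirical CDF of sample_a at x
        fb = bisect.bisect_right(b, x) / nb  # empirical CDF of sample_b at x
        d = max(d, abs(fa - fb))
    return d

def drifted(sample_a, sample_b, alpha=0.05):
    """Reject 'same distribution' when D exceeds the standard critical value."""
    na, nb = len(sample_a), len(sample_b)
    c_alpha = math.sqrt(-0.5 * math.log(alpha / 2))
    d_crit = c_alpha * math.sqrt((na + nb) / (na * nb))
    return ks_statistic(sample_a, sample_b) > d_crit

# Hypothetical example: an old and a new sample of one feature column.
old = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6]
new = [0.1, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.6]
print(drifted(old, new))
```

A check like this could run as an ordinary pipeline stage, with its result stored as a metric that DVC versions alongside the data.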
If you would like to re-evaluate model performance on a new dataset, so that you can re-train if needed, you can submit new experiments by selecting new dataset versions from the Studio UI. You can also set up your CI actions to re-run model evaluation stages, and generate metrics and plots as needed for comparing model performance.
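To sketch what such an evaluation stage might look like: a script run by CI can write a `metrics.json` file, which DVC tracks and Studio can then compare across experiments. The labels below are hypothetical stand-ins for your model's predictions on the new dataset version:

```python
import json

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical ground truth and predictions on the new dataset version;
# in practice these would come from loading your model and data.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

metrics = {"accuracy": accuracy(y_true, y_pred)}

# Write metrics where a DVC stage can pick them up for comparison.
with open("metrics.json", "w") as f:
    json.dump(metrics, f, indent=2)
```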
Currently, Studio does not track model deployment, which means it cannot track and evaluate real-time predictions. We are working on adding model management/deployment features to Studio.
If you have suggestions around what would be useful for Studio to support, could you please create a ticket in the DVC Studio support repository?