Hey DVC crew!
I have been rocking DVC for my machine learning experiments, but I’m stuck on the best way to track metrics across a bunch of different models. Here’s the situation: I have several models trained with various algorithms and hyperparameters, and I want to see how their performance (accuracy, precision, recall, etc.) stacks up over time.
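For context, each training run currently dumps a small per-model metrics file (all file and metric names below are just examples from my setup):

```yaml
# metrics/random_forest.yaml -- written by my train script after each run
accuracy: 0.93
precision: 0.91
recall: 0.89
```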
Wondering what the DVC experts recommend here. Should I create separate metrics files for each model version, versioned with DVC of course, or is there a more streamlined way to manage this within DVC itself, like aggregating and comparing the metrics directly?
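To make the question concrete, here's a rough sketch of the "separate metrics file per model" option in `dvc.yaml` (stage names, paths, and the `train.py` CLI are made up, just to illustrate the layout I'm considering):

```yaml
stages:
  train_random_forest:
    cmd: python train.py --model random_forest --out metrics/random_forest.yaml
    deps:
      - train.py
      - data/train.csv
    metrics:
      # cache: false keeps the metrics file in git so dvc can diff it across commits
      - metrics/random_forest.yaml:
          cache: false
  train_xgboost:
    cmd: python train.py --model xgboost --out metrics/xgboost.yaml
    deps:
      - train.py
      - data/train.csv
    metrics:
      - metrics/xgboost.yaml:
          cache: false
```

With that layout, `dvc metrics show` lists everything and `dvc metrics diff HEAD~1` compares against an older commit, but it feels like it could get clunky as the number of models grows, hence the question.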
I also checked this thread: https://discuss.dvc.org/t/using-dvc-for-non-machine-learngenaing-models/519 but I didn't find a solution there.
Any tips or tricks you can share would be awesome!
Thanks in advance!