Use a model inside PerceptiLabs (Batch inference)
Robert Lundberg
Hey all!
With the recent release of FastAPI and Gradio support, do you still feel this is needed, or is it more of a "nice to have"? :)
Julian Moore
Julian Moore
Robert Lundberg: Well, since PL has some nice metric visualisations, it would be very nice to be able to do this: compare model performance on arbitrary datasets for performance characterisation without having to re-code everything externally. That said, it still counts as a "nice to have".
Robert Lundberg
Julian Moore: That makes sense, thanks!
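For context, the external "re-coding" Julian describes usually looks something like the sketch below: load a trained model, run it over an arbitrary dataset in a batch, and compute a metric by hand. This is only an illustration; `load_model` and the trivial threshold model stand in for an exported PerceptiLabs model, which is not part of this thread.

```python
# Minimal sketch of batch inference outside the tool, assuming an
# exported model can be loaded as a plain predict function.
from typing import Callable, List


def load_model() -> Callable[[float], int]:
    # Hypothetical stand-in for loading an exported model from disk;
    # here, a trivial threshold classifier.
    return lambda x: int(x >= 0.5)


def batch_infer(model: Callable[[float], int], inputs: List[float]) -> List[int]:
    # Run the model over every sample in the batch.
    return [model(x) for x in inputs]


def accuracy(preds: List[int], labels: List[int]) -> float:
    # Fraction of predictions that match the ground-truth labels.
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)


if __name__ == "__main__":
    model = load_model()
    xs = [0.1, 0.7, 0.4, 0.9]
    ys = [0, 1, 1, 1]
    preds = batch_infer(model, xs)
    print(accuracy(preds, ys))  # 0.75
```

Having this loop (and the metric visualisations) built into PL is exactly what would remove the need to maintain scripts like this per dataset.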