Machine learning doesn’t end when a model is built. Once a model has been trained and tested under lab conditions, it is sent out into the world. But what happens next? How do you monitor whether it keeps doing the job it was designed for? Arthur wants to help, and the company has revealed its plans for a new platform to monitor, analyze, and explain machine learning models in production.

The company also shared details of its $3.3 million seed round, which closed in August.

Adam Wenchel, CEO and co-founder of Arthur, likens the product to performance-monitoring platforms such as DataDog or New Relic, with one key difference: instead of monitoring systems, Arthur tracks the performance of machine learning models.

He also said, “We are an AI monitoring and explainability company, which means when you put your models in production, we let you monitor them to know that they’re not going off the rails, that you can explain what they’re doing, that they’re not performing badly and are not being totally biased — all of the ways models can go wrong.”

Once machine learning models leave the controlled environment of the lab, there is a good chance they won’t behave the way they are expected to, and that degradation can be hard to track. Wenchel added, “Models always perform well in the lab, but then you put them out in the real world, and there is often a drop-off in performance—in fact, almost always. So being able to measure and monitor that is a capability people really need.”
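To make the idea concrete, here is a minimal sketch of the kind of drift check a monitoring platform automates: comparing a feature’s production distribution against its training baseline with the Population Stability Index and flagging when the shift crosses a threshold. This is an illustrative example only, not Arthur’s product or API; the simulated data, bin count, and 0.2 threshold are assumptions.

```python
# Illustrative sketch (not Arthur's API): detect distribution drift between
# training-time data and live production data using the Population Stability
# Index (PSI). Data, bin count, and threshold below are assumed for the demo.
import numpy as np

def population_stability_index(baseline, production, bins=10):
    """Measure how much a feature's distribution has shifted since training."""
    # Bin edges are derived from the training (baseline) data.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Clip to avoid division by zero / log of zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Simulated example: production scores whose mean has drifted from training.
rng = np.random.default_rng(0)
train_scores = rng.normal(loc=0.0, scale=1.0, size=10_000)
live_scores = rng.normal(loc=0.4, scale=1.0, size=10_000)

psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # commonly cited rule-of-thumb threshold for significant drift
    print("Significant drift detected: investigate the model's inputs.")
```

A production monitoring platform runs checks like this continuously across every input feature and prediction stream, rather than as a one-off script.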

More in the space

AWS has announced SageMaker Model Monitor, part of SageMaker Studio, as its model-monitoring tool.

IBM has also introduced a model-monitoring tool, Watson OpenScale, for models developed on Watson.