AI Regulation and the Proliferation of Third-party AI Tools
News on the EU AI Regulation, findings from the MIT Sloan Management Review, and the latest updates from nannyML.
This is Santiago from nannyML! It’s been a while since we last emailed you. We’ve brought back the Post-Deployment Data Science newsletter, so I wanted to let you know what you can expect from me:
Every two weeks, I will research the latest news and updates about post-deployment data science. Sometimes I’ll write blog posts about it, but every time, I’ll cover the 1-2 main topics in this newsletter.
So let’s get to it!
News in the post-deployment data science world
Just like real-world data, the post-deployment data science world never stops changing. This month, two big pieces of content caught our eye 👀.
The EU AI Act
In June, the European Parliament adopted the AI Act with 499 votes in favor, 28 against, and 93 abstentions, which means the European AI Regulation is likely to become a reality.
The Act will impose new rules on how we do data science, especially post-deployment data science. In a nutshell, all high-risk AI applications will be required to implement a monitoring plan and a monitoring system. Companies operating high-risk AI systems will be obligated to document, in the technical documentation, any changes to the AI system and its performance throughout the life cycle of the AI application.
We wrote a whole blog post summarising the most important parts of the EU AI Act. Check it out to learn how this might affect the day-to-day job of a data scientist.
MIT Sloan Management Review - Risks and Failures of AI Systems
This week, the MIT Sloan Management Review published its findings on “Building Robust RAI Programs as Third-Party AI Tools Proliferate.”

In it, they discuss how risks and failures of AI systems have become more common than ever, mainly because of the growing popularity of third-party AI tools. For the research, they surveyed 1,240 C-suite executives and found that:
“More than half (53%) of the organizations rely exclusively on third-party AI tools and have no internally designed or developed AI technologies of their own.”
“More than half (55%) of all AI-related failures stem from third-party AI tools, leaving organizations that use them vulnerable to unmitigated risks.”
Check out the full research for more details.
NannyML Latest News
In the last 90 days, more than 20k people read our blog.
We went viral on Hacker News and in the r/MachineLearning subreddit. 🔥
We launched two new features: 🚀
Data quality checks: We added the first two data quality metrics to track over time: the number of missing values and the number of unseen values (see the sketch after this list).
Summary statistics calculators: With these calculators, you can track the evolution of summary statistics over time.
We are working on new features for monitoring time-series models and plan to announce them soon. 👀
And last but not least, we are building the new standard for monitoring ML models: everything you’ll need to do post-deployment data science, in the cloud and integrated with your systems.
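If you want to try the new data quality checks, here is a minimal sketch of what tracking missing values over time could look like with nannyML. The calculator name, its arguments, and the example dataset are assumptions based on the feature description above, not a definitive reference; please check the nannyML documentation for the exact API.

```python
# Minimal sketch: tracking missing values over time with nannyML.
# NOTE: the calculator name, parameters, and example dataset below are
# assumptions based on the feature announcement; consult the nannyML docs.
import nannyml as nml

# Load a synthetic example dataset split into a reference and an analysis period.
reference, analysis, _ = nml.load_synthetic_car_loan_dataset()

# Assumed data quality calculator that counts missing values per feature.
calc = nml.MissingValuesCalculator(
    column_names=['car_value', 'salary_range', 'loan_length'],
    chunk_size=5000,  # evaluate the metric in chunks of 5,000 rows
)

calc.fit(reference)                 # establish a baseline on the reference data
results = calc.calculate(analysis)  # track missing values on production data

print(results.to_df().head())       # inspect the per-chunk metric values
results.plot().show()               # plot the evolution over time
```

The same fit/calculate pattern should apply to the unseen values check and the summary statistics calculators, only with a different calculator class.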
Until the next one 👋