The regulatory requirements likely to be imposed on AI in the near future are now widely discussed by governmental and international bodies. But what do they mean for technology firms? Will regulation bring positive changes or stifle further development in this area? For now, it is difficult to say.
On the one hand, regulation can stimulate innovation by giving firms additional motivation to create more robust, resilient, and competitive solutions. AI-related documents issued by the US and the EU draw attention to the risks associated with adopting the technology, which will naturally prompt new research into ways of addressing those risks. Standardization can also push firms to invest more effort in ensuring both data and process quality.
On the other hand, increased regulation of AI is likely to complicate business-as-usual processes at AI-focused firms, which may be required to store all the data used to create and fine-tune their underlying models for substantial periods of time. Combined with data-sensitivity concerns and antitrust policies, this will require vendors to allocate more resources to storing, processing, and, in some cases, encrypting that data.
Both points, though, converge on the need for independent verification and validation of the data and training procedures behind an AI solution.