
Rafael Pellon, MEF LatAm advisor and partner at Pellon de Lima Advogados, shares insights from a recent webinar where Artificial Intelligence experts discussed the changing regulatory landscape around AI and what governments and regulators need to do to keep ahead of, or even keep up with, the technology.

In these extraordinary times, in the midst of a pandemic, consumers have accelerated their digital lives, eagerly adopting digital tools as never before and allowing us to carry on even while cut off from public places. Artificial intelligence is surging and proving helpful, even if our view of all its impacts is still blurred.

That was the backdrop for a webinar on the regulation of AI, following up on the discussions held in Barcelona at the beginning of the global pandemic and in meetings afterwards. If AI needs some sort of regulation or limits, what would they be? The speakers were:

  • Andrew Bud, CEO of iProov, MEF Global Chair – Moderator
  • Serafino Abate, MEF Regulatory advisor
  • Alexandre Del Rey, CEO and founder, International Association of Artificial Intelligence
  • And myself, Rafael Pellon, Partner, Pellon de Lima Advogados

According to Serafino Abate, given the lack of regulation, AI is mainly governed by self-regulation. He shared the example of the Oversight Board created by Facebook, and the actions of companies such as IBM, which decided not to offer, develop or research facial recognition technology for law enforcement after the backlash over the way it has been used, particularly in the USA, thereby establishing a red line.

The conversation circled around the perceived difficulty of regulating something as fluid as AI, which can be used for a multitude of goals and is massively beneficial to our flawed species when used correctly. Most initiatives addressing this challenge try to map its uses in order to discover if and when something went wrong.

Alexandre Del Rey emphasized that regulation also needs to determine the risk levels in the information provided through AI: which levels of risk are acceptable, and which rules are acceptable for public-facing applications such as insurance, health, security and education.

Our speakers shared cases, mostly on problems with facial recognition and its biases, the lack of minimal guidelines for experimenting with the power of AI, and the fact that, at the end of the day, we are all in the petri dish for the testing of new applications, regardless of whether we live in developed or developing regions of the globe.

Given this drive to deploy AI applications, what should its red lines be, the hard limits that we, as a society, should impose to avoid the genuinely dystopian feelings that somehow creep up the backs of our necks? All speakers agreed that the military, health, finance and public security sectors should all have limited uses of AI, given the potential to critically harm our social fabric. As Andrew Bud put it, we are reaching a time when regulation will need to involve the full industry in a timely way.

Does this mean that regulation should approach those industries through vertical guidelines alone? Not entirely. Mr. Bud raised an important question about accountability in AI and machine learning operations, based on the premise that the regulatory challenges of AI apply to any system that can make a complex decision, as those decisions will increasingly affect people's lives.

Therefore, considering all the different players involved in enabling system decisions throughout the machine learning process, one must ask how to assign accountability for those decisions, and how to regulate that accountability per se.

In this sense, Mr. Abate joined my opinion that it must be a shared responsibility among the different players, but he also emphasized that accountability must be separated into different components, given that there are different processes and different stages of machine learning. Hence, there should be end-to-end accountability, but also accountability for each stage of the AI process.

Speakers agreed that vertical regulation is indeed necessary. By vertical regulation we mean regulation for a specific sector or market, rather than a generic approach valid for all industries or services. These vertical laws should define basic guidelines such as the right to transparency in financial or credit decisions taken by an algorithm, with human review when needed, or the prohibition of automatic targeting in military applications, given how flawed and prone to error object recognition would be under stressful battle conditions. Killer robots are a loud no for now.

In health applications, the biggest issues are the critical data being amassed by private companies and how that data will be used later on, a paradox noted by scholars: the challenge of balancing the large amounts of data needed to provide good services against the lack of decay of digital data, which can prove harmful in the long term. Given the amount of health data collected from every one of us by our daily devices, this is concerning at best, and the best regulation around it for now seems to be the EU's GDPR, which is far from being the global standard today. Professor Harari sums up the risks in the first two minutes of this recent interview.

Last but not least, public security and facial recognition are in the headlines, with the recent news that the behemoths of the tech industry will stop allowing their AI tools to be used by public security forces in the US, in the wake of the latest protests against racism and other biases. There are other, chilling ways of employing AI in public security, though, as this piece from the NYT uncovers.

By the end of the webinar, it was becoming clear that we might also need horizontal regulation for the protection of basic human rights, before it all becomes too dystopian. It is time.

Rafael Pellon

Policy & Initiatives, LatAm, MEF Board Member

