Rafael Pellon, MEF LatAm advisor and partner at Pellon de Lima Advogados, shares insights into the role of artificial intelligence in the evolution of mobile technology, based on discussions held in Barcelona earlier this year at the unMWC event, where the debate explored the legal and moral implications of new developments in AI.
The discussions held in an emptied Barcelona mirrored what is being discussed all over the world right now. We all know AI needs to be tackled and controlled somehow, before its uses descend into disturbing, abusive practices across all sorts of industries, with healthcare, finance, retail and military applications topping the list of sectors where it could go awry.
After all, no one wants to have a medical treatment denied because the right medicine wasn't bought at the nearest pharmacy. No one wants their financial credit limited based on how well their phone battery is kept charged on a daily basis (and this is one of the micro-information signals considered by credit companies in China), just as likely no one wants their shopping history pulled up every time they enter a store. As for military applications, let's leave those with James Cameron's Terminator movie franchise for now (#illbeback).
However, the bottom line is that AI is revered as this new, shiny tool for humanity, with the ability to help us with countless serious issues. Currently, there are many developers building AI applications for the fight against COVID-19, and no one is unnerved by this, given its obvious urgency.
Hence, any regulation of AI so far seems to have been either rushed or too generic. Studies pop up almost every month with frameworks, methods and some logic on how to approach such a Herculean task, but the best ones converge on similar base rules: don't discriminate; don't use it to promote harm or hate; listen to the experts in the field where the AI application is being implemented; test, fail quickly and test again, but don't deploy it all at once. After all, this is something serious.
We're still at the beginning of understanding how this whole new world of AI will work. Like fire, oil, space travel, the telephone and the internet before it, we're going to fail at some point. Regulation will always be late, but that is our approach to innovation, and it seems to be working, sort of.
We will be better off if regulators focus on teaming up with developers, industry experts, case studies and the ecosystem as a whole. They will need to rely on self-regulation, or risk being not just late but badly wrong in this new season of the global innovation race.
Start your engines.