
Stein Hansen, founder and CEO of OpenSky Consulting, explores the state of artificial intelligence and the companies and stakeholders coming to grips with its countless applications. How will AI be used to enhance our daily lives and what risks does it pose when mishandled or abused?

Every now and then, new buzzwords and acronyms take over the tech world – and these days Artificial Intelligence, or AI, is one of the hottest around. In fact, AI has been with us for decades: the term was coined in the mid-1950s. After a long Cinderella sleep in the public consciousness, however, AI has had a reawakening over the last decade, driven mainly by significantly increased computing power and access to massive amounts of data.

AI exists (or will exist) in various forms, some of which are still science fiction, like “general AI” or “strong AI” or whatever you may want to call it – i.e. properly simulating human intelligence, autonomously and across problem domains, in complex scenarios. Most current AI applications are limited to solving specific problems in specific domains, e.g. playing chess, or recognizing voice, text or images through some form of Machine Learning – i.e. where a machine learns by running complex learning algorithms over massive amounts of data, performing correlation, classification and the like.
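To make the “narrow AI” point concrete, here is a minimal sketch (in Python, using the scikit-learn library – the dataset and model choice are purely illustrative) of such a machine learning setup: a classifier that learns one specific task, recognizing small images of handwritten digits, from thousands of labelled examples – and can do nothing else:

```python
# A minimal, illustrative "narrow AI" example: train a classifier on
# labelled images of handwritten digits. It learns this one task only.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # 8x8-pixel digit images + labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

However impressive the accuracy, the model has no understanding beyond the single task it was trained for – which is exactly the gap between today’s “narrow” AI and the hypothetical “general AI”.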

Today, “everyone” wants to use and capitalize on AI. Various kinds of consumer products already claim to have some form of AI in them – typical applications today being chatbots, smart assistants and intelligent home speakers like Google Home, which recognize text or voice and correlate it with huge databases of information. We also see healthcare products, home appliances and much more – and at CES 2020 (which took place just before the global coronavirus outbreak) there were, once again, many announcements around AI. As consumers we are already used to consumer-facing chatbots for customer service – and for the world’s most popular consumer device, the mobile handset, some claim that the next wave of user interaction will be through voice, potentially dropping e.g. keypads altogether. Autonomous vehicles are also among the great hypes of the day – and will clearly need AI.

In the B2B area – e.g. in an area close to my heart, mobile network planning and operation with 5G technology – AI seems unavoidable due to the complexity and dynamic nature of the challenge, and providers of network equipment are all integrating AI into their products. In a 5G world, the network architecture is complex – and there may well be not one total management system for the whole network, but “sub-systems” that each manage parts of it. When using AI in network configuration and management, operators need to exercise great care and maintain control. If parts of the network “manage themselves”, it is critical that the different parts do not manage themselves into conflict, creating instability or a self-generated Denial-of-Service. It is thus important that there is also some holistic AI with a total network perspective.
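As a thought experiment, the sketch below (all names and the conflict rule are hypothetical – this is not any vendor’s actual API) shows the idea of a holistic coordinator that reviews configuration changes proposed by per-domain AI controllers, so that sub-systems cannot “fight” each other over the same parameters:

```python
# Hypothetical sketch: a holistic coordinator arbitrating configuration
# changes proposed by autonomous per-domain AI controllers.
from dataclasses import dataclass

@dataclass
class ProposedChange:
    domain: str       # e.g. "RAN", "core", "transport"
    parameter: str    # e.g. "cell_tx_power_dbm"
    new_value: float

class HolisticCoordinator:
    def __init__(self) -> None:
        # parameter -> (domain that last set it, value it was set to)
        self.applied: dict = {}

    def review(self, change: ProposedChange) -> bool:
        """Accept a change only if no other domain has set the same
        parameter to a conflicting value (deliberately simplistic)."""
        prior = self.applied.get(change.parameter)
        if prior and prior[0] != change.domain and prior[1] != change.new_value:
            return False  # conflict: escalate to a network-wide policy
        self.applied[change.parameter] = (change.domain, change.new_value)
        return True

coordinator = HolisticCoordinator()
print(coordinator.review(ProposedChange("RAN", "cell_tx_power_dbm", 40.0)))   # True
print(coordinator.review(ProposedChange("core", "cell_tx_power_dbm", 30.0)))  # False
```

A real coordinator would of course reason about time windows, dependencies between parameters and network-wide KPIs – but the principle of a final, holistic check is the point.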

Although many of us can see the potential benefits of AI as users or customers (and yes: there are also threats and risks – see below), something not often talked about is the ecosystem battle involved. An expression often used these days is “data is the new oil”. This is very true – and companies are positioning themselves widely in the area, the most prominent being Google, which collects data everywhere into huge data centres and uses it to power its search engine and services. User data is collected (anonymously or not) by many – a large number of companies know much more about you than you realize – and AI is often used to capitalize on it in consumer products and services.

Not only Google, but all the OTTs or big tech companies – Apple, Amazon, Facebook, Microsoft etc – have clearly planned a future heavily based on AI. Google announced its “AI first” strategy as far back as three years ago and aims to dominate AI (see also this article) – and we see them all competing in this arena, e.g. by providing smart assistants like Google Assistant, Amazon Alexa, Apple Siri, Samsung Bixby, Huawei Assistant etc. On the network side, we also see telco network suppliers like Ericsson offering AI across their network solutions. This is welcome – and may result in highly optimized networks – however, it also potentially pre-empts interoperability with equipment from other network vendors.

Over the last year or two, we have seen a geopolitical battle between the US and “everyone”, most notably between the US and China – but playing out across the world as well. The official concern has been cyber security, but in the larger picture it is really also about technological leadership and digital sovereignty. We have seen US bans on Huawei and ZTE, we have seen Russia wanting to establish its own internet – and we have seen many political leaders wanting to “take the lead on 5G”, not only in the US, but also in the EU, China, Japan and more (the latter even on 6G – see also my earlier article). We also see the EU and various European countries working on regulatory measures against the (generally US-based) globally dominant OTTs – “digital taxes”, GDPR, competition fines etc – and it seems likely that AI is the next geopolitical battleground. While the leading regions of the world on AI are the US and arguably also China and parts of Asia, the EU has developed its own strategy for AI – and even my own country, Norway, has a national AI strategy in place. The aim, in general, is to be “leading” on this “new” technology. See also the latest developments on geopolitics from China on 5G, AI and more (again in the same earlier article).

Now to some matters of concern around AI. These are the same today as they were decades ago – and relate to whether we trust decisions based on AI or not – or, as in some old science-fiction movies, whether the machines will take over the world. For purely technical applications like 5G network planning or operation, AI could in the worst case – should it make wrong decisions – result in network misconfiguration, non-optimal performance or even network outage (serious enough for critical infrastructure). The challenge would then be how to correct it, as the reason why it went wrong might be unknown – if the AI is an autonomous “black box”.

On top of the privacy issues referred to above, for applications with direct consumer impact the consequences could be serious from a personal perspective as well. How can we know that AI produces something objective, non-biased and non-discriminating – e.g. were you refused based on your gender, race, or sexual or political orientation? Of course, every decision made by humans today is more or less biased as well. When decision-making is automated, however, how can we know, or have transparency on, how decisions are made? And where does the bias sit – in the data forming the basis for the AI, or in the algorithms used? We know that search engines put paid placements at the top of the results list – and you could imagine, in a wider context, that not only individuals or various types of lobbyists but also governments could misuse AI to push their political agenda, create fake news etc. In any case, it should not matter whether a decision is made by humans or machines: someone should be accountable or liable. AI cannot be a black box. Decisions need to be auditable and correctable – a minimal illustration follows below. “Ethical AI” is thus an important topic today – and, as an example, the EU has created an “Ethical AI guideline”. The EU has a strong wish to issue AI regulations, most likely in 2020, to cover the issue – and issued a White Paper preparing for this in February 2020.
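As that minimal illustration (everything here is hypothetical – a toy decision rule, not a real scoring model), an automated decision can at least be wrapped so that its inputs, model version and outcome are recorded and can later be audited or challenged:

```python
# Illustrative sketch: log every automated decision with its inputs and
# model version, so decisions can be audited and challenged afterwards.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_decision(model_version: str, features: dict, decide) -> bool:
    outcome = decide(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model made the call
        "features": features,             # what the decision was based on
        "outcome": outcome,               # what was decided
    }))
    return outcome

# Toy example: a stand-in decision rule, not a real credit model
approved = audited_decision("v1.2", {"income": 52000, "debt": 8000},
                            lambda f: f["income"] > 4 * f["debt"])
```

Logging alone does not remove bias, of course – but without such a trail there is nothing to audit in the first place.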

The question, of course, is whether regulation will in the end work in support of or against the objective of technology leadership – and whether it will stifle innovation – which is particularly important for AI, a technology in many ways still in its infancy. That remains to be seen, but it is well known that the usual regulatory approaches in Europe and in the US are very different: in general, the US tends to be more hands-off while Europe tends to go somewhat further. See also some speculation from earlier this year from Access Partnership – covering e.g. “horizontal” versus “vertical” AI regulation, i.e. general versus sector-specific regulation. What seems clear, however, is that AI needs to be regulated somehow – even Google’s boss Sundar Pichai recently called for AI regulation. See also a GSMAi commentary on it. In my view, this issue will not be settled any time soon! It will probably (and hopefully) be an evolving topic – but let us make sure we don’t kill it in the cradle!

One or two years ago, there was some press around the privacy and social acceptance of consumer-facing AI in smart speakers – and Apple faced some scrutiny over it. The worry was that “always-on listening devices could soon be everywhere”, raising questions like: Do people want to be listened to at all times? And potentially be exposed to surveillance? And by whom? Although the debate has calmed down today, the concern remains. On the other hand, a study from two years ago by Oracle and Future Workplace found that 93% of people would trust orders from a robot at work. The same study also found that consumers do not trust autonomous driving technology (and personally: neither would I – yet – maybe never?). Finally, on a personal note, I hate being exposed to chatbots, be it on the web or in a call centre. I normally want to talk to a person – and I want someone to blame or to escalate to.

The final issue I will comment on in this article (it is probably too long already) is how AI may work for you or against you in terms of fraud and security – seen mostly from the perspective of a mobile network operator. AI will be an important part of the 5G era – and should, from a security perspective, be considered in three areas: 1) risks in using AI for optimization, e.g. in setting up and managing the network (see the comments above), 2) use of AI to detect abnormal behaviour, e.g. detecting fraud or attacks on the network (a small sketch of this follows below) – and 3) use of AI to attack vulnerabilities (as a hacker would). It is unavoidable that AI will be used against us for fraud and security, but AI will probably also be needed to fight it!
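To illustrate area 2) – detecting abnormal behaviour – here is that small sketch, using an off-the-shelf anomaly detector (scikit-learn’s IsolationForest); the subscriber features and numbers are entirely invented:

```python
# Illustrative sketch: flag unusual traffic patterns that might indicate
# fraud, using an unsupervised anomaly detector on invented features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-subscriber features:
# [calls per day, average call minutes, distinct destinations per day]
normal_traffic = rng.normal(loc=[20, 3, 10], scale=[5, 1, 3], size=(1000, 3))
suspicious = np.array([[400, 0.5, 350]])  # e.g. a SIM-box-like pattern

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)
print(detector.predict(suspicious))  # [-1] means flagged as anomalous
```

In practice an operator would feed such alerts into a fraud-management workflow for human review rather than act on them automatically.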

AI relies on collecting large amounts of real data, not dummy data, for the purpose of automated decision making. This obviously requires clear privacy considerations when user data is collected – and there are obvious security and privacy threats if such data is intercepted by an eavesdropper. There are also obvious threats to the integrity of the data. The learning process could be manipulated, e.g. by inserting false or malicious data (“model poisoning”), with potentially dire consequences. It may therefore be necessary to have a clearly supervised and controlled learning process – and definitely not something simply left up to algorithms and processes defined by vendors, without any controls.
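As one small example of such controls (purely illustrative – real poisoning defences are a research field of their own), incoming training data can at least be sanity-checked against a trusted reference distribution before it ever reaches the learning pipeline:

```python
# Illustrative sketch: reject training samples that deviate wildly from
# a trusted reference distribution - a crude first line of defence
# against poisoned or malicious data.
import numpy as np

def filter_training_batch(batch: np.ndarray, reference: np.ndarray,
                          max_sigmas: float = 4.0) -> np.ndarray:
    mean = reference.mean(axis=0)
    std = reference.std(axis=0) + 1e-9          # avoid division by zero
    z_scores = np.abs((batch - mean) / std)
    keep = (z_scores < max_sigmas).all(axis=1)  # drop any sample with an extreme feature
    return batch[keep]

trusted = np.random.default_rng(1).normal(size=(500, 4))
incoming = np.vstack([trusted[:10], np.full((2, 4), 50.0)])  # two "poisoned" rows
clean = filter_training_batch(incoming, trusted)
print(f"{len(incoming)} samples in, {len(clean)} kept")      # the extreme rows are dropped
```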

The above post originally appeared on the OpenSky Website and is republished here with kind permission

Stein Hansen

Founder and CEO, OpenSky Consulting

 
