Intelligent systems are increasingly part of our lives. They are useful in many areas and help us make decisions. Hence the growing call to develop an ethical, explainable, reliable, and transparent Artificial Intelligence, driven in part by the commitment of the European Union and the research community to establish an ethical regulatory framework and set priorities for the advancement of the field.
A professor in the area of Computer Science and Artificial Intelligence can contribute to public confidence in AI. In the first part of the interview, he explains why it is essential to have explainable algorithms and how they are developed technically.
What is Reliable and Explainable Artificial Intelligence?
Today, Artificial Intelligence is a technology present in practically every area, helping us make decisions and discover patterns, usually from large amounts of data.
For that learning, and the information it produces, to be useful, it is essential that humans can understand it. The European Union has opted for an ethical, reliable, responsible, and explainable Artificial Intelligence.
How Can We Get Fair and Explainable Algorithms?
There are several lines we can investigate. Still, the general idea is that algorithms stop being the black boxes they are today and allow some degree of explanation, auditability, preservation of privacy, or guarantee of sustainability. Efforts are being made on many models, and more research is needed. It is a rapidly evolving field.
In my research group, we work in different areas. For example, we have developed algorithms that work in distributed environments and aim to guarantee the privacy of each node's data. We can learn from the data of all the nodes simultaneously, but without sharing it and without sending it over the network to gather it in the cloud or on any central node. What is communicated are the parameters of the algorithm.
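The parameter-sharing idea described above can be illustrated with a minimal federated-averaging sketch. This is not the group's actual algorithm, just a hypothetical setup: three nodes each hold private data from the same linear relation, train locally, and exchange only model parameters, never the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three nodes, each holding private samples drawn
# from the same underlying relation y = 2*x + 1 (plus a little noise).
nodes = []
for _ in range(3):
    x = rng.uniform(-1, 1, size=(50, 1))
    y = 2 * x + 1 + rng.normal(0, 0.05, size=(50, 1))
    nodes.append((x, y))

def local_update(w, b, x, y, lr=0.5, steps=20):
    """Gradient-descent steps run entirely on one node's private data."""
    for _ in range(steps):
        err = x * w + b - y
        w -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)
    return w, b

# Each round, nodes train locally and communicate only their parameters;
# the central step merely averages them.
w, b = 0.0, 0.0
for _ in range(10):
    params = [local_update(w, b, x, y) for x, y in nodes]
    w = float(np.mean([p[0] for p in params]))
    b = float(np.mean([p[1] for p in params]))

print(w, b)  # converges near the true slope 2 and intercept 1
```

The only values that ever cross the network boundary in this sketch are `w` and `b`; each node's `(x, y)` arrays stay local.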
We have also worked on the explanatory side of the algorithms. We have incorporated into the algorithm's evaluation metric the number of variables used to build the explanation. The aim is to keep the performance of complex algorithms while placing an understandable model on top of them, so that a human can interpret the result.
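One simple way to make "number of variables" part of the evaluation metric, sketched below under illustrative assumptions (a least-squares model, a penalty weight `LAMBDA` chosen arbitrarily): score each candidate feature subset by its fit quality minus a cost per variable, so compact, explainable models win unless extra variables pay for themselves.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical data: 5 candidate features, but only the first two
# actually drive the target.
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, size=200)

def r2_with_subset(subset):
    """Least-squares fit on a feature subset; return R^2 of that fit."""
    A = np.column_stack([X[:, list(subset)], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

LAMBDA = 0.05  # illustrative cost per variable used in the explanation

# Penalized metric: fit quality minus a charge for every variable used.
best = max(
    (s for k in range(1, 6) for s in combinations(range(5), k)),
    key=lambda s: r2_with_subset(s) - LAMBDA * len(s),
)
print(best)  # the penalized score favors the compact subset (0, 1)
```

Exhaustive subset search is only viable for a handful of features; the point is the metric itself, which trades a little accuracy for a shorter, human-readable explanation.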
What Profiles Are Needed to Address the Explainability of AI?
We would need to be able to work in more diverse teams, which is not very common today. Still, it is important that we learn to work with people from other areas, such as the Sociology of Law, to address the more ethical aspects and the development of algorithms that are good for people. They can help us integrate this change from a more social perspective.
How Does Explainable Artificial Intelligence Benefit Us?
There are areas where most of the algorithms we use, such as deep learning models, are quite opaque: very powerful from an accuracy point of view, but not very interpretable. I believe explainability in Artificial Intelligence is a particular requirement that is perhaps not necessary in some areas, while in others it is.
We must learn to develop algorithms that are more transparent, or at least auditable: from knowing whether the data we feed in is biased, to understanding the entire algorithm or just its output.
Transparency can have different degrees. Explainability of the algorithms would be the highest degree, in which the algorithm or its results must be understandable to a layperson.
It is one of the issues included in the General Data Protection Regulation (GDPR), which states that a person has the right to receive an understandable explanation when they are affected by a decision made by an Artificial Intelligence algorithm.
In Which Areas is Explainable AI Most Necessary?
We speak of sensitive or high-risk areas, and these have yet to be defined precisely, because we do not want to restrict the research and development of algorithms that can be very accurate. I think the issue is not that we will sacrifice accuracy for the sake of explainability, but rather that we will try to strike a balance.
A sensitive area can be health, where a person is affected by a decision determining what treatment they receive or what diagnosis they face. The same goes for areas related to fintech, insurance, loan approvals, legal matters, and so on.
In other areas that are much less sensitive for people, and in which we use Artificial Intelligence every day, it is probably not so important. As I said, I think we will try to achieve a balance and maximize the explainability of Artificial Intelligence in sensitive areas.
Would These Measures Help Society To Accept and Incorporate AI?
It is essential to build trust. Sometimes the media or movies portray Artificial Intelligence in ways that make citizens distrust it. Some of the incidents we have seen, mostly related to data privacy, have created confusion and a certain need for people to protect themselves. So we see that some tools, such as Radar COVID, are not being adopted by the population, perhaps partly because of that mistrust.
Citizens must understand that Artificial Intelligence is at their service, and for that, it is essential that it truly be so. We need to modernize the Public Administration and convey this idea of a much more reliable AI, and I think this is catching on in Europe little by little. The same will probably happen in other regions such as the US, where we have witnessed scandals involving the transfer of data, privacy, companies backtracking on projects, and so on.
I think it is essential that we raise citizens' awareness. The more educated we are about the capabilities and limitations of current Artificial Intelligence, the more we will trust the technology, and I believe we will be able to offer better tools.