Responsible Artificial Intelligence
November 25th, 2020 | 9:00-10:00
The main challenge that artificial intelligence research faces today is how to guarantee the development of responsible technology and, in particular, how to guarantee that autonomy is responsible. Social fears about the actions taken by AI can only be appeased by providing ethical certification and transparency of systems. However, this is certainly not an easy task. As we know very well in the multiagent systems field, the accuracy with which system outcomes can be predicted has limits, as multiagent systems are in fact examples of complex systems. And AI will be social: there will be thousands of AI systems interacting among themselves and with a multitude of humans; AI will necessarily be multiagent.
Although we cannot provide complete guarantees on outcomes, we must be able to define with accuracy what autonomous behaviour is acceptable (ethical), to provide repair methods for anomalous behaviour, and to explain the rationale of AI decisions. Ideally, we should be able to guarantee responsible behaviour of individual AI systems by construction.
By an ethical AI system I understand one that is capable of deciding which norms are most appropriate, of abiding by them, and of making them evolve and adapt. The area of multiagent systems has developed a number of theoretical and practical tools that, properly combined, can provide a path to developing such systems, that is, to building ethical-by-construction systems: agreement technologies to decide on acceptable ethical behaviour, normative frameworks to represent and reason about ethics, and electronic institutions to operationalise ethical interactions. Throughout my career, I have contributed tools in these three areas. In this keynote, I will describe a methodology to support their combination that incorporates some new ideas from law and organisational theory.
Carles Sierra is a Research Professor at the Artificial Intelligence Research Institute (IIIA-CSIC) in the Barcelona area, and is currently the Director of the Institute. He is the current President of EurAI. He received his PhD in Computer Science from the Technical University of Catalonia (UPC) in 1989 and has been doing research on Artificial Intelligence topics ever since. He has been a visiting researcher at Queen Mary and Westfield College in London (1996-1997) and at the University of Technology Sydney for extended periods between 2004 and 2012. He is also an Adjunct Professor at Western Sydney University. He has taught postgraduate courses on different AI topics at several universities, including Université Paris Descartes, the University of Technology Sydney, Universitat Politècnica de València, and Universitat Autònoma de Barcelona.
He has contributed to agent research in the areas of negotiation, argumentation-based negotiation, computational trust and reputation, team formation, and electronic institutions. These contributions have materialised in more than 300 scientific publications. His current work gravitates around the use of AI techniques for education and the social applications of AI. He has served the MAS research community as General Chair of the AAMAS conference in 2009 and Program Chair in 2004, and as Editor-in-Chief of the Journal of Autonomous Agents and Multiagent Systems (2014-2019). He has also served the broader AI community as Local Chair of IJCAI 2011 in Barcelona and as Program Chair of IJCAI 2017 in Melbourne. He has been on the editorial boards of nine journals and has served as an evaluator of numerous calls and a reviewer of many projects within the EU research programmes. He is an EurAI Fellow and was President of the Catalan Association of AI from 1998 to 2002.
Introduced by Piero Poccianti
No ontology without Ontology: the role of formal ontological analysis in AI (and beyond)
November 26th, 2020 | 9:00-10:00
Computational ontologies (with the lowercase o) nowadays play a well-recognised role in knowledge-based systems, and in information systems in general. They often have a deliberately simple structure, mainly limited to taxonomic relationships among terms. In this crude form, they may be useful for providing simple inferential services to users who already know the meaning of such terms, but they are not able to account for the subtle ways language reflects people's assumptions about the nature and structure of the world. On the other hand, Ontology (with the capital o) is a branch of philosophy whose subject matter is exactly the nature and structure of the world. In particular, Formal Ontology (which has undergone a major revival in philosophy in recent decades) studies the most general distinctions and relationships that can be used to describe the world in a rigorous, logical way. In this talk I will show how and why computational ontologies need Ontology, and Formal Ontology in particular, and I will present some of its recent results and open challenges.
Nicola Guarino graduated in electronic engineering from the University of Padua in 1978. Since the 1990s he has been studying the foundations of knowledge representation and conceptual modelling, and has been an international leader in establishing a rigorous approach to ontological analysis, in a strongly interdisciplinary perspective that combines computer science, philosophy, and linguistics. He is the author of numerous highly cited articles. Among the best-known results of his laboratory are the OntoClean methodology and the DOLCE foundational ontology. His most recent research interests concern the ontology of services, socio-technical systems, and e-government. He has been General Chair of the Formal Ontology in Information Systems (FOIS) conference, founder and editor-in-chief (with Mark Musen of Stanford University) of the journal Applied Ontology, founder and first president of the International Association for Ontology and its Applications, and a member of the editorial boards of the Journal on Data Semantics and the Frontiers in Artificial Intelligence and Applications series (IOS Press). He is also a fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI).
Introduced by Chiara Ghidini
Thinking Fast and Slow in AI
November 27th, 2020 | 14:00-15:00
AI systems have seen dramatic advancement in recent years, bringing many successful applications that are pervading our everyday life. However, we are still mostly seeing instances of narrow AI, tightly linked to the availability of huge datasets and computational power. State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of intelligence, especially if we compare these AI technologies to what human beings are able to do: generalizability, robustness, explainability, causal analysis, abstraction, common sense reasoning, ethics reasoning, as well as a complex and seamless integration of learning and reasoning supported by both implicit and explicit knowledge. We argue that a better comprehension of how humans have obtained, and have evolved to obtain, these advanced capabilities can inspire innovative ways to imbue AI systems with the same competencies. To this end, we propose to study and exploit cognitive theories of human reasoning and decision making (with a special focus on Kahneman's theory of thinking fast and slow) as a source of inspiration for the causal origin of these capabilities, helping us raise the fundamental research questions to be considered when trying to provide AI with the dimensions of human intelligence that it currently lacks.
Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She works at the T.J. Watson IBM Research Lab, Yorktown Heights, New York. Prior to joining IBM, she was a professor of computer science at the University of Padova, Italy.
Francesca’s research interests focus on artificial intelligence, and specifically include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems, in particular decision support systems for group decision making. On these topics, she has published over 200 scientific articles in journals and conference proceedings, and as book chapters.
She is a fellow of both the worldwide AI association (AAAI) and the European one (EurAI). She has been President of IJCAI (the International Joint Conference on AI), an executive councillor of AAAI, and Editor-in-Chief of the Journal of AI Research.
She is a member of the scientific advisory board of the Future of Life Institute (Cambridge, USA) and a deputy director of the Leverhulme Centre for the Future of Intelligence (Cambridge, UK). She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners. She has been a member of the European Commission High-Level Expert Group on AI and the General Chair of the AAAI 2020 conference. She will be the AAAI president in 2022-2024.
At IBM, she is the PI of exploratory research projects, some in collaboration with MIT and RPI, on topics that range from embedding ethical principles into AI decision making to exploiting cognitive theories of human decision making to define more flexible, robust, and general AI systems. She also co-leads the internal IBM AI Ethics Board, which coordinates the governance of AI ethics across the whole company.
Introduced by Stefania Bandini