[Turing-Southampton] Call for papers for WebSci'20 Workshop: Explanations for AI: Computable or Not?
Susan Davies
sdd1 at soton.ac.uk
Fri Apr 3 13:10:13 BST 2020
**apologies if you receive this more than once**
Call for papers for WebSci'20 Workshop: Explanations for AI: Computable or Not?
https://git.soton.ac.uk/nt1n16/exAI2020
CFP: https://easychair.org/cfp/exAI2020
This workshop (hosted by the WebSci'20 conference, https://websci20.webscience.org/) will focus on socially-sensitive decisions made or assisted by AI systems, which often rest on complex (e.g. machine learning) and opaque (so-called black-box) underlying decision-making processes. The aim is to stimulate a lively debate on whether explanations for AI are computable by bringing together researchers, practitioners and representatives of AI (or AI-assisted) decision-making systems. We invite short position papers of no more than three pages on topics such as:
* Critiques and advantages of explanations for AI, including the extent to which explanations can or should be made computable.
* Use cases, scenarios and/or practical experience of explanations for AI, such as: the rationale, technologies and/or organisational measures used; and accounts from different perspectives – e.g. software designers, implementers and those subject to automated decision-making.
* Legal requirements for explanations, and the extent to which data ethics may drive explanations for AI.
* Reflections on the similarities and differences between explanations for AI decisions and for manual decisions, as well as what makes a ‘good’ explanation and the etymology of explanations for socially-sensitive decisions.
* Lessons from other related areas, such as challenges faced in the areas of computable contracts and compliance automation.
Successful authors will have the opportunity to showcase their work in the form of posters at the joint conference and workshop poster reception on 8 July. Selected opinion pieces will be invited for publication in the next issue of Computer Law & Security Review (https://www.journals.elsevier.com/computer-law-and-security-review).
Important dates
* Deadline for submission of position papers: 30 April 2020
* Notification of acceptance/rejection: 25 May 2020
* Workshop: 7 July 2020
Description of Workshop theme
Automated decision making continues to be used for a variety of purposes within a multitude of sectors. Ultimately, what makes a 'good' explanation is a focus not only for the designers and developers of AI systems, but for many disciplines, including law, philosophy, psychology, history, sociology and human-computer interaction. The principal objective of this workshop is to build a cross-sectoral, multi-disciplinary and international network of people focusing on explanations for AI, and an agenda to drive this work forward.
A key goal is to uncover the key arguments for and against the computability of explanations for AI in decision-making that is likely to have major impacts on individuals (socially-sensitive decision-making). In view of the growing complexity and opacity of underlying decision-making processes and the proliferation of automated decision-making systems, it is unsurprising that the notion of explainability is receiving close attention, particularly in light of the GDPR, which gave rise to the explainability debate. While explanations are of critical importance for all socially-sensitive decisions (i.e. regardless of whether they are reached through manual or automated processes), this workshop focuses specifically on socially-sensitive decisions made or assisted by AI systems, which often rest on complex (e.g. machine learning) and opaque (so-called black-box) underlying decision-making processes.
We ask participants to consider whether explanations for AI can be computable. For the purposes of this workshop, we define a 'computable explanation' as follows: explanation criteria derived from applicable legal and governance frameworks are translated into a set of rules that can be processed by explanation-generating algorithms. We are considering the following key issues:
1. The extent to which the process that generates explanations for AI can and should be automated. For example: what are the key methodologies and the principal technical, legal and organisational challenges in generating computable explanations? How does the generation process itself remain accountable? Does it require meaningful human involvement?
2. The principal benefits and limitations of computable explanations in comparison to non-computable explanations for AI as well as other methods for accountability.
Attendees are invited to submit short position papers of no more than three pages to the workshop organisers, who will review them and decide on inclusion in the workshop. Authors of accepted papers will have the opportunity to present their ideas in 10 minutes during the workshop. We hope to stimulate a lively debate on whether explanations for AI are computable by providing time for an interactive discussion after each paper.
Organisers
Professor Sophie Stalla-Bourdillon, Interdisciplinary Centre for Law, Internet and Culture (iCLIC), University of Southampton, Southampton, UK
Professor Luc Moreau, Department of Informatics, King’s College London, London, UK
Dr. Laura Carmichael, Interdisciplinary Centre for Law, Internet and Culture (iCLIC), University of Southampton, Southampton, UK
Niko Tsakalakis, Web and Internet Science (WAIS), University of Southampton, Southampton, UK
Dong Huynh, Department of Informatics, King’s College London, London, UK
Dr. Ayah Helal, Department of Informatics, King’s College London, London, UK
All questions about submissions should be emailed to Niko Tsakalakis: N.Tsakalakis at southampton.ac.uk
======================================
________________________________
From: Susan Davies <sdd1 at soton.ac.uk>
Sent: 03 April 2020 12:00
To: Tsakalakis N. <N.Tsakalakis at southampton.ac.uk>
Subject: RE: WebSci workshop
Hi Niko
Trying to keep sane! Hope you’re safe and well.
Happy to circulate this for you, but would you mind putting what you’d like me to send round in the body of an email, so I can then simply forward it on? Also, note that the conference and workshops will now all be online, so you may wish to update the text about this in your Call. The notice has just been put on the website:
https://websci20.webscience.org/
Best,
Susan
__________________________________
Susan Davies
Coordination Manager, Web Science Institute<https://www.southampton.ac.uk/wsi/index.page?>
University Liaison Manager, The Alan Turing Institute<https://www.southampton.ac.uk/wsi/alan-turing-institute/alan-turing-institute.page>
Room 3041, Building 32
Web Science Institute
University of Southampton
Southampton SO17 1BJ
023 8059 3523 | 07768 266464
From: Tsakalakis N. <N.Tsakalakis at southampton.ac.uk>
Sent: 03 April 2020 11:53
To: Susan Davies <sdd1 at soton.ac.uk>
Subject: WebSci workshop
Hi Susan,
I hope you're well and sane during the lock-down!
I've seen the workshops' cfp you have been kindly sending around and I was wondering if there's a way I can get in on the action. Our workshop's details are at https://git.soton.ac.uk/nt1n16/exAI2020 but if you'd like something more than a copy/paste from there let me know and I can draft something.
If this is in your to-do list already, obviously feel free to ignore this email 🙂
Kind regards,
Niko Tsakalakis
Postgraduate student
Web Science Centre for Doctoral Training
University of Southampton
_________________________________
This e-mail and its attachments are intended for the above named only and may be confidential. If they have come to you in error you must not copy or show them to anyone, nor should you take any action based on them, other than to notify the error by replying to the sender.
Please consider the environment - do you really need to print this email?
More information about the Turing-Southampton mailing list