There is increasing interest in ensuring that Artificial Intelligence (AI) systems are not only ethically sound but also make decisions that are safe and beneficial to humanity. Traditionally, this problem is framed as embedding human values in the decision architecture of an AI system. But if we extend the problem to embedded AI – that is, smart robots – how can we begin to embed human values in a robot?
One answer to this question is to look, cognitively, at what moral decision-making constitutes, and then attempt to model the mechanism that brings about this phenomenon. Bringing together perspectives from psychology and philosophy as well as robotics and computing, this workshop aims to begin building a community of researchers, academics, and industry practitioners interested in addressing this problem. The following questions will drive the workshop:
- What constitutes ‘moral decision-making’?
- Why (if at all) do we need machines that can tell right from wrong?
- What is the best way to model morality?
- What further research needs to be done to develop machines with moral agency?
The day will be composed of five keynote presentations followed by a hackathon-style session in which we look to answer some of these questions.
Event programme
09:00 | Arrival and registration |
09:30 | Host introduction and workshop overview |
10:00 | 1st keynote presentation |
10:45 | Break |
11:00 | 2nd keynote presentation |
11:45 | 3rd keynote presentation |
12:30 | Lunch and university tours |
13:30 | 4th keynote presentation |
14:15 | 5th keynote presentation |
15:00 | Break |
15:15 | Discussion and consolidation exercise |
17:00 | Close |
18:30 | Reconvene in Milton Keynes centre for evening meal |
Location and travel details
Ideas Space, AIRC, Cranfield University, College Road, Cranfield, Bedfordshire, MK43 0AL
Who should attend
Anyone with an interest in this area.
Cost
Free to attend.
How to register