Ethics in AI Seminar: Responsible Research and Publication in AI


Ethics in AI Seminar - presented by the Institute for Ethics in AI 

 

https://www.youtube.com/embed/Flr1wgCn7tk

 

Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University

 

● What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work?

● What does it mean to conduct and publish AI research responsibly?

● What challenges does the AI community face in reaching consensus about responsibilities, and adopting appropriate norms and governance mechanisms?

● How can we maximise the benefits while minimising the risks of increasingly advanced AI research?

AI and related technologies are having an increasing impact on the lives of individuals, as well as on society as a whole. Alongside many current and potential future benefits, there has been an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. In addition, a growing number of research publications have caused an outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct, and to ensure beneficial outcomes from deployed systems. But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centering societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields? In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb will discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research and innovation in practice.

Speakers

 


Rosie Campbell

Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organizer of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was the Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, Rosie worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK. There, she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master’s in Computer Science and a Bachelor’s in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of ‘100 Brilliant Women to follow in AI Ethics.’

 


Dr Carolyn Ashurst

Carolyn is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, she worked as a data and research scientist in various roles within government and finance. She holds an MMath and a PhD from the University of Bath.

 


Dr Helena Webb

Helena is a Senior Researcher in the Department of Computer Science at Oxford. She is an interdisciplinary researcher and specialises in projects that bridge social science and computational analysis. She is interested in the ways that users interact with technologies in different kinds of settings, and in how social action both shapes and is shaped by innovation. She works on projects that seek to identify mechanisms for the improved design, responsible development and effective regulation of technology. Whilst at Oxford she has worked on projects relating to, amongst others, harmful content on social media, algorithmic bias, resources in STEM education, and responsible robotics. Helena is the Research Lead at the newly formed Responsible Technology Institute in the Department of Computer Science. She also co-convenes student modules in the Department on Computers in Society and Ethics and Responsible Innovation.

Chair

 


Professor Peter Millican

Peter is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published across a wide range of areas, including Early Modern Philosophy, Epistemology, Ethics, and the Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012.

 

 

Find out more about the full Institute for Ethics in AI programme here.