12th Oxford Workshop on Global Priorities Research

19–20 June 2023, Oxford

Topic

Global priorities research investigates the question, ‘What should we do with our limited resources, if our goal is to do the most good?’ This question has close connections with central issues in philosophy and economics, among other fields.

This event will focus on the following two areas of global priorities research:

  1. The longtermism paradigm. Longtermism claims that, because of the potential vastness of the future of sentient life, agents aiming to do the most good should focus on improving the very long-run future, rather than on more immediate considerations. We are interested in articulating, considering arguments for and against, and exploring the implications of this longtermist thesis.
  2. General issues in cause prioritisation. We will also host talks on various cause prioritisation issues not specific to longtermism—for instance, issues in decision theory, epistemology, game theory, and optimal timing/optimal stopping theory.

These two categories of topics are elaborated in more detail in Sections 1 and 2 (respectively) of GPI’s research agenda.

Agenda

Day 1, Plenaries

Time   Session
10:15  Introduction and panel on research groups in global priorities research
17:00  Animal welfare panel


Talk details

Monday 19 June, 2023

10:15 – Introduction and panel on research groups in global priorities research 

In this panel, you will hear from leaders of six research groups within global priorities research about who they are and what they study. These groups include the Global Priorities Institute, EA Psychologists, the Legal Priorities Project, the Population Wellbeing Initiative, the Wild Animal Welfare Program, and the Mind, Ethics, and Policy Program. 

12:10 – Rush Stewart, “Choice, Freedom, and Norms: Outline of a Theory of Coercive Menu Expansion” 

A common objection to legalizing certain types of markets—in sex, organs, some sorts of medical care—is that it would result in the coercion of some participants. This complaint raises a general puzzle: how can expanding the set of options an agent has be coercive? I propose a solution in terms of external norms that constrain choice. I axiomatically characterize norm-sensitive generalizations of two prominent ways of assessing the opportunity freedom that a set of options provides. Each assessment method, once generalized to be sensitive to external norms, witnesses the possibility that menu expansion can reduce freedom. I suggest there are lessons for thinking about policy interventions that aim to do good.

12:10 – Sangita Vyas, “Long-term population projections: Scenarios of low or rebounding fertility”

The size of the human population is projected to peak in the 21st century. But quantitative projections past 2100 are rare, and none quantify the possibility of a rebound from low fertility to replacement-level fertility. Moreover, the most recent long-term deterministic projections were published a decade ago; since then there has been further global fertility decline. Here we provide updated long-term cohort-component population projections and extend the set of scenarios in the literature to include scenarios in which future fertility (a) stays below replacement or (b) recovers and increases. We also characterize old-age dependency ratios. We show that any stable, long-run size of the world population would persistently depend on when an increase towards replacement fertility begins. Without such an increase, the 400-year span when more than 2 billion people were alive would be a brief spike in history. Indeed, four-fifths of all births—past, present, and future—would have already happened.
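
For readers unfamiliar with the method, the sketch below shows the arithmetic of a single cohort-component update step: survive each age group forward one period and add a new cohort of births. The age groups, survival shares, and fertility rates are made-up placeholders, not the parameters or scenarios used in this paper.

    # Minimal cohort-component projection step (illustrative rates only).
    def project_one_period(pop, survival, fertility):
        """Advance an age-structured population by one period.
        pop[a]       -- people in age group a
        survival[a]  -- share of group a surviving into group a+1
        fertility[a] -- births per person in group a over the period
        """
        births = sum(f * n for f, n in zip(fertility, pop))
        next_pop = [0.0] * len(pop)
        next_pop[0] = births                        # newborn cohort
        for a in range(len(pop) - 1):
            next_pop[a + 1] = pop[a] * survival[a]  # ageing and mortality
        return next_pop

    # Example with three broad age groups (young, adult, old), in millions.
    print(project_one_period([100.0, 120.0, 80.0],
                             [0.99, 0.95, 0.0],
                             [0.0, 0.8, 0.0]))

Long-run scenarios like those in the talk iterate this kind of step over many periods under different assumed fertility paths (for example, staying below replacement versus recovering to replacement).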

12:10 – Vincent Conitzer, “Foundations of Cooperative AI”

AI systems can interact in unexpected ways, sometimes with disastrous consequences. As AI gets to control more of our world, these interactions will become more common and have higher stakes. As AI becomes more advanced, these interactions will become more sophisticated, and game theory will provide the tools for analyzing these interactions. However, AI agents are in some ways unlike the agents traditionally studied in game theory, introducing new challenges as well as opportunities. We propose a research agenda to develop the game theory of highly advanced AI agents, with a focus on achieving cooperation. 

12:30 – Kevin Kuruc, “The economic consequences of depopulation”

This talk provides an overview of the economics of depopulation. First, I argue that two of the most commonly understood (welfare) costs and benefits of depopulation—climate change and innovation benefits of larger populations—are misunderstood. Instead, I argue that the welfare-relevant potential costs of small and shrinking populations come through agglomeration losses and/or ‘speed of history’ effects, the latter of which is novel to this paper. I conclude that there are strong reasons to believe a smaller population size has important ex-ante welfare costs.

12:50 – Michael Geruso, “Depopulation won’t fix itself: Why responding to depopulation will be a challenge”

In this paper we discuss the possibility that depopulation might “self correct,” with fertility rates rising to replacement or above-replacement levels without attention and investment towards this global challenge. We review the relevant social scientific facts on the role of technological advances in reproduction; improvements in longevity, morbidity, and overall living standards; high fertility religious subpopulations; and other notions of automatic demographic, individual, or social responses. The collage of evidence from history, economics, and demography offers no good reason to believe that global birth rates will self-correct to replacement levels or higher. We further discuss the history of government failures in responding to the challenge of low (and previously, high) fertility, reviewing evidence that government track records are poor—both in liberal governments that have offered relatively small supports for parents, and in authoritarian regimes that have used coercion and violence in an attempt to force people to have children they don’t want to have or not to have children they do want to have. 

12:50 – Rossa O'Keeffe-O'Donovan, “Adopt a paper: research ideas to test the longtermist hypothesis”

Economics research has the potential to further test the 'longtermist hypothesis', that some actions and policies will have large, very long run effects that are estimable today. I explain why such research is hard but potentially extremely valuable. I then set out a handful of concrete research ideas that I think are promising for further development — please come to this talk if you're interested in 'adopting' one of them!

12:50 – Shlomi Segall, “To Be (Disadvantaged) Or Not to Be: An Egalitarian Guide for Creating New People”

The late Derek Parfit held that in evaluating the future, we should ignore the difference between necessary persons and merely possible persons. This paper looks at one of the most prominent alternatives to Parfit’s view, namely Michael Otsuka and Larry Temkin’s ‘Shortfall Complaints’ view. On that view we aggregate future persons’ wellbeing and deduct intra-personal shortfall complaints, giving extra weight to the complaints of necessary persons. This paper offers a third view. It rejects Parfit’s No Difference View in that it registers a difference between necessary and possible persons. But it also rejects the Shortfall View and replaces its intra-personal complaints with an inter-personal complaints mechanism. It argues that the value of a population is its aggregate prioritarian value minus the egalitarian complaints that necessary persons hold. I show that the egalitarian view has all the explanatory power of the Shortfall view in easy cases, while significantly improving on it in three sorts of tough cases.

15:00 – Harry R. Lloyd, “The 'My Preferred Theory' approach to moral uncertainty”

In this presentation, I'll propose a new theory of appropriate choice under conditions of moral uncertainty, which I will refer to as the 'My Preferred Theory' approach. This new theory is inspired by Gustafsson and Torpman's 'My Favourite Theory' approach. However, it is designed to sidestep MacAskill, Bykvist and Ord's 'theory individuation objection' to My Favourite Theory. I'll begin the talk by motivating My Preferred Theory; in the middle, I'll spell out the details; and at the end, I'll discuss the implications for arguments in favour of longtermism in light of moral uncertainty. 

15:00 – Dean Spears, “Is humanity 4/5 over?: The importance of depopulation for global priorities research”

Longtermists should spark a dialogue with population science and population economics. Many longtermists would agree that, to eventually achieve a flourishing far future, it is valuable that over the coming few centuries a complex global economy endures and the number of people does not become small enough to be vulnerable to extinction from a threat that a larger population could sustain. But we have seen that fertility rates that are normal in much of the world today would cause population decline that is faster and to lower levels than is commonly understood, threatening the long-term future. Is that path ok? Under many plausible accounts of population ethics—including those shared by many longtermists—this would be a disaster for wellbeing. Given this, and if policy responses as usual won’t work, what might? In this talk we sketch two possibilities: A larger public investment than any past experience would permit us to evaluate empirically (so far), and a social movement focused on what people value. Either way, two next steps are needed: Much more research about depopulation and fertility, and efforts to draw political attention to the Spike and our response to it. Researchers and activists rose to the challenges of climate change long before either the technological tools or the political will were available to change humanity’s course; the Spike demands, again, that we get started now on an uncertain challenge decades in humanity’s future.

15:00 – Maximilian Kasy, “The political economy of AI: Towards democratic control of the means of prediction”

This chapter discusses the regulation of artificial intelligence (AI) from the vantage point of political economy, based on the following premises: (i) AI systems maximize a single, measurable objective. (ii) In society, different individuals have different objectives. AI systems generate winners and losers. (iii) Society-level assessments of AI require trading off individual gains and losses. (iv) AI requires democratic control of algorithms, data, and computational infrastructure, to align algorithm objectives with social welfare. The chapter addresses several debates regarding the ethics and social impact of AI, including (i) fairness, discrimination, and inequality, (ii) privacy, data property rights, and data governance, (iii) value alignment and the impending robot apocalypse, (iv) explainability and accountability for automated decision-making, and (v) automation and the impact of AI on the labor market and on wage inequality.

15:30 – Q&A for symposium "The Spike: Assessing low fertility and depopulation as a cause for global priorities research and longtermism" 

Question and answer session with all speakers of the symposium "The Spike: Assessing low fertility and depopulation as a cause for global priorities research and longtermism" 

15:40 – Harvey Lederman, “Incompleteness, uncertainty, and negative dominance”

I develop a new problem for choices under uncertainty with incomplete preferences. 

15:40 – Wim Naudé, “The Future Economics of Artificial Intelligence: Mythical Agents, a Singleton and the Dark Forest”

This paper contributes to the economics of AI by exploring three topics neglected by economists: (i) the notion of a Singularity (and Singleton); (ii) the existential risks that AI may pose to humanity, including that from an extraterrestrial AI in a Dark Forest universe; and (iii) the relevance of economics' Mythical Agent (homo economicus) for the design of value-aligned AI systems. Three implications for how we govern AI and insure against potential existential risks follow. These are (i) accelerating the development of AI as a precautionary step; (ii) maintaining economic growth until we attain the wealth and technological levels to create AGI and expand into the galaxy; and (iii) putting more research and practical effort into solving the Fermi Paradox, including the search for ETI. Several areas where economists can contribute to these three implications are identified.

15:50 – Jacob Barrett, “In Defense of Moderation”

Fanatical decision theories allow tiny probabilities of enormous value to swamp our decision-making. Moderate decision theories do not. In this paper, I defend moderation. Specifically, I argue that we should discount tiny probabilities of enormous value, where the boundary between the probabilities we should and should not discount is vague. 

17:00 – Animal welfare panel 

In this panel, you will hear from experts in philosophy, economics, and psychology who specialize in animal welfare research. They will share their perspectives on why animal welfare research is important and neglected. You will also learn about perspectives from these fields on the most pressing questions and promising avenues for future research on animal welfare. 

Tuesday 20 June, 2023

09:40 – Maya Eden, “The Normative Content of Other-Regarding Preferences” 

People care about each other. They care about their families and friends, and also about strangers. In this paper, we ask whether these feelings have any normative content. We show that, for sufficiently large populations, people's aversion to income inequality among strangers places tight bounds on the plausible amount of inequality aversion in a Paretian social welfare function. In contrast, people's degree of egoism and their altruistic feelings towards their families do not. Our results also suggest a new rationale for paternalism: when people are paternalistic with respect to the choices of others, the social welfare function must be as well. This is joint work with Paolo Piacquadio. 

09:40 – Andreas Mogensen, “Welfare and Felt Duration”

What do we mean when we speak of how long a pleasant or unpleasant sensation lasts, insofar as its duration determines how good or bad the experience is overall? Given that we seem able to distinguish between subjective and objective duration and that how well or badly someone’s life goes is naturally thought of as something to be assessed from her own perspective, it seems intuitive that it is subjective duration that determines how good or bad an experience is from the perspective of an individual's welfare. However, I argue that we know of no way to make sense of what subjective duration consists in on which it is plausible that subjective duration modulates welfare. Moreover, some plausible theories of what subjective duration consists in strongly suggest that subjective duration is irrelevant in itself. 

09:40 – Lucius Caviola, “Toward a psychology of technology-driven global catastrophic risk”

Human psychology is ill-equipped to responsibly handle extremely powerful emerging technologies with the potential for misuse and destruction. And it is human decision-makers who will determine whether technology-driven global catastrophic risks materialize, be they entrepreneurs, policy makers, scientists, activists, philanthropists, or concerned citizens. The human sense of caution is calibrated for risks that are relatively frequent, predictable, small-scale, and personal. Now we are confronted with risks that are unprecedented, highly uncertain, extremely large-scale, and collective. To understand how risks with these features might occur and how to prevent them, we require new insights about human psychology.

10:00 – Matthew Coleman, “Lay beliefs about the likelihood and prioritization of human extinction”

Many moral philosophers view human extinction as the worst possible outcome for our species. Why doesn’t modern society prioritize reducing existential risks more? In this talk, I will present a range of findings about people's empirical and normative beliefs regarding human extinction. For example, despite believing human extinction is 5% likely this century, people believe the odds would need to be 30% in order for preventing it to be the top societal priority. These findings were consistent between US and Chinese participants. Additionally, interventions that taught expected value and increased scope sensitivity were largely unsuccessful in changing prioritization beliefs. Taken together, this work suggests that people’s beliefs about human extinction are largely driven by intuitions that are difficult to change.

10:00 – Valentina Bosetti, “Forward‑Looking Belief Elicitation Enhances Intergenerational Beneficence”

One of the challenges in managing the Earth’s common pool resources, such as a livable climate or the supply of safe drinking water, is to motivate successive generations to make the costly effort not to deplete them. In the context of sequential contributions, intergenerational reciprocity dynamically amplifies low past efforts by decreasing successors’ rates of contribution. Unfortunately, the behavioral literature provides few interventions to motivate intergenerational beneficence. We identify a simple intervention that motivates decision makers who receive a low endowment. In a large online experiment with 1378 subjects, we show that asking decision makers to forecast future generations’ actions considerably increases their rate of contribution (from 46% to over 60%). By shifting decision makers’ attention from the immediate past to the future, the intervention is most effective in enhancing intergenerational beneficence of subjects who did not receive a contribution from their predecessors, effectively neutralizing negative intergenerational reciprocity effects. We provide suggestive evidence that the attentional channel is the main channel at work.

10:20 – Chen Yiting, “Uncertainty Motivates Morality”

We propose an uncertainty-motivated morality hypothesis whereby individuals behave more morally under uncertainty than under certainty, as if moral behavior would yield a better outcome as the result of uncertainty. We test this hypothesis in a series of experiments and observe that individuals are more honest in uncertain situations than in degenerate, deterministic situations. We further show that this pattern is best explained by our hypothesis and is robust and generalizable. These results are incompatible with standard models and consistent with quasi-magical thinking and related notions. Our study contributes to the literature on decision-making under uncertainty and with moral considerations.

10:20 – Carter Allen, Johanna Salu, “Empirical evidence for the unilateralist’s curse and low risk neglect”

In this talk, we will introduce two psychological biases relevant to global catastrophic risk. First, we will present an economic game for modeling how people behave in group decisions where any one member can impose some outcome on everyone (unilateralist’s curse situations; e.g., any researcher can leak a dangerous finding). Using this paradigm, we show that people overlook a crucial reason to avoid unilaterally imposing outcomes: because a decision to impose an outcome only matters when nobody else imposes it, such a decision is more likely to matter when it is bad than when it is good. Second, we will consider how people perceive the value of reducing the chance of high-probability risks (e.g., from 91% to 90%) vs. low-probability risks (e.g. 10% to 9%). Most catastrophic risks are low-probability events, yet their large potential for harm makes them important to address. We find that people are less likely to act on low-probability risks than high-probability risks, even when the expected value of both actions is equal. This highlights the need to understand the psychological drivers — fear vs. complacency, hope vs. futility — that govern subjective perception of risk reduction at different likelihoods. 
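
The asymmetry behind the unilateralist's curse can be illustrated with a toy Monte Carlo simulation (a sketch of mine, not the experimental paradigm used in the talk): several agents each get a noisy estimate of the value of imposing an outcome, and any one of them can impose it. A given agent's choice is only pivotal when no one else would impose, which happens far more often when the true value is negative.

    # Toy simulation of the unilateralist's curse (illustrative assumptions:
    # standard-normal true values, unit-variance estimation noise).
    import random

    def simulate(n_agents=5, noise=1.0, trials=100_000):
        pivotal_bad = pivotal_good = 0
        for _ in range(trials):
            true_value = random.gauss(0, 1)   # value of imposing the outcome
            estimates = [true_value + random.gauss(0, noise)
                         for _ in range(n_agents)]
            # Agent 0 is pivotal only if they would impose and nobody else would.
            if estimates[0] > 0 and all(e <= 0 for e in estimates[1:]):
                if true_value < 0:
                    pivotal_bad += 1
                else:
                    pivotal_good += 1
        return pivotal_bad, pivotal_good

    bad, good = simulate()
    print(f"pivotal and harmful: {bad}, pivotal and beneficial: {good}")

Under these assumptions, pivotal impositions of a harmful outcome greatly outnumber pivotal impositions of a beneficial one, which is the kind of overlooked reason against unilateral action that the talk describes.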

10:20 – Ina Jäntgen, “Causal persistence and the study of long-run effects”

Moral philosophers have suggested that benefitting people in the very long-run future may be of great moral importance. This view has major implications for policymaking and philanthropic decisions. Could such long-term-oriented decisions be based on scientific evidence? In particular, to benefit anyone in the long-run future, interventions (such as designing new institutions) should have particularly persistent effects – they should causally influence the long-run future. But how, if at all, could scientists provide evidence for the persistence of causal effects? In this talk, I define three ways in which causal effects can be persistent, argue that common scientific methods used to study long-run effects are ill-suited to provide evidence for the persistence of these effects, and provide guidance for improving the study of causal persistence.

10:40 – Joshua Lewis, “Cluelessness and False Certainty When Certain Consequences Are Salient”

Some actions have innumerable consequences that are difficult to predict, aggregate, or compare. How do people navigate this complexity? People judge options primarily based on their most salient consequences. Predictable consequences tend to be more salient than unpredictable consequences. Together, these two factors have two implications. First, false certainty. People perceive the value of possible actions as more predictable than it really is; they have false certainty in their evaluation of possible actions due to focusing on their most predictable consequences. Second, measurability bias (a.k.a. McNamara Fallacy). People prefer actions whose measurable consequences are favorable, controlling for the actions' expected value and aggregate unpredictability. In terms of global prioritization, policymakers may overvalue policy solutions whose immediate effects are positive and measurable relative to more robustly positive policy solutions with no measurable effects.

11:40 – Hilary Greaves, “The significance of deontic constraints for bystanders” 

Non-consequentialist moral philosophers usually recognize deontic constraints on pursuit of the good. Consider a case in which agent P performs act X, which is wrong in that it violates deontic constraint C. Suppose that agent B (a “bystander”) could intervene to prevent P from performing X. The fact that P has strong moral reason not to do X leaves it open whether or not B has moral reason (strong or otherwise) to prevent P from doing X. My talk will explore this issue. One application of particular interest concerns the moral case (or lack of it) for fighting injustice per se, over and above the moral case for preventing the harms that are associated with said injustice. My conclusion might be that that case is slim to non-existent.  

11:40 – Heather Browning, “Are there multiple relevant welfare thresholds?”

Many decision-making contexts, such as population ethics, social policy, and distributive procedures, make reference to different thresholds of welfare (e.g. a life worth living or a good life). Here, I will look at what I take to be the best candidates for these welfare thresholds, as well as the theoretical and empirical considerations that justify their use, and the contexts in which they are most likely to be relevant. 

11:40 – Leora Sung, “Should I Give or Save?”

We are typically near-future biased, being more concerned with our near future than our distant future. This near-future bias can be directed at others too: we can be more concerned with their near future than with their distant future. In this paper, I argue that, because we discount the future in this way, beyond a certain point in time, we morally ought to be more concerned with the present well-being of others than with the well-being of our distant future selves. It follows that we morally ought to sacrifice our distant-future well-being in order to relieve the present suffering of others. I argue that this observation is particularly relevant for the ethics of charitable giving, as the decision to give to charity usually means a reduction in our distant-future well-being rather than our immediate well-being.

12:20 – Alan Hájek, “Consequentialism, Cluelessness, Clumsiness, and Counterfactuals”

According to objective consequentialism, a morally right action is one that has the best consequences. I will argue that on one understanding this makes no sense, and on another understanding, it has a startling metaphysical presupposition concerning counterfactuals. Objective consequentialism has faced various objections, including the problem of “cluelessness”: we have no idea what most of the consequences of our actions will be. I think that on these understandings, objective consequentialism has a far worse problem: its very foundations are highly dubious. Even granting these foundations, a worse problem than cluelessness remains, which I call “clumsiness”. Moreover, I think that these problems quickly generalise to a number of other moral theories. But the point is most easily made for objective consequentialism, so I will focus largely on it. I will consider three ways that objective consequentialism might be improved:  1) Appeal instead to short-term consequences of actions;  2) Understand consequences with objective probabilities;  3) Understand consequences with subjective/evidential probabilities. 

12:20 – Falk Lieder, “Extending cause prioritization to research in the behavioral sciences”

Answering crucial questions of the psychological and behavioral sciences can improve our ability to positively influence decisions critical for existential risk and the well-being of future generations. However, identifying crucial research topics is very challenging. Moreover, whether the cost-effectiveness of psychological research is higher or lower than that of established causes is still to be determined. We are developing a general data-driven method for predicting a research project's potential impact and cost-effectiveness with probabilistic causal models to address these questions. This method makes it possible to measure the value of research in the units commonly used to quantify the effectiveness of interventions for global health and development, such as QALYs and WELLBYs. I will summarize a series of proof-of-concept analyses illustrating that our method can not only be used to evaluate the cost-effectiveness of completed research and development projects but also to forecast the cost-effectiveness of i) deploying new interventions, ii) evaluating interventions, and iii) conducting research that might lead to new interventions. Our preliminary results suggest that psychological research on fostering altruistic contributions can be more than 100 times as cost-effective as investing directly in global (mental) health.

12:20 – Tomi Francis, “The Scale Argument for Longtermism”

"Future people count. There could be a lot of them. We can make their lives better." How might this argument be made precise? I present one attempt, which I call the "Scale Argument for Longtermism". I first consider the axiological case. I show that, provided there are sufficiently many future people who we can affect with some small but non-negligible probability, Longtermism follows if future people matter in only the very weak sense that it is better to prevent a small probability of many of them suffering than it is to prevent a small probability of one present person from losing a trivial amount of wellbeing. I then consider the deontic case. This is more difficult, as we cannot assume transitivity. But two arguments can still be provided. One appeals to a kind of replication consistency: if we should prevent n future persons rather than one present person from suffering a harm, then we should prevent kn future persons rather than k present persons from suffering a harm. The other appeals to a principle of agglomeration: if we should perform each of a sequence of actions, regardless of whether we have performed other actions in the sequence, then we should perform a single action with the same results as the entire sequence. 

15:00 – Loren Fryxell, “The average location of impact: a new index and how to use it”

The benefits of a policy change are often quite heterogeneous across several dimensions of interest. Two important dimensions are 1) the income level of the beneficiaries and 2) the time at which the benefits occur. Summary statistics allow one to capture an important feature of the underlying distribution with a single number. We propose a summary statistic that measures the average location (e.g., income level or time) of a policy's impact, weighted by the amount of benefit occurring at each location. Importantly, the usual arithmetic mean cannot be used since benefits may be negative. We generalize the notion of an arithmetic mean to allow for both positive and negative weights. We call this the ‘trimmed mean’. We show that for absolutely continuous distributions (those with a density), the trimmed mean exists and is unique. We discuss applications to inequality (using income level) and longtermism (using time). This is joint work with Charlotte Siegmann.
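
For orientation, the standard statistic that breaks down here is the ordinary benefit-weighted average location; the sketch below states it and notes why signed benefits are a problem (the talk's actual generalization, the ‘trimmed mean’, is not reproduced here).

    % Ordinary benefit-weighted average location: x_i is the location
    % (income level or time) at which benefit b_i occurs.
    \[
      \bar{x} \;=\; \frac{\sum_i b_i \, x_i}{\sum_i b_i}, \qquad b_i \ge 0.
    \]
    % With benefits of mixed sign, \sum_i b_i can be zero or negative, so this
    % mean can be undefined or fall far outside the range of the x_i, which is
    % why a generalization to signed weights is needed.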

15:00 – Rachell Powell, “Taking the Long View: Paleobiological Perspectives on Longtermism”

The biggest challenge to longtermism is not so much the normative assumptions it makes but the predictive power it requires. Unlike standard intergenerational ethics, longtermism endeavors to integrate deep temporal scales—or what I will refer to as 'macroevolutionary futures'—into our ethical decision-making and policy. The ability to forecast macroevolutionary futures is critical, in particular, for the axiological study of human extinction, a central preoccupation of many longtermist philosophies. In order to make meaningful predictions about the future trajectory of life on Earth, we must look to the patterns and processes that govern the paleo past. Unfortunately, nearly everything we know about natural history suggests that morally relevant predictions at macroevolutionary scales are simply not feasible. In this talk, I will explain why decades of work in evolutionary theory leads me to this pessimistic conclusion. At the same time, however, this analysis will vindicate longtermism in one fundamental respect, for it suggests that the extinction of human beings would indeed be a moral travesty for the cosmos. 

15:00 – Jeff Sebo, “Insects, AI systems, and the future of legal personhood”

In this talk I explore the idea of extending legal personhood to insects, AI systems, and other beings at the margins of our moral and legal circles. I start by developing a normative premise: If a being has a non-negligible chance of being sentient or otherwise significant, then they merit moral and legal consideration for their own sake. I then develop an empirical premise: Insects, AI systems, and many other beings have a non-negligible chance of being sentient or otherwise significant. Finally, I ask what follows for our moral and legal frameworks, and I consider whether it makes more sense to extend our current conception of legal personhood to these new populations or to develop new conceptions of legal standing for these new populations. (This talk is part of a symposium hosted by the NYU Mind, Ethics, and Policy Program.) 

15:20 – Julian Jamison, “Can We Identify Mistakes?”

According to the neoclassical revealed preference assumption, agents don't make [ex ante] mistakes, other than perhaps via 'trembling hands' (which are not identifiable to an outside observer). Once we venture beyond that comforting but unrealistic approach, we can potentially acknowledge mistakes and helpfully try to minimize them; however, we also run into the complication of possibly conflicting welfare assessments. This necessitates a more meta-level conceptual framework building on decision theory, choice data, surveys (e.g. regret), neuro-physiological measures, evolutionary psychology, and/or moral philosophy. Warning: I do not have any results yet, just some ideas and questions.

15:20 – Matti Wilks, “The psychology of moral circle expansion”

The moral circle represents the boundaries of our moral consideration. We place entities worthy of moral concern inside the circle, and entities not worthy outside. In recent years there has been a growing body of psychological research aiming to map and understand our moral circles. In this talk I will review key findings, highlight limitations, and propose new questions for the future study of moral circle expansion. 

15:20 – Robert Long, “A framework for estimating the probability of AI sentience”

In this talk I develop a framework for estimating the probability of AI sentience. Given the problem of other minds, we might or might not ever be able to know for sure whether AI systems can be sentient. However, we can clarify our thinking about this topic as follows: First, we can ask how likely particular capacities are to be necessary or sufficient for sentience, and second, we can ask how likely current or near future AI systems are to possess these capacities, given the evidence. I suggest that when we clarify our thinking about this topic in this way, we discover that we need to make surprisingly conservative estimates to generate the conclusion that current or near future AI systems have only a negligible chance of being sentient. (This talk is part of a symposium hosted by the NYU Mind, Ethics, and Policy Program.) 
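
One way to make the two-step structure concrete (an illustration of mine, not necessarily the speaker's formalism) is as a simple lower bound: if some set of capacities C would suffice for sentience, and a system possesses C, then the system is sentient; so, treating the two judgments as independent,

    \[
      \Pr(\text{sentient}) \;\ge\;
      \Pr(C \text{ suffices for sentience}) \times \Pr(\text{system possesses } C).
    \]

On this reading, even fairly modest probabilities assigned to each factor can yield a non-negligible probability of sentience, which fits the abstract's point that only quite conservative estimates support the conclusion that current or near-future systems have a negligible chance of being sentient.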

15:40 – Itai Sher, “Generalized welfare weights”

Generalized social marginal welfare weights have been proposed as a method for incorporating broader ethical values, such as desert, equal opportunity, and libertarian values, into the analysis of optimal tax policies. A similar approach could be applied to other kinds of policies. The approach evaluates local tax changes without a global social objective, aggregating gains and losses to different people on the basis of broad ethical criteria. I show that local tax policy comparisons implicitly entail global comparisons. Moreover, whenever welfare weights do not have a utilitarian structure, these implied global comparisons are inconsistent. I argue that broader ethical values cannot in general be represented simply by modifying the weights placed on benefits to different people and that, if one is to take plural values seriously in economic analysis, a more thoroughgoing modification of the utilitarian approach is required. I discuss some possible ways forward.

15:40 – Zach Freitas-Groff, “Persistence of Policy Choices: The Case of Close Referendums”

Using a novel dataset, I produce the first systematic empirical evidence on how long policy choices matter and why. Looking at the statutory history of close U.S. state-level referendums since 1900, I find that narrowly passed laws are 40 percentage points more likely to be operative a century later than those that narrowly fail. A game-theoretic model, data on referendum timing, and heterogeneity analysis suggest that political attention drives persistence. This contrasts with mechanisms discussed in existing literature, which largely focus on endogenous responses to existing policy. At an econometric level, the results lend support to long-term policy event-study designs. At a practical level, the results indicate that common policy choices have very long-lasting effects.

15:40 – Becca Franks, “Be nice to your AI system: rationales and risks for considering AI welfare”

Despite many uncertainties surrounding AI systems, at least two things are clear: 1) Individual humans can now interact with synthetic partners that present sentient-like characteristics, and 2) The ability of these synthetic partners to appear sentient is increasing. What does it mean to take the welfare of our synthetic partners into account? What risks are involved if we overestimate their welfare needs? What risks are involved if we underestimate them? Taking lessons from animal welfare science, this talk will explore general principles for developing AI welfare standards, including embracing multidisciplinarity and pluralism, confronting anthropocentrism, and considering the possibility of mutual flourishing. (This talk is part of a symposium hosted by the NYU Mind, Ethics, and Policy Program.)

16:00 – Bob Fischer, “Are interspecies and intersubstrate welfare comparisons tractable?”

Humans regularly need to make decisions that involve interspecies welfare comparisons, and in the future, we might also need to make decisions that involve intersubstrate welfare comparisons. If we want to make these decisions in a principled way, we need a method for comparing welfare impacts despite significant differences between the individuals whose welfare is at stake. In this talk, I present a method for making interspecies welfare comparisons by estimating differences in animals’ welfare ranges, understood as the difference between the best and worst welfare states that a given animal can realize. I then consider the prospects for extending this method to intersubstrate cases, flagging both key challenges and some reasons for optimism. (This talk is part of a symposium hosted by the NYU Mind, Ethics, and Policy Program.)

17:00 – A Fireside Chat with Holden Karnofsky: “Pressing questions that academics can help to answer”

In this fireside chat, Adam Bales (Global Priorities Institute) and Charlotte Siegmann (MIT) will chat with Holden Karnofsky (co-CEO of Open Philanthropy) about what he believes are some of the most pressing questions that academics (especially in economics, philosophy, or psychology) can help to answer.