Tobias Alexius: "Self-Concern and Gradual Change"
- Date
- 28 August 2025, 10:15–12:00
- Venue
- Engelska parken, Eng/2-1022
- Type
- Seminar
- Organiser
- Department of Philosophy
- Contact
- Matti Eklund
The Higher Seminar in Theoretical Philosophy
Abstract
In this talk I discuss some consequences of attempting to maximize personal utility/quality of life by gradually changing oneself (through, e.g., futuristic transhumanist technologies). I defend the conservative view that one can only maximize personal utility/quality of life by not undergoing (certain) such changes.
Imagine a person A at a time t1 who undergoes some change, resulting in a person B at a later time t2. Whether A and B are the same numerical person will then depend on the kind of change involved. Setting aside the details about which changes do or don’t preserve personal identity (e.g. psychological vs. somatic changes), we can say the following: A is the same person as B iff the changes that have occurred between them fall within A’s Safe Zone (the set of possible changes which A can endure without going out of existence), and A ≠ B iff the changes fall outside that zone.
Now suppose A is offered the chance to cybernetically augment herself in various ways (e.g., to upgrade her body/mind using various transhumanist technologies), and that each such augmentation adds one point to her total quality of life score. If A wants to maximize her own quality of life score, what should she do?
One obvious problem here is that, by cybernetically augmenting herself, A risks moving outside her Safe Zone, collapsing her quality of life score to zero. The challenge, then, seems to be to pursue just enough augmentations to maximally increase one’s quality of life score while staying within one’s Safe Zone.
However, even supposing we knew what that amount was (i.e. had solved the relevant questions in the philosophy of personal identity), something close to a paradox lurks in the vicinity. To illustrate, suppose going through three augmentations keeps A within her Safe Zone, but going through with four takes her out of it. On the basis of this, A decides to go through with three augmentations, resulting in the cyborg B. In this way, A seems to have maximized her own quality of life without risking existential termination.
But now suppose that this resulting cyborg B is given the same choice to upgrade. Applying the same knowledge of persistence over time, B determines that she will survive three but not four augmentations. Thus, B willingly upgrades to the cyborg C.
Of course, the change from A to C takes A out of her Safe Zone, meaning that A no longer exists. Thus, by trying to maximize her quality of life score through a process of gradual change which takes her Safe Zone into consideration, A has nevertheless ended up setting that score to zero. So, what should A have done, assuming she can foresee these consequences herself?
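The structure of the case can be summarized schematically (my notation, not the author's: writing $\approx$ for the relation of personal identity over time and counting augmentations cumulatively from A):

```latex
\begin{align*}
A &\approx B &&\text{(3 augmentations: within $A$'s Safe Zone)}\\
B &\approx C &&\text{(3 further augmentations: within $B$'s Safe Zone)}\\
A &\not\approx C &&\text{(6 cumulative augmentations: outside $A$'s Safe Zone)}
\end{align*}
```

On the Safe Zone picture, each single step preserves identity while the composition of the steps does not, so $\approx$ fails to be transitive.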
Here, I will defend the view that A can only maximize her quality of life score by refusing to go through with any transformations which, if accumulated, could take her out of her Safe Zone. On the basis of this, I will draw out some implications for the ethics of implementing two broad kinds of transhumanist technologies (conservative and revisionary technologies). I will also discuss some of the metaphysical assumptions underlying the above case, such as the non-transitivity of personal identity, and discuss how similar problems might crop up for alternative views on the metaphysics of persons.