Tag Archives: ethics

How to make sure your ‘AI for good’ project actually does good by https://ift.tt/37XGUoQ

“Artificial intelligence has been front and center in recent months. The global pandemic has pushed governments and private companies worldwide to propose AI solutions for everything from analyzing cough sounds to deploying disinfecting robots in hospitals.”

View The Original Post

Selected by Galigio

How flexible should constitutions be? A contrasting study between the US and India by https://ift.tt/2HRvV3p

“The United States and India, two of the world’s largest and oldest democracies, are both governed on the basis of written constitutions. One of the inspirations for the constitution of India, drafted between 1947 and 1950, was the US constitution.”

View The Original Post

Selected by Galigio

What are the ethical consequences of immortality technology? by Francesca Minerva and Adrian Rorheim


Detail from The Fountain of Youth (1546) by Lucas Cranach the Elder. Courtesy Wikipedia

Immortality has gone secular. Unhooked from the realm of gods and angels, it’s now the subject of serious investment – both intellectual and financial – by philosophers, scientists and the Silicon Valley set. Several hundred people have already chosen to be ‘cryopreserved’ in preference to simply dying, as they wait for science to catch up and give them a second shot at life. But if we treat death as a problem, what are the ethical implications of the highly speculative ‘solutions’ being mooted?

Of course, we don’t currently have the means of achieving human immortality, nor is it clear that we ever will. But two hypothetical options have so far attracted the most interest and attention: rejuvenation technology, and mind uploading.

Like a futuristic fountain of youth, rejuvenation promises to remove and reverse the damage of ageing at the cellular level. Gerontologists such as Aubrey de Grey argue that growing old is a disease that we can circumvent by having our cells replaced or repaired at regular intervals. Practically speaking, this might mean that every few years, you would visit a rejuvenation clinic. Doctors would not only remove infected, cancerous or otherwise unhealthy cells, but also induce healthy ones to regenerate more effectively and remove accumulated waste products. This deep makeover would ‘turn back the clock’ on your body, leaving you physiologically younger than your actual age. You would, however, remain just as vulnerable to death from acute trauma – that is, from injury and poisoning, whether accidental or not – as you were before.

Rejuvenation seems like a fairly low-risk solution, since it essentially extends and improves your body’s inherent ability to take care of itself. But if you truly wanted eternal life in a biological body, it would have to be an extremely secure life indeed. You’d need to avoid any risk of physical harm to have your one shot at eternity, making you among the most anxious people in history.

The other option would be mind uploading, in which your brain is digitally scanned and copied onto a computer. This method presupposes that consciousness is akin to software running on some kind of organic hard-disk – that what makes you you is the sum total of the information stored in the brain’s operations, and therefore it should be possible to migrate the self onto a different physical substrate or platform. This remains a highly controversial stance. However, let’s leave aside for now the question of where ‘you’ really reside, and play with the idea that it might be possible to replicate the brain in digital form one day.

Unlike rejuvenation, mind uploading could actually offer something tantalisingly close to true immortality. Just as we currently back up files on external drives and cloud storage, your uploaded mind could be copied innumerable times and backed up in secure locations, making it extremely unlikely that any natural or man-made disaster could destroy all of your copies.

Despite this advantage, mind uploading presents some difficult ethical issues. Some philosophers, such as David Chalmers, think there is a possibility that your upload would appear functionally identical to your old self without having any conscious experience of the world. You’d be more of a zombie than a person, let alone you. Others, such as Daniel Dennett, have argued that this would not be a problem. Since you are reducible to the processes and content of your brain, a functionally identical copy of it – no matter the substrate on which it runs – could not possibly yield anything other than you.

What’s more, we cannot predict what the actual upload would feel like to the mind being transferred. Would you experience some sort of intermediate break after the transfer, or something else altogether? What if the whole process, including your very existence as a digital being, is so qualitatively different from biological existence as to make you utterly terrified or even catatonic? If so, what if you can’t communicate to outsiders or switch yourself off? In this case, your immortality would amount to more of a curse than a blessing. Death might not be so bad after all, but unfortunately it might no longer be an option.

Another problem arises with the prospect of copying your uploaded mind and running the copy simultaneously with the original. One popular position in philosophy is that the youness of you depends on remaining a singular person – meaning that a ‘fission’ of your identity would be equivalent to death. That is to say: if you were to branch into you₁ and you₂, then you’d cease to exist as you, leaving you dead to all intents and purposes. Some thinkers, such as the late Derek Parfit, have argued that while you might not survive fission, as long as each new version of you has an unbroken connection to the original, this is just as good as ordinary survival.

Which option is more ethically fraught? In our view, ‘mere’ rejuvenation would probably be a less problematic choice. Yes, vanquishing death for the entire human species would greatly exacerbate our existing problems of overpopulation and inequality – but the problems would at least be reasonably familiar. We can be pretty certain, for instance, that rejuvenation would widen the gap between the rich and poor, and would eventually force us to make decisive calls about resource use, whether to limit the rate of growth of the population, and so forth.

On the other hand, mind uploading would open up a plethora of completely new and unfamiliar ethical quandaries. Uploaded minds might constitute a radically new sphere of moral agency. For example, we often consider cognitive capacities to be relevant to an agent’s moral status (one reason that we attribute a higher moral status to humans than to mosquitoes). But it would be difficult to grasp the cognitive capacities of minds that can be enhanced by faster computers and communicate with each other at the speed of light, since this would make them incomparably smarter than the smartest biological human. As the economist Robin Hanson argued in The Age of Em (2016), we would therefore need to find fair ways of regulating the interactions between and within the old and new domains – that is, between humans and brain uploads, and between the uploads themselves. What’s more, the astonishingly rapid development of digital systems means that we might have very little time to decide how to implement even minimal regulations.

What about the personal, practical consequences of your choice of immortality? Assuming you somehow make it to a future in which rejuvenation and brain uploading are available, your decision seems to depend on how much risk – and what kinds of risks – you’re willing to assume. Rejuvenation seems like the most business-as-usual option, although it threatens to make you even more protective of your fragile physical body. Uploading would make it much more difficult for your mind to be destroyed, at least in practical terms, but it’s not clear whether you would survive in any meaningful sense if you were copied several times over. This is entirely uncharted territory with risks far worse than what you’d face with rejuvenation. Nevertheless, the prospect of being freed from our mortal shackles is undeniably alluring – and if it’s ever an option, one way or another, many people will probably conclude that it outweighs the dangers.

Francesca Minerva was a guest at a workshop on ‘Personal Identity and Public Policy’ at the Centre for the Study of Existential Risk in November 2016, where she gave a presentation on which this piece is based.

Francesca Minerva & Adrian Rorheim

This article was originally published at Aeon and has been republished under Creative Commons.

If I teleport from Mars, does the original me get destroyed? by Charlie Huenemann


Courtesy SpaceX/Wikimedia

I am stranded on Mars. The fuel tanks on my return vessel ruptured, and no rescue team can possibly reach me before I run out of food. (And, unlike Matt Damon, I have no potatoes.) Luckily, my ship features a teleporter. It is an advanced bit of gadgetry, to be sure, but the underlying idea is simplicity itself: the machine scans my body and produces an amazingly detailed blueprint, a clear picture of each cell and neuron. That blueprint file is then beamed back to Earth, where a ‘new me’ is constructed using raw materials available at the destination site. All I have to do is step in, close my eyes, and press the red button…

But there’s a complication: a toggle switch allows me to decide whether the ‘old me’ on Mars is preserved or destroyed after I teleport back home. It’s this decision that is causing me to hesitate.

On the one hand, it seems like what makes me me is the particular way in which all my components fit together. I don’t think there is such a thing as a soul, or some ghost that inhabits my machine. I’m just the result of the activity among my 100 billion neurons and their 100 trillion distinctive connections. And, what’s more, that activity is what it is, no matter what collection of neurons is doing it. If you went about replacing those neurons one by one, but kept all the connections and activity the same, I would still be me. So, replacing them altogether at once should not matter, so long as the distinctive patterns are maintained. This leads me to want to press the button and get back to my loved ones – and back to Earth’s abundant food, water and oxygen, which will allow me to continue repairing and replacing my cells in the slower, old-fashioned way.

So: if I put the toggle in the ‘destroy’ setting, I should survive the transfer just fine. What would be lost? Nothing that plays any role in making me me, in making my consciousness my own. I should step in, press the button – and then walk out of the receiver back on Earth.

On the other hand, what happens if I put the toggle in the ‘save’ setting? Then where would I be? Would I make the trip back to Earth, and then feel sorry for the poor sap back on Mars (the old me), who will be facing slow death by starvation? Or – horrors! – will I be that old me, feeling envy for the new me who is now on Earth, enjoying the company of friends and family?

Could I somehow be both? What would that be like? Would I be seeing the scene on Earth superimposed upon the Martian landscape? Would I be feeling both pangs of hunger and exquisite delight in eating my first home-cooked meal in years? How would I decide at the same time to both walk over the dunes of red sand and go to sleep in my own bed? Is this even conceivable?

A residual conservatism in my nature prompts me to think that I would stay the old me, and the new me – whoever he is – would be like a twin to me, indeed more similar to the old me than any natural twin could possibly be. He would feel all the things I would feel, have the same memories, and be so very glad that he’s not starving on Mars. But, for all that, he would not be me: I would not be thinking or experiencing the things he is, nor would he be aware of my own increasingly desperate experience. But if this line of thinking is correct, I am suddenly very reluctant to turn the toggle over to the ‘destroy’ setting. For then it would seem that I would simply be annihilated on Mars, and some new guy on Earth, some guy a lot like me, would falsely believe he had survived the trip.

But why ‘falsely’? The memories are just as much in his brain as mine, are they not? From his point of view, he experienced stepping into the teleporter, pressing the button, and walking out onto Earth. He’s not lying when he says that that’s what happened. Still: I – the one who steps into the teleporter and presses the button – would not subsequently have this new guy’s experience of walking out onto Earth. My next experience after pressing the button would be – well, it would be no experience at all, as I would be dead.

Perhaps I need to adopt a more objective point of view. Suppose others were observing all this. What would they see? They would see me step in, press the button, and then – depending on the toggle setting – they would see either two copies of me, one on Mars and one on Earth, or else just one copy of me on Earth and some smouldering remains on Mars. There is no real problem, from this outsider’s point of view. There is no test an observer could perform to determine whether I survived the trip to Earth – no personality test, no special ‘me-ometer’ readings, no careful analysis of discrepancies among the neurons. Everything proceeds as expected, no matter what the toggle setting is.

Maybe there is something to be learned from this. Perhaps what seems to me an extremely obvious truth – namely, that there should be some fact to the matter of what I experience once I step in and press the button – is really not a truth at all. Maybe the notion that I am an enduring self over time is some sort of stubborn illusion. By analogy, I once joined a poker club that had been in existence for more than 50 years, with a complete change in its membership over that time. Suppose someone were to ask whether it was the same club. ‘It is and it isn’t,’ would be the sensible reply. Yes: the group has met continuously each month over 50 years. But no: none of the original members are still in it. There is no single, objective answer to the poker-identity question, since there is no inner, substantive soul to the club that has both remained the same and changed over time.

The same goes, perhaps, for me. I think I have been the same thing, a person, over my life. But if there is no inner, substantive me, then there is no fact to the matter about what my experience will be when ‘I’ press the button. It is just as the observer says: first there was one, and then there were two (with the toggle set to ‘save’), each thinking himself to be the one. There is no fact about what ‘the one’ really experienced, because ‘the one’ wasn’t there to begin with. There was only a complex arrangement of members, analogous to my poker club, thinking of themselves as belonging to the same ‘one’ over time.

Small comfort that is. I went into this problem wondering whether I could survive – only to find out that I am not, and never was! And yet the decision still lies before me: do I – do we? – press the button?

Note: I make no claim to originality in this thought experiment. A very similar sort of question was raised in 1775 by the Scottish philosopher Thomas Reid, in a letter to Lord Kames referencing Joseph Priestley’s materialism: ‘whether when my brain has lost its original structure, and when some hundred years after the same materials are again fabricated so curiously as to become an intelligent being, whether, I say, that being will be me; or, if two or three such beings should be formed out of my brain, whether they will all be me’. I first encountered it, with the Martian setting, in the preface to the essay collection ‘The Mind’s I’ (1981), edited by Douglas Hofstadter and Daniel Dennett. The British philosopher Derek Parfit made much hay out of the idea in his book ‘Reasons and Persons’ (1984). And the podcaster C G P Grey provides an insightful illustration of the problem in his video ‘The Trouble with Transporters’ (2016).

Charlie Huenemann

This article was originally published at Aeon and has been republished under Creative Commons.