The Guardian Liberty Voice conducted an online interview with prominent transhumanist and author Gennady Stolyarov. Stolyarov recently wrapped up a campaign to raise money for distribution of his children’s book Death Is Wrong to children across the U.S. After learning of his campaign, GLV was inspired to delve deeper into the issues of transhumanism and the Singularity from the perspective of those who have concerns about how future technologies will affect our lives. We also asked Stolyarov about Google Glass and chief engineer at Google, Ray Kurzweil. Stolyarov gave tremendous insight into these issues and more.
GLV: Google chief engineer Ray Kurzweil is a transhumanist. He is working toward the Singularity and the day when computers become smarter than people. Many people have grave concerns about the safety of this. Do you know what steps are being taken to ensure that when AI supersedes human intelligence, we will be able to control it, and/or that it will definitely be benign?
Stolyarov: I would question the ethics of attempting to “control” an intelligence that is truly sentient, conscious, and distinct from human intelligence. Would this not be akin to enslaving such an intelligence? As regards the intelligence being benign, there is no way today to ensure that any human intelligence will be benign either, but the solution to this is not to limit human intelligence. Rather, the solution is to provide external disincentives to harmful actions. Any genuinely autonomous intelligence should be recognized to have the same rights as humans (e.g., rights to life, liberty, pursuit of happiness, etc.) while also being subject to the same prohibitions on initiating force against any other rights-bearing entity. Furthermore, I think it is not correct to assume that intelligent AI would have any reasons to be hostile toward humans. For a more detailed elaboration, I would recommend the article “The Hawking Fallacy” by Singularity Utopia: http://www.singularityweblog.com/the-hawking-fallacy/. Here is a relevant excerpt: “Artificial intelligence capable of outsmarting humans will not need to dominate humans. Vastly easier possibilities for resource mining exist in the asteroid belt and beyond. The universe is big. The universe is a tremendously rich place.” The fact that humans evolved from fiercely competitive animals that often viewed the world in a zero-sum manner does not mean that non-human intelligence will possess inclinations toward zero-sum thinking. Greater intelligence tends to correspond to greater morality (since rational thinking can avoid many sub-optimal and harmful choices), so intelligence itself, in any entity, can go a long way toward preventing violence and destruction.
GLV: What steps are being taken to ensure that people’s privacy will be protected if we merge with machines?
Stolyarov: Many people have already merged with machines in the form of prosthetic limbs, artificial organs, hearing aids, and even more ubiquitous external devices that help augment human memory or protect us from the elements. Almost none of these devices pose privacy concerns, any more than just being out in public would pose such concerns. I think virtually every technologist recognizes, for example, that having an artificial heart that is connected to an open network and whose configuration could potentially be directly altered by another user, would probably not be a good idea. The biggest protection of privacy in this area is common sense in how the technologies would be designed and deployed. Merger with machines is already a reality today, and the machines are genuinely part of us. As long as a system of private property remains, and the machines that augment an individual are considered that individual’s property and remain physically under that individual’s control, I think privacy is not diminished in any way. Consumer demand is also important to consider. Very few consumers would agree to purchase any kind of machine augmentation if they saw it to have severe risks to their privacy.
GLV: What steps are being taken to ensure that this new technology that will exist in the body will definitely not be vulnerable to hackers?
Stolyarov: While 100% guarantees do not exist in most areas of life, the design of any given technology can reduce its potential to be hacked. I would expect that any technology that exists in the body and needs to electronically communicate with other devices for any reason would do so using some sort of end-to-end encryption of the signal to prevent its interception by external parties. Also, it is important to keep in mind that such devices, if they communicate, would do so over channels that are distinct from those available to the general public. I do not think any inventor would design an organ that communicated with another device using the Internet that you and I use to communicate via e-mail. They would have their own dedicated, closed network on which they would send encrypted signals.
GLV: What is to become of people who want to opt out of merging with machines? Or people who want to opt out of any further technology? How can the leaders of transhumanism promise that people who want to remain human will not be discriminated against or be viewed as second class citizens?
Stolyarov: Transhumanists do not oppose those who wish to personally opt out of any technologies – including the Amish who reject many technologies that are less than 100 years old. While transhumanists might seek to voluntarily persuade others to adopt life-enhancing technologies, I am not aware of any transhumanist who seriously wishes to impose by force technologies that people would not wish to use. Politically, most transhumanists are either libertarians or left-progressives; both persuasions value personal choice and lifestyle freedom quite highly. In a transhumanist world, people will continue to have the ability to live as they please, though many of them would be drawn to the new technologies because of the improvements to quality of life, productivity, and available time that these technologies would bring. Simply protecting individual rights and free speech while letting consumer preferences motivate decisions by producers would produce an outcome that respects everybody.
GLV: Similarly, what if someone is unable to afford certain technologies? How can they be assured they will still have equitable access to everything they desire about the way their lives are currently?
Stolyarov: Technologies tend to follow a rapid evolution from being initially expensive and unreliable to being cheap and ubiquitous. Computers, cell phones, and the Internet followed this trajectory, for instance. There has not been a single technology in recent history that has remained an exclusive preserve of the wealthy, even though many technologies started out that way. Ray Kurzweil writes in his FAQ regarding his book The Singularity is Near (http://www.singularity.com/qanda.html), “Technologies start out affordable only by the wealthy, but at this stage, they actually don’t work very well. At the next stage, they’re merely expensive, and work a bit better. Then they work quite well and are inexpensive. Ultimately, they’re almost free. Cell phones are now at the inexpensive stage. There are countries in Asia where most people were pushing a plow fifteen years ago, yet now have thriving information economies and most people have a cell phone. This progression from early adoption of unaffordable technologies that don’t work well to late adoption of refined technologies that are very inexpensive is currently a decade-long process. But that too will accelerate. Ten years from now, this will be a five year progression, and twenty years from now it will be only a two- to three-year lag.”
GLV: Ray Kurzweil says he wants everyone to exist in virtual environments in the future. What if someone doesn’t want to exist in a virtual environment? Do we have assurances that our real environments won’t be taken away to somehow make room for virtual ones?
Stolyarov: I think it is impractical to wholly exist in a virtual environment, because any virtual environment has a physical underpinning, and it would be imprudent to completely distance oneself from that underpinning (as some sort of body – biological or artificial – would still have to exist in the physical world). Virtual environments would be places one could visit and stay for a while, but not too long, and not without breaks. Data storage is becoming exponentially cheaper and more compact by the year. By the time Ray Kurzweil’s vision could be realized, vast virtual environments would be hosted in less than the area of a room. No significant amounts of physical space would be compromised in any way. Indeed, if more people spend more time in virtual environments, then physical environments would become less crowded and more convenient to navigate for those who choose to primarily spend time in them.
GLV: What about population control? Resources?
Stolyarov: There is actually no shortage of resources even today to give everyone a decent standard of living; the problems lie in flawed political and economic systems that prevent resources from being effectively utilized and from reaching everyone. Overpopulation is not and will not be a significant problem. Max More provides an excellent, thorough discussion of this in his 2005 essay, “Superlongevity Without Overpopulation” – https://www.fightaging.org/archives/2005/02/superlongevity-without-overpopulation-1.php. He also notes S. J. Olshansky’s finding that even “if we achieved immortality today, the growth rate of the population would be less than what we observed during the post World War II baby boom” – so humans have already been in a similar situation and have come out more prosperous than ever before. As regards resources more generally, Julian Simon made excellent arguments in his free online book The Ultimate Resource II (1998) – http://www.juliansimon.com/writings/Ultimate_Resource/ – that resources are not fixed; they are a function of human creativity and technological ability. Yesterday’s pollutants and waste products can be today’s useful resources, and we will learn how to harness even more materials in the coming decades in order to enable us to continue improving standards of living.
GLV: Kurzweil says he is working on technology to bring people back from the dead. What if people do not wish to be brought back from the dead? What if they would not have given permission to have an avatar created of themselves? Some people think that Kurzweil seems to have no concept of the word “ethics.” What are your thoughts on this?
Stolyarov: People who are “brought back” from the dead or avatars of people who have died would not have the continuity of the experience of the dead person. They would not have the same “I-ness” as that of the person who died (though they may have a new “I-ness” and therefore be autonomous individuals in their own right). Therefore, the process of creating a person that resembles somebody who has died can best be thought of as creating a new individual who has similar memories, personality traits, etc. This person may have his/her own ideas about whether he/she wants to live, irrespective of any wishes of the person who died previously, who would not be the same person. For more details on this, I recommend my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self” – http://rationalargumentator.com/issue256/Iliveforever.html. In particular, I recommend the section titled “Reanimation After Full Death”.
GLV: Do you know anything about the transhumanists who have rented space in floating facilities at sea so they can work on experiments outside the jurisdiction of any regulatory body?
Stolyarov: I am not aware of any experiments by transhumanists on floating facilities at present. To my knowledge, the implementation of seasteading (http://www.seasteading.org/) – the creation of such modular floating facilities – is still years away. However, I am entirely in support of the idea that such experiments ought to take place among fully willing participants. For instance, if a terminally ill patient would like to try saving his or her life through an experimental therapy, I think it is immoral for any government authority to stand in the way of what could be that person’s last chance at life.
GLV: Many people find Google Glass to be repulsive and would be deeply offended at anyone who was wearing it while speaking to them. Similarly, many feel deeply offended when people look at their phones while speaking with them. Many find this to be rude and an abomination. How will those people be assured that they will still be able to have organic, real, in-depth human interaction with others without machine intrusion should the Singularity come to pass?
Stolyarov: Google Glass does not need to be turned on or actively used when worn. As with any technology, norms of behavior around it would develop to make sure that meaningful interaction is possible in a variety of contexts. The solution is never to ban or restrict the use of the technology, but rather to develop and disseminate an understanding of acceptable etiquette that most people could agree on. I remember a time in the early 2000s when cell-phone etiquette was still not well-developed, and many people would interrupt their face-to-face conversations to take unexpected calls. I have observed that this is largely not done anymore; most people keep their phones in a silent or vibrate-only mode and will often wait to respond to a non-emergency call until a face-to-face discussion has concluded. I expect that similar etiquette will develop around Google Glass. There may be a few years of growing pains while the technology is new, but this is a very small price to pay for progress.
Interview by: Rebecca Savastio