Just News – Truthstream Media – Obsolete — Full Documentary Official (2016)

 

OBSOLETE description:
The Future Doesn’t Need Us… Or So We’ve Been Told.
With the rise of technology and the real-time pressures of an online, global economy, humans will have to be very clever – and very careful – not to be left behind by the future.
From the perspective of those in charge, human labor is losing its value, and people are becoming a liability.
This documentary reveals the real motivation behind the secretive effort to reduce the population and bring resource use into strict, centralized control.
Could it be that the biggest threat we face isn’t just automation and robots destroying jobs, but the larger sense that humans could become obsolete altogether?

William Nelson Joy (born November 8, 1954) is an American computer scientist. He co-founded Sun Microsystems in 1982 along with Vinod Khosla, Scott McNealy and Andreas von Bechtolsheim, and served as chief scientist at the company until 2003. He played an integral role in the early development of BSD UNIX while a graduate student at Berkeley, and he is the original author of the vi text editor. He also wrote the 2000 essay “Why the Future Doesn’t Need Us”, in which he expressed deep concerns over the development of modern technologies.


Science News – The Post-Human World

5 Topics That Are “Forbidden” to Science
A yearly conference organized by the MIT Media Lab tackles “forbidden research”, the science that is constrained by ethical, cultural and institutional restrictions. The purpose of the conference is to give scientists a forum to consider these ideas and questions and to discuss the viability and necessity of studying topics like the rights of AI and machines, genetic engineering, climate change and others.
Edward Snowden, who appeared remotely at the 2016 conference, summarized its “theme” as “law is no substitute for conscience.” Pointing to his work against pervasive digital surveillance, he reiterated that “the legality of a thing is quite distinct from the morality of it.”
The major “forbidden” topics discussed at the conference were, unsurprisingly, fraught with political implications:

1. Messing with Nature
2. Engineering the Climate
3. Robot Ethics
4. Secure Communication Technology
5. Universal Access to Science

Could Genetic Engineering Lead to Unholy Mixing of Man and Beast?
By Adam Eliyahu Berkowitz February 21, 2017
“Consider the work of God; for who can make that straight, which He hath made crooked?” Ecclesiastes 7:13 (The Israel Bible™)
New technology enabling scientists to manipulate genes, mixing human genes and organs with those of animals, is a disturbing trend in science which one rabbi believes mirrors the sin that led to global destruction in the generation of Noah.
Last week, the National Academies of Sciences and Medicine released a new report including recommendations to ensure genetic research done in the United States is performed responsibly and ethically. In essence, this report gave the green light to gene research, even though government funding for such research is currently banned because of the ethical dilemmas it raises.
The new technology carries practical risks. Genetic research can take two forms: gene editing to cure or prevent disease, and gene editing to enhance humans. Genetics is uncharted territory, and scientists could accidentally introduce a dangerous mutation that will harm future generations, or, in an attempt to create vaccines, inadvertently create a superior form of the disease which could threaten mankind.
Rabbi Moshe Avraham Halperin of the Machon Mada’i Technology Al Pi Halacha (the Institute for Science and Technology According to Jewish Law) stated in response to the report that there are clear Torah guidelines for this new technology. Rabbi Halperin referred to the Biblical law concerning mixing of species.

The Post-Human World
A conversation about the end of work, individualism, and the human species with the historian Yuval Harari
Athit Perawongmetha / Reuters
DEREK THOMPSON  FEB 20, 2017
Famine, plague, and war. These have been the three scourges of human history. But today, people in most countries are more likely to die from eating too much than from eating too little, more likely to die of old age than of a great plague, and more likely to commit suicide than to die in war.
With famine, plague, and war in their twilight—at least, for now—mankind will turn its focus to achieving immortality and permanent happiness, according to Yuval Harari’s new book Homo Deus. In other words, to turning ourselves into gods.
Harari’s previous work, Sapiens, was a swashbuckling history of the human species. His new book is another mind-altering adventure, blending philosophy, history, psychology, and futurism. We spoke recently about its most audacious predictions. This conversation has been edited for concision and clarity.

Wired – Humans 2.0: these geneticists want to create an artificial genome by synthesising our DNA
Scientists intend to have fully synthesised the genome in a living cell – which would make the material functional – within ten years, at a projected cost of $1 billion
By EMMA BRYCE
Sunday 26 February 2017
In July 2015, 100 geneticists met at the New York Genome Center to discuss yeast. At 12 million base pairs, the yeast genome is the largest scientists have tried to produce synthetically.
Andrew Hessel, a researcher with the Bio/Nano research group at software company Autodesk, was invited to speak at the event. The audience asked him which organism should be synthesised next. “I said, ‘Look around the room. You’ve got hardly anyone here and you’re doing the most sophisticated genetic engineering in the world,’” Hessel recalls. “‘Why don’t you take a page out of history and set the bar high? Do the human genome.’”
This triggered a panel discussion that stuck in Hessel’s mind for weeks. Soon afterwards, he contacted George Church, a prominent geneticist at Harvard University, to gauge his interest in launching what would effectively be the Human Genome Project 2.0. “To me it was obvious,” Hessel recalls. “If we could read and analyse a human genome, we should also write one.”
A year later, his provocation had become reality. In May 2016, scientists, lawyers and government representatives converged at Harvard to discuss the Human Genome Project-Write (HGP-Write), a plan to build whole genomes out of chemically synthesised DNA. It will build on the $3 billion (£2.3bn) Human Genome Project, which mapped each letter in the human genome.
Leading the Harvard event was Church, whose lab is synthesising the 4.5-million-base-pair E. coli genome, and Jef Boeke, the NYU School of Medicine geneticist behind the yeast synthesis project. “I think we realised the two of us were getting good enough at those two genomes that we should be discussing larger ones,” says Church.
“If we can achieve this, it should be possible to write large genomes in hours”

Wired Space Photo of the Day – ISON

Oct 8, 2013

Comet ISON Right Now

Here is what comet ISON looked like this morning through the Schulman 0.8 Telescope atop Mount Lemmon at the UA SkyCenter. I am certain more images of this will be coming out shortly as it increases in brightness during its dive towards the Sun. Here is hoping it survives that rendezvous and emerges as something spectacular on the other side!

Image: Copyright Adam Block/Mount Lemmon SkyCenter/University of Arizona

Caption: Adam Block

Why the future doesn’t need us.

Our most powerful 21st-century technologies – robotics, genetic engineering, and nanotech – are threatening to make humans an endangered species.

By Bill Joy

From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.

Ray and I were both speakers at George Gilder’s Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.

I had missed Ray’s talk and the subsequent panel that Ray and John had been on, and they now picked right up where they’d left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn’t happen, because the robots couldn’t be conscious.

While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray’s proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.

It’s easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw – one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.

I found myself most troubled by a passage detailing a dystopian scenario:

THE NEW LUDDITE CHALLENGE

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite – just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.1


1 The passage Kurzweil quotes is from Kaczynski’s Unabomber Manifesto, which was published jointly, under duress, by The New York Times and The Washington Post to attempt to bring his campaign of terror to an end. I agree with David Gelernter, who said about their decision:

“It was a tough call for the newspapers. To say yes would be giving in to terrorism, and for all they knew he was lying anyway. On the other hand, to say yes might stop the killing. There was also a chance that someone would read the tract and get a hunch about the author; and that is exactly what happened. The suspect’s brother read it, and it rang a bell.

“I would have told them not to publish. I’m glad they didn’t ask me. I guess.”

(Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)


Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification. His work on the Jini pervasive computing technology was featured in Wired 6.08.

In the book, you don’t discover until you turn the page that the author of this passage is Theodore Kaczynski – the Unabomber. I am no apologist for Kaczynski. His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber’s next target.

Kaczynski’s actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.

Kaczynski’s dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy’s law – “Anything that can go wrong, will.” (Actually, this is Finagle’s law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.2

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.

I started showing friends the Kaczynski quote from The Age of Spiritual Machines; I would hand them Kurzweil’s book, let them read the quote, and then watch their reaction as they discovered who had written it. At around the same time, I found Hans Moravec’s book Robot: Mere Machine to Transcendent Mind. Moravec is one of the leaders in robotics research, and was a founder of the world’s largest robotics research program, at Carnegie Mellon University. Robot gave me more material to try out on my friends – material surprisingly supportive of Kaczynski’s argument. For example:

The Short Run (Early 2000s)

Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials.

In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species). Robotic industries would compete vigorously among themselves for matter, energy, and space, incidentally driving their price beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.

There is probably some breathing room, because we do not live in a completely free marketplace. Government coerces nonmarket behavior, especially by collecting taxes. Judiciously applied, governmental coercion could support human populations in high style on the fruits of robot labor, perhaps for a long while.

A textbook dystopia – and Moravec is just getting wound up. He goes on to discuss how our main job in the 21st century will be “ensuring continued cooperation from the robot industries” by passing laws decreeing that they be “nice,”3 and to describe how seriously dangerous a human can be “once transformed into an unbounded superintelligent robot.” Moravec’s view is that the robots will eventually succeed us – that humans clearly face extinction.

I decided it was time to talk to my friend Danny Hillis. Danny became famous as the cofounder of Thinking Machines Corporation, which built a very powerful parallel supercomputer. Despite my current job title of Chief Scientist at Sun Microsystems, I am more a computer architect than a scientist, and I respect Danny’s knowledge of the information and physical sciences more than that of any other single person I know. Danny is also a highly regarded futurist who thinks long-term – four years ago he started the Long Now Foundation, which is building a clock designed to last 10,000 years, in an attempt to draw attention to the pitifully short attention span of our society. (See “Test of Time,” Wired 8.03, page 78.)

So I flew to Los Angeles for the express purpose of having dinner with Danny and his wife, Pati. I went through my now-familiar routine, trotting out the ideas and passages that I found so disturbing. Danny’s answer – directed specifically at Kurzweil’s scenario of humans merging with robots – came swiftly, and quite surprised me. He said, simply, that the changes would come gradually, and that we would get used to them.

But I guess I wasn’t totally surprised. I had seen a quote from Danny in Kurzweil’s book in which he said, “I’m as fond of my body as anyone, but if I can be 200 with a body of silicon, I’ll take it.” It seemed that he was at peace with this process and its attendant risks, while I was not.

While talking and thinking about Kurzweil, Kaczynski, and Moravec, I suddenly remembered a novel I had read almost 20 years ago – The White Plague, by Frank Herbert – in which a molecular biologist is driven insane by the senseless murder of his family. To seek revenge he constructs and disseminates a new and highly contagious plague that kills widely but selectively. (We’re lucky Kaczynski was a mathematician, not a molecular biologist.) I was also reminded of the Borg of Star Trek, a hive of partly biological, partly robotic creatures with a strong destructive streak. Borg-like disasters are a staple of science fiction, so why hadn’t I been more concerned about such robotic dystopias earlier? Why weren’t other people more concerned about these nightmarish scenarios?


2 Garrett, Laurie. The Coming Plague: Newly Emerging Diseases in a World Out of Balance. Penguin, 1994: 47-52, 414, 419, 452.

3 Isaac Asimov described what became the most famous view of ethical rules for robot behavior in his book I, Robot in 1950, in his Three Laws of Robotics: 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.