
There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying "philosophy is dead – all we need now is physics," or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

I opened this book with a favorable preconception. Its author, Nick Bostrom, is a Swedish philosopher who has attracted media attention in recent years for his work on new technologies, transhumanism, and futurist thinking. The book has influenced a number of prominent figures since its release in 2014, including Elon Musk and Bill Gates, which makes it seem like indispensable reading for anyone interested in the development of artificial intelligence.

It is worth pointing out immediately that this is not really a popular science book. The first handful of chapters are for everyone, but after that the bulk of the book is probably best suited to undergraduate students of philosophy or artificial intelligence, reading more like a textbook than anything else, particularly in its dogged detail.

In my opinion, the most interesting part of the book is the author's description of the possible outcomes of developing artificial intelligence. Stephen Hawking and Bill Gates have also recently raised the alarm about artificial intelligence. If a superhuman artificial intelligence were created, it would be the biggest event in human history, and it could very well be the last. We are familiar only with human intelligence, which may be a small sample of the possible forms intelligence can take. Bostrom argues that the most likely path to superintelligence is a hard takeoff: an artificial intelligence would rise quickly once it reached human-level intelligence and rapidly reorganize itself into a far superior form of mind.
It would quickly gain powers and abilities far beyond ours, and it would be more alien and unfathomable than anything we have ever seen. If its goals do not match up with the human project, so much for the human race. In great detail, Bostrom lays out where artificial intelligence could go seriously wrong for us. Disasters in the abstract may make us yawn, but Bostrom gives the details of what the catastrophe might look like; the Hellmouth is much scarier when the picture comes into focus.

The sci-fi scenario of intelligent machines controlling the world could become reality very soon after their capabilities surpass the human brain, Bostrom argues. Machines could augment their own capabilities far faster than human computer scientists could. "Machines have a number of fundamental advantages, which will give them devastating superiority," he writes. "Biological humans, even if enhanced, will be outclassed." He outlines different ways an artificial intelligence might escape the physical bonds of the hardware in which it was created. For instance, it might use its hacking superpower to take command of robotic manipulators and automated labs, or deploy its powers of social manipulation to persuade human collaborators to work for it. There might be a covert preparation stage in which self-replicating microscopic entities, built by nanotechnology or biotechnology, are deployed worldwide at extremely low concentration. Then, at a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might spring forth.

What would the world be like after the takeover? Bostrom thinks it would contain far more complex and clever structures than anything we can imagine today, but would be devoid of any being that is conscious or whose welfare has moral significance. "A society of economic miracles and technological awesomeness, with nobody there to benefit," as Bostrom puts it.
"A Disneyland without children."

The book starts out somewhat promisingly by not taking a stand on whether strong artificial intelligence is imminent, but that was the high point of my reading. Throughout, I kept asking myself how the author knows what he claims is true, or thinking that what he was saying was simply wrong, and that anyone with a modicum of familiarity with the field in question would know it. Which brings us to the second question: is achieving this kind of superintelligence realistic?

In most cases, Bostrom's argument follows the same pattern: he presents a thought without any technical grounding, traces out how it might be true (or, in a few "slam-dunk" sections, *several* ways it might be true), and then moves on to the next section, where he treats everything introduced earlier as proved. Previous speculation becomes cast-iron truth, used to justify new claims without any justification or explanation of the old idea's feasibility. It is an opaque formula. For example, if you think you know what Bayesian inference is in statistics, reading the first inset dedicated to it will plunge you into total confusion. "As shown in figure .." or "As shown in the table" are expressions that come up often in the text, except that the figures and tables show nothing: they rest on no credible data and are often just hand-drawn curves along axes without units. The author presents many equations in the same way; they are never justified, tested, or criticized. Part of the book's argument rests on evidence that is no evidence at all: it is purely speculative, deals with an abstract AI, and is not grounded in any fact.

This is not to say that the outcome Bostrom fears is impossible.
Even though I think many of the specific things he finds plausible are much less so than he asserts, I do think a very powerful "unfriendly" artificial intelligence is a possibility that should be considered by those in a position to really understand the problem, and to act against it if it turns out to be real. But we have no reason to suspect that the particular issues he proposes are the ones that will matter, that the particular characteristics he ascribes to future artificial intelligence are the ones that will be salient, or indeed that this problem is likely enough, near enough, and tractable enough to be worth spending significant resources on at the moment. Nothing Bostrom says compellingly privileges his particular predictions over countless possible others, even if you grant that extraordinarily powerful artificial intelligence is possible and its behavior hard to predict. I continually had the sense (sometimes explicitly echoed by Bostrom himself!) that you could substitute whole worlds of incompatible particulars for the ones he proposes and still make the same claims.

His conclusion is that researchers in the field of AI are children playing with a bomb ready to detonate. He calls on the children to put down their toy and fetch the nearest adult. We do not doubt for a moment that in his mind the adult is the philosopher.

Bostrom explores a great many questions in this book, but, oddly enough, it never seems to occur to him to think about the moral responsibility we humans might have toward an intelligent machine: not a figment of our imagination, but a being we may someday create and that could at least be compared to us. Charity begins at home, I suppose.
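A footnote to the complaint above about the Bayesian-inference inset: the rule that Bostrom's figure obscures is, in itself, a one-line formula that can be stated and checked concretely. The sketch below is my own illustration, not anything from the book; the function name and the numbers are invented for the example:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | not-H) * P(not-H).

def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from a prior P(H) and the two likelihoods."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# Illustrative numbers: a hypothesis with a 1% prior, and evidence that is
# 90% likely if the hypothesis is true, 5% likely if it is false.
print(round(bayes_posterior(0.01, 0.9, 0.05), 3))  # 0.154
```

The point of the toy example is only that a genuine treatment of Bayesian inference yields numbers one can verify, which is exactly what the book's unlabeled hand-drawn curves do not.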
