Post by plutronus on Dec 28, 2018 0:57:24 GMT -6
Robots Will Kill You....
Ray Kurzweil, Google's director of engineering and a noted artificial intelligence optimist, said he expects AI systems to have enough emotional intelligence to sustain a romantic relationship with a human by about 2029.
Geoffrey Hinton, a Google AI researcher, is working on translating 'thoughts' into code, something he calls "thought vectors."
Eric Mark of CNET says, "it might just put me out of a job, in which case I'm on board with Elon Musk, Stephen Hawking, Bill Gates and the other smart people preaching caution in our approach to artificial intelligence."
Europe Wants A Mandatory Kill Switch on Robots, Just In Case ...
"To combat the robot revolution, the European Parliament’s legal affairs committee has proposed that robots be equipped with emergency “kill switches” to prevent them from causing excessive damage:
The proposal calls for a new charter on robotics that would give engineers guidance on how to design ethical and safe machines. For example, designers should include “kill switches” so that robots can be turned off in emergencies. They must also make sure that robots can be reprogrammed if their software doesn’t work as designed. The proposal states that designers, producers and operators of robots should generally be governed by the “laws of robotics” described by science fiction writer Isaac Asimov.
The proposal also says that robots should always be identifiable as mechanical creations. That will help prevent humans from developing emotional attachments. “You always have to tell people that robot is not a human and a robot will never be a human,” said rapporteur Mady Delvaux. “You must never think that a robot is a human and that he loves you.” The report cites the example of care robots, saying that people who are physically dependent on them could develop emotional attachments."
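The "kill switch" requirement described above is essentially the classic emergency-stop (e-stop) pattern from control engineering: a latching stop signal that every control cycle must check before actuating. A minimal sketch in Python (all class and method names here are hypothetical, for illustration only):

```python
import threading

class Robot:
    """Toy controller illustrating an emergency 'kill switch' (e-stop).

    A thread-safe flag is checked on every control cycle; once tripped,
    all actuation stops, and there is deliberately no method to clear it.
    """

    def __init__(self):
        self._estop = threading.Event()
        self.cycles_run = 0

    def kill(self):
        """Trip the emergency stop. In a real robot this would be wired to
        a physical button that cuts actuator power in hardware as well."""
        self._estop.set()

    def stopped(self):
        return self._estop.is_set()

    def step(self):
        """One control cycle; refuses to actuate once the e-stop is set."""
        if self._estop.is_set():
            return False
        self.cycles_run += 1
        # ... read sensors, command actuators ...
        return True

robot = Robot()
for _ in range(3):
    robot.step()
robot.kill()             # operator hits the kill switch
assert not robot.step()  # no further actuation
print(robot.cycles_run)  # 3
```

Note that a software-only flag like this is just a sketch of the idea: safety standards for real machinery require the e-stop to remove actuator power through a hardware path, precisely so that misbehaving software cannot override it.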
"Let’s talk about AI and which jobs are in danger first. Economists generally break employment into cognitive versus physical jobs and routine versus non-routine jobs. This gives us four basic categories of work:
- Routine physical: digging ditches, driving trucks
- Routine cognitive: accounts-payable clerk, telephone sales
- Non-routine physical: short-order cook, home health aide
- Non-routine cognitive: teacher, doctor, CEO
Machine-learning researchers estimate that speech transcribers, translators, commercial drivers, retail sales, and similar jobs could be fully automated during the 2020s. Within a decade after that, all routine jobs could be gone. All these low-paying Democrat jobs will be done by AI, and the displaced workers will be on government subsidies.
Non-routine jobs will be next:
- surgeons,
- novelists,
- construction workers,
- police officers,
- etc.
These jobs could all be fully automated during the 2040s.
By 2060, AI will be capable of performing any task currently done by humans. Normal jobs are what almost all of us have.
If an Oxford-Yale survey is correct, we'll face an employment apocalypse: the disappearance of routine work of all kinds by the mid-2030s.
That represents nearly half the US labor force. The consulting firm PricewaterhouseCoopers recently released a study saying much the same. It predicts that 38 percent of all jobs in the United States are “at high risk of automation” by the early 2030s, most of them in routine occupations. In the even nearer term, the World Economic Forum predicts that the rich world will lose 5 million jobs to robots by 2020, while a group of AI experts, writing in "Scientific American", estimates that 40 percent of the 500 biggest companies will vanish within a decade.
Kai-Fu Lee, a former Microsoft and Google executive, now a prominent investor in Chinese AI startups, thinks artificial intelligence “will probably replace 50 percent of human jobs.” Within 10 years. Ten years! Maybe it’s time to really start thinking hard about AI."
Fortune’s technology newsletter
"Russian President Vladimir Putin has declared that the control of artificial intelligence will be crucial to global power.
In a “science lesson” to start off the Russian school year, President Putin reportedly said that artificial intelligence is “the future, not only for Russia, but for all humankind.”
“It comes with colossal opportunities, but also threats that are difficult to predict,” Putin said, as quoted by the state-funded media organization RT. “Whoever becomes the leader in this sphere will become the ruler of the world.”
While some more excitable outlets have reported this as Putin saying Russia will use AI to take over the world, that’s not quite what he said. Rather, according to the Associated Press’s English-language translation, Putin argued that “it would be strongly undesirable if someone wins a monopolist position.”
“If we become leaders in this area, we will share this know-how with [the] entire world, the same way we share our nuclear technologies today,” Putin said, per RT.
Putin’s warning about AI monopolization was in line with the fears of academics such as Nick Bostrom and certain technologists such as Elon Musk, who worry that the transition between today’s proto-AI technologies and a true AI superintelligence may take place so quickly, with the intelligence’s subsequent development being so rapid, that competing research efforts will be left in the dust.
This would put an inordinate amount of power in the hands of whoever developed the leading AI, or — in the “Terminator” scenario—in the hands of the AI itself.
According to the AP report, Putin also predicted that countries would fight future wars with drones, with the victor being determined by drone supremacy."
"Should Robots Be Able to Decide to Kill You On Their Own?
A U.N. report released in May 2013 called for a global moratorium on developing highly sophisticated [AI] robots that can select and kill targets without a human being directly issuing a command. These machines, known as "Lethal Autonomous Robots" or LARs, may sound like science fiction - but experts increasingly believe some version of them could be created in the near future. The report, released by Professor Christof Heyns, U.N. Special Rapporteur on extrajudicial, summary or arbitrary executions, also calls for the creation of "a high level panel on LARs to articulate a policy for the international community on the issue."
In a recent paper, law professors Kenneth Anderson and Matthew Waxman suggest that robots would be free from "human-soldier failings that are so often exacerbated by fear, panic, vengeance, or other emotions - not to mention the limits of human senses and cognition."
Still, many concerns remain. These systems, if used, would be required to conform to international law. If LARs couldn't follow rules of distinction and proportionality - that is, determine correct targets and minimize civilian casualties, among other requirements - then the country or group using them would be committing war crimes. And even if these robots were programmed to follow the law, it is entirely possible that they could remain undesirable for a host of other reasons. They could potentially lower the threshold for entering into a conflict. Their creation could spark an arms race that - because of their advantages - would become a feedback loop. The U.N. report describes the fear that "the increased precision and ability to strike anywhere in the world, even where no communication lines exist, suggests that LARs will be very attractive to those wishing to perform targeted killing."
The report also warns that "on the domestic front, LARs could be used by States to suppress domestic enemies and to terrorize the population at large." Beyond that, the report warns LARs could exacerbate the problems associated with the position that the entire world is a battlefield, one that - though the report doesn't say so explicitly - the United States has held since 9/11. "If current U.S. drone strike practices and policies are any example, unless reforms are introduced into domestic and international legal systems, the development and use of autonomous weapons is likely to lack the necessary transparency and accountability," says Sarah Knuckey, a human rights lawyer at New York University's law school who hosted an expert consultation for the U.N. report."
More to come....