|
Post by auntym on Nov 12, 2015 13:54:09 GMT -6
Auntym is always looking for cool stuff. thanks sky...
|
|
|
Post by auntym on Jan 14, 2016 18:09:47 GMT -6
www.dailygalaxy.com/my_weblog/2016/01/the-future-of-intelligent-machine-communication-from-apples-siri-to-hondas-robot-asimo-uc-berkeley.html
January 12, 2016
The Future of Intelligent Machine Communication --From Apple's Siri to Honda's Robot Asimo (UC Berkeley)
From Apple’s Siri to Honda’s robot Asimo, machines seem to be getting better and better at communicating with humans. But some neuroscientists caution that today’s computers will never truly understand what we’re saying because they do not take into account the context of a conversation the way people do. Specifically, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues, machines don’t develop a shared understanding of the people, place and situation – often including a long social history – that is key to human communication. Without such common ground, a computer cannot help but be confused. “People tend to think of communication as an exchange of linguistic signs or gestures, forgetting that much of communication is about the social context, about who you are communicating with,” Stolk said. The word “bank,” for example, would be interpreted one way if you’re holding a credit card but a different way if you’re holding a fishing pole. Without context, making a “V” with two fingers could mean victory, the number two, or “these are the two fingers I broke.” “All these subtleties are quite crucial to understanding one another,” Stolk said, perhaps more so than the words and signals that computers and many neuroscientists focus on as the key to communication. “In fact, we can understand one another without language, without words and signs that already have a shared meaning.” Babies and parents, not to mention strangers lacking a common language, communicate effectively all the time, based solely on gestures and a shared context they build up over even a short time. 
As two people conversing rely more and more on previously shared concepts, the same area of their brains – the right superior temporal gyrus – becomes more active (blue is activity in communicator, orange is activity in interpreter). This suggests that this brain region is key to mutual understanding as people continually update their shared understanding of the context of the conversation to improve mutual understanding. CONTINUE READING: www.dailygalaxy.com/my_weblog/2016/01/the-future-of-intelligent-machine-communication-from-apples-siri-to-hondas-robot-asimo-uc-berkeley.html
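The “bank” example in the article boils down to a tiny lookup problem: the same signal resolves to different meanings depending on which contextual cues the two speakers share, and with no common ground there is no determinate meaning at all. A minimal sketch of that idea (mine, not the article's; all names and cue strings are hypothetical):

```python
# Toy illustration of context-dependent interpretation: the same
# signal maps to different meanings depending on shared contextual
# cues. (Illustrative only; not from the article.)

SENSES = {
    "bank": {
        "credit card": "financial institution",
        "fishing pole": "river bank",
    },
    "V sign": {
        "election night": "victory",
        "arithmetic": "the number two",
    },
}

def interpret(signal: str, shared_context: set) -> str:
    """Return a reading of `signal` licensed by the shared context, if any."""
    for cue, meaning in SENSES.get(signal, {}).items():
        if cue in shared_context:
            return meaning
    return "ambiguous"  # no common ground, no shared meaning

print(interpret("bank", {"credit card"}))   # financial institution
print(interpret("bank", {"fishing pole"}))  # river bank
print(interpret("bank", set()))             # ambiguous
```

Real dialogue systems obviously can't get by with a hand-built table like this; the article's point is precisely that the "table" humans use is built up socially, on the fly, between particular people.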
|
|
|
Post by auntym on Aug 27, 2017 0:46:40 GMT -6
Massive robot dance - Guinness World Records
Guinness World Records, published on Aug 17, 2017
The robots were Dobi models who along with being programmed to dance can also sing, box, play football and execute kung fu moves. The robot display broke the previous record of 1,007, achieved by Ever Win Company & Ltd. in 2017. Read more: bit.ly/GWR-RoboDance
The most robots dancing simultaneously is 1,069 and was achieved by WL Intelligent Technology Co, Ltd in Guangzhou, Guangdong, China. Welcome to the official Guinness World Records YouTube channel! If you're looking for videos featuring the world's tallest, shortest, fastest, longest, oldest and most incredible things on the planet, you're in the right place.
|
|
|
Post by jcurio on Aug 28, 2017 12:43:36 GMT -6
So why didn't they show all those robots playing football together?
|
|
|
Post by auntym on Mar 18, 2018 15:01:46 GMT -6
mysteriousuniverse.org/2018/03/computers-possessing-humans-and-our-perception-of-ai/
Computers Possessing Humans and Our Perception of AI
by Brent Swancer / mysteriousuniverse.org/author/brentswancer/
March 17, 2018
We live in an age where reliance on computers, machines, and artificial intelligence is becoming stronger and more commonplace, inescapable to the point that society would cease to function as we know it without these things. As we progress on into the future there is every indication that these artificial intelligences will continue to evolve to the point that they equal us and perhaps even surpass us. Far from strictly the realm of science fiction, this is almost a certainty at the rate we are progressing, and it is at the stage now where we are learning that we are perhaps unprepared for what we will encounter in this hazy, unexplored domain lying before us. At least one group of researchers has realized that we are approaching a time when we will be faced with advanced AI in a realistic human form, and they have gone about testing how we will deal with this through a bizarre series of experiments in which computers basically possess humans. When we talk to someone, how does their appearance and demeanor affect how much we listen to what they actually say? This was one of the questions that drove a group of British researchers at the London School of Economics to start a truly bizarre experiment to create what they call an “echoborg.” It sounds quite sinister, and perhaps the future implications of it might be, but the main premise involves simply using a chatbot with relatively advanced AI in one room, and a living breathing volunteer in another, who wears an earpiece attached to the bot and simply repeats everything that comes through, essentially becoming the computer’s mouthpiece. In a sense, the human becomes “possessed” by the AI, and it is every bit as surreal as you might imagine. 
Researchers wanted to know if the appearance of an AI would affect how it was perceived, and got the idea for the experiment from the 19th Century play, Cyrano de Bergerac, in which the very smart and witty, but physically unattractive De Bergerac uses his handsome but not very bright rival Christian as a mouthpiece, feeding him what to say and do in order to woo the beautiful object of his desire, Roxanne. The psychologist Stanley Milgram picked up on this idea and tried to explore it through experiments using what he called “cyranoids,” which were people who were told and dictated what to say or do through earpieces by a person in another room. In this manner the cyranoid then talks to someone who is unaware that the one really speaking is a completely different person. The idea was to see whether the physical perception of the person someone was talking to could shape the way they interacted with them, and it resoundingly did. Interestingly, it was found that whole conversations could go by without the person ever realizing that the one in front of them was not who they were really talking to. It is indeed rather a creepy concept, but Milgram took it even further by trying ever more extreme combinations of people, such as an old man speaking through a child, a man speaking through a woman, and vice-versa. In every case it was found that people were actually shockingly poor at determining if an imposter was speaking through the person they were talking to, and just went along with whatever they said no matter how much of a disconnect there was between what they were hearing and who was saying it. Even when people in the experiment were made aware of the fact that the speaker was not who they appeared to be, the illusion nevertheless still had an effect on their perceptions of what was being said. 
In short, the physical form of the speaker had a great power over the conversation, and there was the sobering realization that appearance seemed to be more of a factor in people’s perceptions of the speaker than the content of what they actually said. It was these experiments that led to the work on echoborgs, which take it all a step further still by making the third party not a real person at all, but rather an artificial intelligence. They wanted to know what would happen if someone were to talk to a computer speaking by way of a human go-between, basically a human body with a robot mind for all intents and purposes. Would it change the way they interacted with or accepted the AI? Would anyone even realize they were talking to a computer at all? A researcher on the project by the name of Kevin Corti explained of the unusual project: “Most of the time we encounter AI today, it’s in a very mechanical interface. This was a way of doing it so that people actually believe they are encountering another person. We are a very long way away from a perfectly human interface, but we can leapfrog that and put the machine mind in a human body, to see what happens.”
And what happens is fairly weird indeed, with the experiments using the echoborgs yielding some surprising and intriguing results. In cases where the subject speaking to the echoborg is unaware that they are talking to a computer, it seems that they are poor at catching on to the charade, if they even figure it out at all. They continue talking despite any quirks or inconsistencies in the contents or flow of communication, and even after whole conversations don’t realize that anything is amiss or that they are basically chatting to a chatbot. Corti said of this remarkable phenomenon: “The vast majority of participants never picked up that they were speaking to the words of the computer program. They just assumed the person had some problem, where their social inhibitions had been diminished. Never for a minute did they think they were talking to a hybrid.”
On the other hand, when the subject is told that they may or may not be talking to an AI, similar to the Turing Test, they are much more wary and readily able to discern the computer from the real person. In this case there was a set of expectations to be met in the mind of the subject for being human, and they were less forgiving and more suspicious of odd sentence patterns or strange things said. This could have repercussions on how we deal with sophisticated AI in the future, when we are likely to have more and more human-looking androids. It seems that with a human we are able to be very forgiving of idiosyncrasies and oddities in conversation, but we are primed to be very critical and even dismissive of what an AI says. When confronted with the fact that they had been speaking with an AI through a human being, most people were deeply uncomfortable, even repulsed, finding the experience unsettling on an almost primal level. Corti said of this and its impact on our future dealings with robots, no matter how human they look: “We might have AIs intelligent beyond belief but our knowledge of them being non-human might limit our desire to interact with them in a human way. As artificial intelligence starts to get close to pass for human, it’s not just uncanny, it’s awkward. There’s a kind of awkward valley.”
If this can happen with the relatively crude AI and chatbots we have now, imagine how it will be when artificial intelligence advances to the point where it can effectively and perfectly emulate a human being. How can we expect to approach, react to, and interact with these entities? Will we accept them and converse with them, or will we be instinctively repelled, their existence just as alien as something from another world? 
These are things that we don’t have the answers for at the moment, and which Corti and his research group are trying to probe at the edges of. This seems to be a very real challenge facing us as we move on into a new world of ever more advanced robots and AI, and how people think of these intelligences they are interacting with will have a big part to play in how they are integrated into society as a whole. Another researcher on the project named Alex Gillespie has said of their experiments looking into this: “I think it really gets us ahead of the curve. If you look at history, we think it’s powered by technology – but it’s not. It’s public opinion. They will decide if these things are on the streets as police officers, so a huge amount of work needs to be done to see how we relate to it.”
It is certainly a complex and challenging problem we are facing as we move on into new, uncharted realms of technology for which there is no historical precedent. As we approach an age when AI will be advanced to the point that it is indistinguishable from a human and the appearance of these machines approaches a close approximation of us, these are important things to think about. After all, how would you feel talking to a machine? How would you feel after being confronted with a human serving as a mouthpiece for one, as in the echoborg experiments? Would you be filled with wonder and awe, or a sense of unease or disgust? It seems that the time will come when we speak with AI on a regular basis, but how we will deal with that and react to it is a foggier and less defined road. The implications of this advance into ever more human machines present a conundrum for us on just what being human is. We are in uncharted waters here, and we will likely not know the repercussions of our inexorable push into the frontiers of technology until we reach that point. CONTINUE READING: mysteriousuniverse.org/2018/03/computers-possessing-humans-and-our-perception-of-ai/
|
|
|
Post by jcurio on Mar 18, 2018 17:46:55 GMT -6
“…a man speaking through a woman, and vice-versa. In every case it was found that people were actually shockingly poor at determining if an imposter was speaking through the person they were talking to, and just went along with whatever they said no matter…”
***** Well, DUH. (had to stop reading this for now 🤮). —————- It is interesting. Keep up the good work, Aunty! 😉 Have you seen the new commercial where a young girl can open “things” with just a look?? (You can unlock your phone SOON with a look).
|
|
|
Post by jcurio on Mar 18, 2018 17:50:44 GMT -6
(not related.... just saw it.... new tech.....sorry!)
|
|
|
Post by jcurio on Mar 18, 2018 17:56:51 GMT -6
Kind of related..... actually.
Any one remember me talking about my phone (in 2007 to be exact) basically saying “hello” to me whenever I walked by it?? It buzzed. It was freaky.
If your iPhone now unlocks by recognizing your face..... can’t be “freaky” any more.
Here We GOoooooooo. Wheeeeeeeeee.
|
|
|
Post by jcurio on Mar 18, 2018 18:04:20 GMT -6
Now? Think a little deeper. Deeper.
I WAS NOT having a “premonition”. This IS NOT “magical thinking”.
In 2007 I was engaged to/with THAT guy. 2007 to 2018..... me thinks JUST long enough for (some) military to already HAVE this app.... possible that I was a type of “g i knee pig”....
awwwwww. Just keep thinking I’m crazy..
|
|
|
Post by swamprat on Mar 19, 2018 17:05:12 GMT -6
Self-Driving Uber Car Kills Arizona Pedestrian By Daisuke Wakabayashi March 19, 2018
SAN FRANCISCO — A woman in Tempe, Ariz., has died after being hit by a self-driving car operated by Uber, in what appears to be the first known death of a pedestrian struck by an autonomous vehicle on a public road.
The Uber vehicle was in autonomous mode with a human safety driver at the wheel when it struck the woman, who was crossing the street outside of a crosswalk, the Tempe police said in a statement. The episode happened on Sunday around 10 p.m. The woman was not publicly identified.
Uber said it had suspended testing of its self-driving cars in Tempe, Pittsburgh, San Francisco and Toronto.
“Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident,” an Uber spokeswoman, Sarah Abboud, said in a statement.
The fatal crash will most likely raise questions about regulations for self-driving cars. Testing of self-driving cars is already underway for vehicles that have a human driver ready to take over if something goes wrong, but states are starting to allow companies to test cars without a person in the driver’s seat. This month, California said that, in April, it would start allowing companies to test autonomous vehicles without anyone behind the wheel.
Arizona already allows self-driving cars to operate without a driver behind the wheel. Since late last year, Waymo, the self-driving car unit from Google’s parent company Alphabet, has been using cars without a human in the driver’s seat to pick up and drop off passengers there. The state has largely taken an accommodating approach, promising that it would help keep the driverless car industry free from regulation. As a result, technology companies have flocked to Arizona to test their self-driving vehicles.
Autonomous cars are expected to ultimately be safer than human drivers, because they don’t get distracted and always observe traffic laws. However, researchers working on the technology have struggled with how to teach the autonomous systems to adjust for unpredictable human driving or behavior.
An Uber self-driving car was involved in another crash a year ago in Tempe. In that collision, one of Uber’s Volvo XC90 sport utility vehicles was hit when the driver of another car failed to yield, causing the Uber vehicle to roll over onto its side. The car was in self-driving mode with a safety driver behind the wheel, but police said the autonomous vehicle had not been at fault.
In 2016, a man driving his Tesla using Autopilot, the car company’s self-driving software, died on a state highway in Florida when it crashed into a tractor-trailer that was crossing the road in front of his car. Federal regulators later ruled there were no defects in the system to cause the accident.
mobile.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html?smid=tw-nytimes&smtyp=cur&referer=https://t.co/nPsR7XSphd?amp=1
|
|
|
Post by skywalker on Mar 19, 2018 18:33:05 GMT -6
I told you this would happen. Those self-driving thingies are menaces!
|
|
|
Post by jcurio on Mar 20, 2018 12:06:31 GMT -6
“…what appears to be the first known death of a pedestrian struck by an autonomous vehicle on a public road.”
****** Again. The wording here. ☹️ “public road”. We have no idea how many people have died during testing of ANY NEW TECH in “private”. This is huge. Someone always just thinks it’s a “glitch” that can be worked on. I realize that even if this project is “shelved” for 10 years, it (self-driving cars) will come up again. Would it hurt to wait? This event has me in tears. Humans should know right and wrong instinctively (what would injure) about a fellow human. And yes, I realize that small children have to be taught to obey such “instincts”. Don’t try to tell me this about tech.
|
|
|
Post by auntym on Mar 20, 2018 13:42:47 GMT -6
venturebeat.com/2018/03/19/what-we-actually-have-to-fear-from-killer-robots/
What we actually have to fear from killer robots
by Sam Charrington, This Week In Machine Learning & AI / venturebeat.com/author/sam-charrington-this-week-in-machine-learning-ai/
March 19, 2018
You’ve probably seen the latest Boston Dynamics video, which shows one of its recent quadruped creations, the SpotMini, opening a door despite being repeatedly accosted by a company employee. Boston Dynamics’ videos are notorious for eliciting both excitement and fear across social media and the internet. Nicholas King’s new parody of the Planet Earth documentary shows herds of SpotMinis taking over the planet. And the most recent season of Netflix’s Black Mirror features a murderous, highly autonomous SpotMini look-alike. So should we be concerned about killer robots taking over? My take: I don’t expect the robot uprising anytime soon, but there’s plenty for us to worry about here — as a society, a culture, and a species. The autonomous killer robots we imagine are so fearsome because (a) they’re autonomous, (b) they’re driven to kill, for some reason, and (c) they’re armed. Looking at these, in turn, allows us to isolate the true areas of concern and understand how we can work to head off our fears.
Autonomy
Autonomy is a layered, even nuanced concept. At the base level, autonomy can refer simply to the ability to get from A to B without human intervention. When I see videos like the SpotMini demonstration, I immediately think back to my interview with Aaron Ames, professor of mechanical and civil engineering at Caltech, which focused on intelligent robots. Our discussion inevitably turned to the then-latest Boston Dynamics video which, at the time, featured the Atlas robot performing a backflip. Ames’ take on the level of autonomy involved essentially boiled down to: not much. 
“It’s a preplanned behavior, so this robot has no knowledge of its environment, in the sense that it’s not observing where those blocks are and in real time adjusting its behavior and learning how to do this behavior,” said Ames. Rather, “they put those obstacles in the memory of the computer, they preplan those behaviors, [and] they do a bunch of experiments until they get the right behavior.” In other words, there’s still lots of work to do before we see autonomously walking robots able to deftly navigate the real world. According to Ames, not only are we not there yet, but researchers don’t even agree on the right basic approach to get us there. Some, like Pieter Abbeel, advocate an approach based on end-to-end deep learning, while Ames suggests a more integrative approach. So, for the time being, you can probably evade and outrun the robot, especially on varying terrain, but they’re getting there.
Agency
Still, autonomy in the sense of locomotion doesn’t quite get at what’s scary about the “autonomous killer robots” scenario. This is more about agency: the idea that the robot can have a beef with a human in the first place. I can think of a few scenarios in which a robot would have it in for a human: CONTINUE READING: venturebeat.com/2018/03/19/what-we-actually-have-to-fear-from-killer-robots/
|
|
|
Post by swamprat on Mar 21, 2018 19:36:15 GMT -6
Dashcam video of deadly self-driving Uber crash released
Dashcam video was released Wednesday night showing the dramatic and deadly crash of a self-driving Uber SUV in Arizona — as the woman operating the vehicle had her head down.
Two angles — interior and exterior camera footage — were released by the Tempe Police Department.
Officials did not release the moment the pedestrian, identified as 49-year-old Elaine Herzberg, was hit "due to the graphic nature of the impact."
www.foxnews.com/us/2018/03/21/dashcam-video-deadly-self-driving-uber-crash-released.html
|
|
|
Post by auntym on Apr 2, 2018 12:49:26 GMT -6
mysteriousuniverse.org/2018/03/job-interviews-will-soon-be-conducted-by-this-emotion-reading-russian-robot/
Job Interviews Will Soon be Conducted by This Emotion-Reading Russian Robot
by Sequoyah Kennedy / mysteriousuniverse.org/author/skennedy/
March 31, 2018
In a staggering blow to the illusion of corporate empathy, a Russian robotics firm in St Petersburg has released its newest creation: an HR artificial intelligence named Vera, which will conduct job interviews and narrow down fields of potential candidates by 90% through techniques such as reading emotions. Stafoy, the small robotics firm responsible for Vera, has 300 global clients that may end up adopting Vera to handle the first stages of their recruitment processes. As if the term “human resources” wasn’t creepy enough already. Stafoy says that Robot Vera is a.i. software meant to take some of the burden off hiring managers and recruiters by quickly vetting a large pool of candidates through phone or video interviews. Beyond just being a robot that you’re meant to talk to, the artificial intelligence further depersonalizes the job hunt through its claimed ability to handle up to 10 interviews at a time. The job-interviewing robot is being trained in a wide range of human-imitating and inquisition techniques, says Stafoy. Vera is currently being trained to recognize anger, pleasure, and disappointment in the candidates being interviewed. The robot is also being trained to have complex conversations with potential candidates. According to Bloomberg News: [Vera] combines speech recognition technologies from Google, Amazon.com, Microsoft, and Russia’s Yandex. 
Programmers fed 13 billion examples of syntax and speech from TV, Wikipedia, and job listings to expand the software’s vocabulary and help it speak more naturally and understand responses.
Because when I think “speaking naturally,” I definitely think “job listing.” What happens when someone who doesn’t exclusively talk in corporate jargon interviews with Vera? Will their CV just be stamped “insane” by a robot who learned to speak by reading monster.com postings?
“I’m sorry, but it seems you’re a bit overqualified for this position.”
As to the kinds of jobs which our new robot gatekeepers will be in charge of, it’s not futuristic, white-collar tech jobs. Robot Vera is primarily focused on “high turnover service and blue-collar jobs.” Which, coincidentally, are the same jobs in which robots are already replacing humans. It seems pretty rude for robots to offer jobs only to snatch them away again in two years. Among the companies already partnering with Robot Vera to assist them with making sure they’re never accused of having a soul are PepsiCo, Ikea, and L’Oréal. Robots are taking all the jobs, including the job of giving people jobs.
Vladimir Sveshnikov and Alexander Uraksin, the founders of Stafoy, say the a.i. will only do the initial screening, callbacks, and the first rounds of interviews. This, they say, eliminates 90% of potential candidates, and human hiring managers should ultimately have the final say about the last 10% of candidates. Which is all well and good until version 2.0 comes out. Robot Vera is being released in the United States and Europe this year and has already conducted 2,000 interviews. mysteriousuniverse.org/2018/03/job-interviews-will-soon-be-conducted-by-this-emotion-reading-russian-robot/
|
|
|
Post by auntym on Apr 17, 2018 13:16:47 GMT -6
mysteriousuniverse.org/2018/04/dungeons-dragons-could-lead-to-smarter-more-human-like-ai/
Dungeons & Dragons Could Lead to Smarter, More Human-Like AI
by Brett Tingley / mysteriousuniverse.org/author/bbtingley/
April 18, 2018
If you’re not worried about the oncoming struggle between humanity and artificial intelligence, you’re not paying close enough attention. Scores of experts both within the AI field and in other disciplines have issued dire warnings about the dangers of creating super-intelligent machines which can act and make decisions autonomously and whose existence does not depend on feeble, senescent meat sacks. Still, AI researchers continue to find ways to make these artificial abominations more intelligent and human-like. At this point, you have to wonder: how can this not end badly? I mean, do AI researchers not watch or read any science fiction? There are only two ways this could end: termination or enslavement. Then again, maybe the machines already have enslaved us… It has begun. This time though, we create our masters ourselves. Much of the research into how artificial intelligence systems learn centers on teaching machines how to play games – or letting them learn on their own. Some of the most high-powered AI constructs currently dominate humans at some of the most sophisticated human games like poker, chess, and Go, not to mention absolutely annihilating us at deathmatch video games. Forget Korean prodigies in smoke-filled internet cafés – the gaming world is about to be overrun with AI systems passing as human players. It likely already is. It turns out, though, that the somewhat rigid constraints of video games or tabletop games might mean a very different type of game is better suited to training AI: open-ended role-playing games like Dungeons & Dragons. That’s according to Beth Singler, a research associate at the Faraday Institute for Science and Religion at the University of Cambridge. Singler recently penned (keyboarded?) 
an essay asking if role-playing tasks might be just the thing to lead to more human-like AI: “Do we need a new test for intelligence, where the goal is not simply about success, but storytelling? What would it mean for an AI to ‘pass’ as human in a game of D&D? Instead of the Turing test, perhaps we need an elf ranger test?” Singler argues that since Dungeons & Dragons and similar role-playing games task the player with switching between roles, cooperating with other unpredictable players, and improvising according to changing game conditions, they might be able to train AI which can better function alongside humans who constantly and unconsciously do the same. “Instead of beating adversaries in games,” Singler writes, “we might learn more about intelligence if we tried to teach artificial agents to play together as we do: as paladins and elf rangers.” Maybe it’s best not to expose powerful, potentially malevolent AI to the world of swords and sorcery. Is more human-like AI what the world really needs though? With all of the threats facing humanity, why add a new potential adversary to the mix? Unless it’s a healer, because our party really needs one of those for our upcoming raid on that necromancer’s dungeon. mysteriousuniverse.org/2018/04/dungeons-dragons-could-lead-to-smarter-more-human-like-ai/
|
|
|
Post by auntym on Feb 2, 2019 15:01:22 GMT -6
mysteriousuniverse.org/2019/01/people-are-violently-assaulting-robots-and-scientists-are-worried/
People Are Violently Assaulting Robots and Scientists Are Worried
by Sequoyah Kennedy / mysteriousuniverse.org/author/skennedy/
January 28, 2019
We just have to accept it. We’ve been dragged into the future, despite however hard we kicked and screamed. Robots are real, and they’re basically indistinguishable from the robots of our fictions. If you’re worried about these metal abominations coming to take everything you love, and would like nothing more than to go all John Connor and reduce the lot of them to slag, you’re very much not alone, and researchers are quite concerned. In a January 19th New York Times article titled “Why Do We Hurt Robots?,” Jonah Bromwich cites a number of cases where people “brutally assaulted” robots. Poor, innocent “security robots” battered and beaten. Gangs of filthy meatbags attacking driverless cars. Three teenagers in Japan beating a robot “with all their might.” A Moscow man bludgeoning a “teaching robot” with a baseball bat as it pleaded for help. People, hilariously and ill-advisedly, crashing their own driverless cars on purpose (I respect the commitment to the fight, but those things are expensive). Another security robot was—just fight back the tears; tears can come later—wrapped in a tarp and covered in barbecue sauce. Humans are such horrible creatures that we manage to hurt things that don’t even have consciousness, let alone pain receptors.
Don’t look at me like that.
And thus, in keeping with the new cultural norm of using words without any consideration for what they mean, the term “robot abuse” was born. Destroying robots is wrong. For the same reason that destroying someone else’s car, toaster, musical instrument, house, or computer is wrong. It’s property destruction. However hilarious covering a security robot in barbecue sauce is, a lot of money and time went into creating that machine. But that’s all it is, a machine. 
Yet we all anthropomorphize robots, both those scared of a robot takeover and those so deeply entrenched in robotics that they see property destruction as abuse. Cognitive neuroscientist Agnieszka Wykowska, a researcher at the Italian Institute of Technology and the editor in chief of the International Journal of Social Robotics, thinks that our violence towards robots stems from the atavistic demons responsible for our tribalism and ostracization of the other. She says: “You have an agent, the robot, that is in a different category than humans. So you probably very easily engage in this psychological mechanism of social ostracism because it’s an out-group member. That’s something to discuss: the dehumanization of robots even though they’re not humans.” There’s a lot wrong with this statement. One: you can’t dehumanize something that isn’t human. Two: robots are not agents. They can only do what they are programmed to do. They literally have no agency. Asked about potential solutions to this “disturbing” problem, Ms. Wykowska offers a story about the savagery a colleague witnessed a kindergarten class express towards a robot: “Kids have this tendency of being very brutal to the robot, they would kick the robot, they would be cruel to it, they would be really not nice. That went on until the point that the caregiver started giving names to the robots. So the robots suddenly were not just robots but Andy, Joe and Sally. At that moment, the brutal behavior stopped. So, it’s very interesting because again it’s sort of like giving a name to the robot immediately puts it a little closer to the in-group.” So the solution is to teach kids that machines are people? Fantastic. What if, and I know this might sound simple, ignorant, or possibly even on-the-wrong-side-of-history, but what if we don’t want machines as part of the in-group? What if assigning humanity to a tool cheapens and debases our own self-awareness and understanding of what it means to be conscious? 
Man and machine future. I’m not going.
Let me just step up on this here soapbox for a minute. The whole problem is this shoving the future on us whether we want it or not. The Times article cites a Brown University and M.I.T. study that showed adding a robot to a workforce reduced the number of employed humans by six. In the very near future we will have armed security robots. We know these machines are coming, we know they’re changing the world, and we know it’s unavoidable. We need to treat them like the tools they are, not like people. We need, more than anything, to learn to celebrate humanity and all conscious non-humans (if I ever see a robot messing with an elephant, that’s going to be one broken robot), not blur the lines between conscious creatures and machines. Forcing people to deny a basic truth—that machines aren’t people—will only make people angrier in the short term, and in the long term will only further the mechanization and dehumanization of the world at large. But stop breaking other people’s robots. mysteriousuniverse.org/2019/01/people-are-violently-assaulting-robots-and-scientists-are-worried/
|
|