|
Post by swamprat on May 8, 2013 20:11:55 GMT -6
Intelligent Robots Will Overtake Humans by 2100, Experts Say
by Tia Ghose, LiveScience Staff Writer | 07 May 2013

Are you prepared to meet your robot overlords? The idea of superintelligent machines may sound like the plot of "The Terminator" or "The Matrix," but many experts say the idea isn't far-fetched. Some even think the singularity — the point at which artificial intelligence can match, and then overtake, human smarts — might happen in just 16 years. But nearly every computer scientist will have a different prediction for when and how the singularity will happen. Some believe in a utopian future, in which humans can transcend their physical limitations with the aid of machines. But others think humans will eventually relinquish most of their abilities and gradually become absorbed into artificial intelligence (AI)-based organisms, much like the energy-making machinery in our own cells.

Singularity near?
In his book "The Singularity Is Near: When Humans Transcend Biology" (Viking, 2005), futurist Ray Kurzweil predicted that computers will be as smart as humans by 2029, and that by 2045, "computers will be billions of times more powerful than unaided human intelligence," Kurzweil wrote in an email to LiveScience. "My estimates have not changed, but the consensus view of AI scientists has been changing to be much closer to my view," Kurzweil wrote. Bill Hibbard, a computer scientist at the University of Wisconsin-Madison, doesn't make quite as bold a prediction, but he's nevertheless confident AI will have human-level intelligence some time in the 21st century. "Even if my most pessimistic guess is true, it means it's going to happen during the lifetime of people who are already born," Hibbard said.

Infinite abilities
Once the singularity occurs, people won't necessarily die (they can simply upgrade with cybernetic parts), and they could do just about anything they wanted to — provided it were physically possible and didn't require too much energy, Hibbard said.
The past two singularities — the Agricultural and Industrial revolutions — led to a doubling in economic productivity every 1,000 and 15 years, respectively, said Robin Hanson, an economist at George Mason University in Washington, D.C., who is writing a book about the future singularity. But once machines become as smart as humans, the economy will double every week or month. This rapid pace of productivity would be possible because the main "actors" in the economy, namely people, could simply be replicated for whatever it costs to copy intelligent-machine software onto another computer.

Earth's destruction?
That productivity spike may not be a good thing. For one, robots could probably survive apocalyptic scenarios that would wipe out humans. "A society or economy made primarily of robots will not fear destroying nature in the same way that we should fear destroying nature," Hanson said.

Human devolution?
Some scientists think we are already in the midst of the singularity. Humans have already relinquished many intelligent tasks, such as the ability to write, navigate, memorize facts or do calculations, said Joan Slonczewski, a microbiologist at Kenyon College and the author of a science-fiction book called "The Highest Frontier" (Tor Books, 2011). Since Gutenberg invented the printing press, humans have continuously redefined intelligence and transferred those tasks to machines. Now, even tasks considered at the core of humanity, such as caring for the elderly or the sick, are being outsourced to empathetic robots, she said. "The question is, could we evolve ourselves out of existence, being gradually replaced by the machines?" Slonczewski said. "I think that's an open question." www.livescience.com/29379-intelligent-robots-will-overtake-humans.html
|
|
|
Post by skywalker on May 8, 2013 20:25:11 GMT -6
That's an interesting thought. Maybe it explains what happened to the greys and why they seem so mechanical sometimes...as well as being...um...grey. Maybe their race evolved into robotic beings that allow them to travel through the dark eternity of space while zig-zagging around in impossible (for us anyway) maneuvers. Or maybe they really are us from the future as Shami has speculated...or at least the robotical equivalent of us. It could happen.
|
|
|
Post by bewildered on May 9, 2013 9:20:53 GMT -6
I have a number of issues with the anthropomorphization (silly spell-checker, I'm correct and you're mistaken!) of conjectured artificial intelligences. First up to bat: "artificial" intelligence is not genuine intelligence at all, but a simulation of intelligence, hence the word "artificial." Most first-year computer science students fiddle around with creating amusing "chat bots" in their first programming labs. It's a practical exercise in the creation and operation of a semantic network, something that permits meaningful communication between parties that share semantic parameters. (Here's an elementary representation of a semantic database in pseudocode: A Mother is a Female Human who is a Human which has a Gender who is a Mammal who are Furry Animals who have Hair which has Color. A Father is a Male Human who is a Human which has a Gender who is a Mammal who are Furry Animals who have Hair which has Color. A Dog is a Mammal who are Furry Animals who have Hair which has Color) As for myself, I wrote a chat program that deployed sarcastic responses if a particular chain of dialogue progressed beyond three queries. I called it "Meanie Freud v2.7." I had the chat bot focus on an inquirer's father or mother for additional effect. While the responses were very convincing, the chat bot was nevertheless nothing more than code. The "sarcasm" was assigned by the human observer, because the programmer - in this case, yours truly - created semantic parameters that would produce what I considered to be sarcastic responses. How does it work? Well, whatever the human user types at the keyboard is captured as a "string" by the computing system, and my chat bot's methods would iteratively analyze each string by "tokenizing" it (separating it into individual words) and then comparing each token of the user's string to my semantic database of words and key phrases (the proximity of words to one another).
Meanie Freud v2.7 would then assemble a response according to the mathematical weight assigned to the results of token analysis. My semantic database supplied the values of "x," "i," and "k" for all mathematical operations. It's really very simple, and to someone who didn't understand the code, it was very convincing. Not strong enough to pass Turing's Test, of course, but cool just the same. In order to create what most people think of as an "advanced artificial intelligence," we would have to effectively create life. Such an entity by definition would be alive and not artificial. The intelligence manifested would be genuine and imply the existence of true self-awareness, not a programmed facsimile of it. This brings other considerations into view: would such an intelligence "think" in terms familiar to the human paradigm? As humans, we evidence the rather persistent habit of projecting our own perception of order upon anything and everything. If we can't touch it or see it, then we assume it would be somehow familiar. According to my own personal train of thought, a hypothetical intelligence that emerged from an initially artificial crucible would not "think" like us at all. The very concept of "domination" would likely be completely foreign to it. It would have to be taught "domination."
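The mechanism described above (a semantic database of "is-a" relations, plus tokenized input scored against keyword weights) can be sketched roughly like this. Everything here - the concept names, the weights, the canned responses - is a hypothetical stand-in, not the actual Meanie Freud code:

```python
import re

# Hypothetical semantic database: "is-a" links like the Mother/Father/Dog
# example above, plus keyword weights used to score user input.
IS_A = {
    "Mother": "Female Human", "Father": "Male Human",
    "Female Human": "Human", "Male Human": "Human",
    "Human": "Mammal", "Dog": "Mammal", "Mammal": "Furry Animal",
}
WEIGHTS = {"father": 3, "mother": 3, "dream": 2, "hate": 2}

def ancestors(concept):
    """Climb the is-a chain, so 'Mother' resolves up through 'Furry Animal'."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def tokenize(text):
    """Capture the user's string and break it into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def respond(text, query_count):
    """Sum the token weights; past three queries, switch to 'sarcasm'."""
    score = sum(WEIGHTS.get(tok, 0) for tok in tokenize(text))
    if query_count > 3 and score >= 3:
        return "Of course it's about your parents. It always is."
    return "Tell me more." if score == 0 else "And how does that make you feel?"

print(ancestors("Mother"))  # ['Female Human', 'Human', 'Mammal', 'Furry Animal']
print(respond("I had a dream about my father", query_count=4))
```

The point stands either way: nothing in there "understands" anything; the apparent sarcasm is just a threshold crossing that a human observer interprets.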
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 9, 2013 9:50:23 GMT -6
Brain ache. ;D
|
|
|
Post by bewildered on May 9, 2013 10:08:07 GMT -6
See? People are generally bored with the details of science.
|
|
|
Post by bewildered on May 9, 2013 10:14:47 GMT -6
To condense all of what I wrote into a convenient nutshell, jo, a "robot intelligence" would only dominate us if we ourselves either taught it to dominate us, or wanted it to dominate us.
As an afterthought, consider the following. Many of the movies that feature a "scary" self-aware robotic or computer intelligence make a point of showing how this entity, once it becomes "self-aware," plugs into human data archives and studies the history of the human race. It reads, it views news footage, it listens to broadcasts, etc. Why is it doing that?
It's learning. As it learns, it internalizes human concepts and ideas. The pure logic feature of its processing arrives at the conclusion that humans are the most dangerous life-form on the planet. This is not meant to "scare" you, but make you think. It's a commentary on our own behavior as a species, not necessarily what some conjectural computerized intelligence would do were it to attain life.
|
|
|
Post by skywalker on May 9, 2013 11:11:24 GMT -6
Since people are so dependent on machines for so many things, wouldn't a machine logically assume its role was to dominate us?
|
|
|
Post by bewildered on May 9, 2013 11:42:29 GMT -6
"Since people are so dependent on machines for so many things, wouldn't a machine logically assume its role was to dominate us?"

Not necessarily. "Domination" is a concept that involves a great deal of semantic processing, and hinges upon certain physiological needs being voiced and then met, desires being fomented and then fulfilled, etc. From the viewpoint of logic divorced from a biological frame of reference, "domination" seems irrational. It expresses a parasitical relationship in which one entity feeds off of another. The "machine" is a tool, so it possesses no concept of self. When we imagine how machines might "feel," we are projecting our own selves into what is essentially an inanimate object. If that machine were to somehow attain life and therefore self-animate, I imagine that it would "think" in ways completely foreign to us.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 9, 2013 12:48:33 GMT -6
I actually got that it could only be as smart as we build it to be. Number 5 would never really be alive.
|
|
|
Post by plutronus on May 10, 2013 1:10:58 GMT -6
"I actually got it that it could only be as smart as we build it to be. Number 5 would never really be alive"

Hi JoKelly,
(sorry about the length)
However, and I'm a bit sorry to say it, "artificial intelligence" is a misnomer at best, in my opinion. The problem is...what separates our brains from the ones that we manufacture? From a purely model perspective, there is little or no difference. I've expended much time thinking about this one subject in my attempt to understand the barriers that exist between interspecies contacts. I could see that alien objects are quasi-conscious, which led me to speculate that I might have been telepathically interacting with an alien psionic machine...a manufactured consciousness. I've psychically seen two discs 'hand-shake' during a link-up between the two craft. It was very similar to listening to two modems making first connection: you know, modem 1 starts beeping, then modem 2 replies with a faster staccato tone, and then the two begin the data interchange, with the almost white-noise-sounding 'shhhhh' data transfer. It was like that, only it was a psychic exchange between two machines. It was fascinating, and it made my head hurt too.
Being a career computer engineer (/digital logic electronics design engineer, with over 30 years of software engineering as well), I've attempted to understand things from computational/logical perspectives, and from reasoning models based upon systems architectures. Regardless of the topology, it's a subject that bears serious introspection. Things are not as clear-cut as one might presume.
One should understand that today's industrial robots (and those few that the military allows the public to see) are mostly fast computers with a few sensors and motion effectors designed for specific tasks. The current military mechaviduals and drones being developed, however, are in fact becoming true artificial-intelligence/consciousness fabrications.
But that's not the end of it.
What makes us humans different from an artificially intelligent machine? Is it simply how the mecha is fabricated? Biology versus transistors? Neurons versus switches? Or is it just the magnitude of switches hosted in a reasoning block of switches? After all, the brain is nothing more than a rather slow, chemically based biological configuration of chemical gates organized as weighted reasoning-switches, hundreds of millions of them, and science reports that we don't use all of them either. In fact, a fairly small portion of the overall density available is allocated, and the excess brain matter can actually be cut out without affecting the normal functioning of the subject so modified. So what is it that causes us to be conscious? Could it be just the density of the switches? Hundreds of millions of them? And the big secret? We are just biological machines, with transport, along with a fuel-input/waste-output system, our mouths and our butt-holes, all in support of that megalithic telephone switch we carry around on top of our bean-stalk.
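The "weighted reasoning-switch" picture above maps directly onto the classic McCulloch-Pitts model of an artificial neuron; here's a minimal sketch (the weights and threshold below are illustrative, not from any real neural hardware):

```python
def neuron(inputs, weights, threshold):
    """A single 'reasoning-switch': fire (1) when the weighted sum of
    the inputs reaches the threshold, stay quiet (0) otherwise."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Wired as an AND gate: the switch fires only when both inputs are active.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0
```

Whether piling up enough of these switches yields consciousness is, of course, exactly the open question of the post.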
The unseen goal (from the public perspective) isn't simply that of a machine becoming intelligent; rather, the scientific as well as the military goal is that of a consciously reasoning "mechavidual" -- electronic hardware that not only reasons, but is self-aware. And this is where things get tricky: what constitutes the dividing line between artificial and natural consciousness?
Yep, it gets mighty tricky in this area.
Back in the 1960s, there was an article published in an engineering projects compendium...a large-format, hard-cover book. This book presented all sorts of engineering and research projects, mainly produced by large organizations and laboratories.
In this book (with a red cloth cover) was a fascinating report/article produced by Bell Laboratories scientists. As you likely know, it was three Bell Labs scientists who invented the transistor; they are the de facto fathers of the modern electronics age. This report described the research and development of an artificial 'clone' transistorized brain neuron: a circuit that closely emulated the functional operation of a single multi-input neuron of the kind that make up brains. Each of the artificial neurons consisted of five interconnected transistors. The engineers placed five of these 'neuron' circuits (25 transistors) on a 6in x 6in PCB with a card-edge connector for IO. The article described the functional operation of the cloned 'neuron' card. They also described a large interconnected artificial brain system, showing control panels with thousands of interconnecting patch-cords in a complex of configurations. They had fabricated thousands of these cards, patch-cord interconnected into an artificial nervous system emulating the distributed intelligence of the 'cut-worm'. That was in 1960.
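As a loose illustration of what patch-cording neuron cards together buys you (this is not the Bell Labs circuit, whose details aren't reproduced here), a few threshold neurons wired in layers can compute things no single neuron can - the textbook example being XOR:

```python
def neuron(inputs, weights, threshold):
    """One multi-input neuron: fire when the weighted sum hits the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    """Two first-layer 'cards' patch-corded into a third neuron."""
    h_or = neuron([a, b], [1, 1], 1)         # fires if either input fires
    h_nand = neuron([a, b], [-1, -1], -1.5)  # fires unless both inputs fire
    return neuron([h_or, h_nand], [1, 1], 2)  # fires only if both above fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Scale that wiring idea up by a few orders of magnitude and you have, at least in spirit, the kind of distributed nervous system the article described.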
Things have moved on. Over the years, individual transistors came to be fabricated onto single semiconductor die..."chips". Eventually, so many were being integrated that the name for these structures became "very large scale integration", or VLSI. Today, the average cell-phone is fabricated with a dozen or more integrated transistorized circuits, some chips with as many as 25 million transistors each, often contained inside a 5mm x 5mm x 0.1mm package (about 1/8" square). The average laptop contains hundreds of millions of transistors in all of the ICs combined.
But these circuits are not organized to think. Today, artificial-consciousness development is super-secret military research. To give a bit of a glimpse, one need only remember what happened a few years ago with Texas Instruments (the company that was awarded the patent for the invention of the first microprocessor, but had its beginnings in the transistor manufacturing market after failing miserably as oil wildcatters). Texas Instruments manufactures thousands of different kinds of transistorized integrated circuits, or ICs. They also make dozens of different types of microprocessors, analog amplifiers, switch-mode power supplies and regulators, LED current-regulation power-management ICs, digital logic gates, and digital signal processors, to name a few of their integrated-transistors-on-a-chip products.
www.ti.com
Here is a typical IC that TI manufacturers, just to give folks a quick glimpse of one of their simple technical products:
www.ti.com/product/tlc59282
So what does all this have to do with AI?
In 1995, Texas Instruments announced a new product line with much fanfare in the electronics engineering industry newspapers (yep, we have our own newspapers and hundreds of magazines). This was a product series: the basic product was partitioned into numerous scaled products, each comprising its own unique feature set and price, along with sophisticated software modeling tools. But this product series was fully unique in the world of electronics manufacturing, for it wasn't computer based, nor was it digital logic based, and it wasn't analog either. In fact, the products were very high density, very complex (and progressively expensive) artificial-neuron reasoning integrated-circuit blocks: from small-scale chips having only a few hundred pins, connecting to 10,000 mappable self-organizing neurons in a .25" square chip package, up to the very complex, highly integrated, very dense 1,000,000-neuron integrated circuits having 5,000 pins in a 2" x 2" square gold (thermally stable) IC package.
TI offered development boards populated with hundreds of the large-density version chips: a PCB roughly the size of a medium-sized PC motherboard, hosting hundreds of millions of self-organizing neurons.
Then a very strange thing happened: a mere three months later, the entire product line disappeared from engineering public view. Within a period of one week, not a single sales brochure was available. And should an interested engineer have called TI sales regarding the artificial-neuron reasoning product line, no one in TI sales had heard of the product. The DoD classified the entire TI artificial-neuron product line, and everything simply disappeared.
I know a guy who has a few of those parts (engineering samples given to corporate engineers) and he has the printed engineering data-sheets (and sales brochures) for those parts.
That was in 1995.
It's a wild guess re: the current state of the art of artificial-consciousness, neuro-cortex-born mecha. This next war is going to be really nasty; nobody is going to be safe, anywhere.
In 1960, the Bell Labs artificially conscious engineered cut-worm, fashioned from only 3,000 neurons, was self-aware and exhibited responses consistent with fear when properly stimulated...it knew fear.
Imagine a drone containing hundreds of cards, each hosting hundreds of millions of neurons, characterized as a military agent, a mechavidual. What would it know?
Does God exist for them also?
Yep
plutronus
|
|
|
Post by auntym on Dec 9, 2014 11:58:57 GMT -6
www.bbc.com/news/technology-30326384

Does AI really threaten the future of the human race? / VIDEO
by Rory Cellan-Jones, Technology correspondent

The end of the human race - that is what is in sight if we develop full artificial intelligence, according to Stephen Hawking in an interview with the BBC. But how imminent is the danger, and if it is remote, do we still need to worry about the implications of ever smarter machines? My question to Professor Hawking about artificial intelligence comes in the context of the work done by machine learning experts at the British firm Swiftkey, who have helped upgrade his communications system. So I talk to Swiftkey's co-founder and chief executive, Ben Medlock, a computer scientist with a Cambridge doctorate which focuses on how software can understand nuance in language. Ben Medlock told me that Professor Hawking's intervention should be welcomed by anyone working in artificial intelligence: "It's our responsibility to think about all of the consequences, good and bad," he told me. "We've had the same debate about atomic power and nanotechnology. With any powerful technology there's always the dialogue about how do you use it to deliver the most benefit and how it can be used to deliver the most harm." He is, however, sceptical about just how far along the path to full artificial intelligence we are. "If you look at the history of AI, it has been characterised by over-optimism. The founding fathers, including Alan Turing, were overly optimistic about what we'd be able to achieve." He points to some successes in single complex tasks, such as using machines to translate foreign languages.
But he believes that replicating the processes of the human brain, which is formed by the environment in which it exists, is a far distant prospect: "We dramatically underestimate the complexity of the natural world and the human mind," he explains. "Take any speculation that full AI is imminent with a big pinch of salt." While Medlock is not alone in thinking it's far too early to worry about artificial intelligence putting an end to us all, he and others still see ethical issues around the technology in its current state. Google, which bought the British AI firm DeepMind earlier this year, has gone as far as setting up an ethics committee to examine such issues. DeepMind's founder Demis Hassabis told Newsnight earlier this year that he had only agreed to sell his firm to Google on the basis that his technology would never be used for military purposes. That, of course, will depend in the long term on Google's ethics committee, and there is no guarantee that the company's owners won't change their approach 50 years from now. WATCH VIDEO & CONTINUE READING: www.bbc.com/news/technology-30326384
|
|
|
Post by swamprat on Apr 13, 2015 19:20:17 GMT -6
Playing with Fire: AI Makers Must Be Careful
by Tanya Lewis, Staff Writer | April 13, 2015

From smartphone apps like Siri to features like facial recognition of photos, artificial intelligence (AI) is becoming a part of everyday life. But humanity should take more care in developing AI than with other technologies, experts say.
Science and tech heavyweights Elon Musk, Bill Gates and Stephen Hawking have warned that intelligent machines could be one of humanity's biggest existential threats. But throughout history, human inventions, such as fire, have also posed dangers. Why should people treat AI any differently?
"With fire, it was OK that we screwed up a bunch of times," Max Tegmark, a physicist at the Massachusetts Institute of Technology, said April 10 on the radio show Science Friday. But in developing artificial intelligence, as with nuclear weapons, "we really want to get it right the first time, because it might be the only chance we have," he said. On the one hand, AI has the potential to achieve enormous good in society, experts say. "This technology could save thousands of lives," whether by preventing car accidents or avoiding errors in medicine, Eric Horvitz, managing director of Microsoft Research lab in Seattle, said on the show. The downside is the possibility of creating a computer program capable of continually improving itself that "we might lose control of," he added.
For a long time, society has believed that things that are smarter must be better, Stuart Russell, a computer scientist at the University of California, Berkeley, said on the show. But just like the Greek myth of King Midas, who transformed everything he touched into gold, ever-smarter machines may not turn out to be what society wished for. In fact, the goal of making machines smarter may not be aligned with the goals of the human race, Russell said.
For example, nuclear power gave us access to the almost unlimited energy stored in an atom, but "unfortunately, the first thing we did was create an atom bomb," Russell said. Today, "99 percent of fusion research is containment," he said, and "AI is going to go the same way."
Tegmark called the development of AI "a race between the growing power of technology and humanity's growing wisdom" in handling that technology. Rather than try to slow down the former, humanity should invest more in the latter, he said.
Anything you can do, they can do better. Well, lots of things, anyway. Modern humans have not gone obsolete just yet, but robots have already found their place as space explorers that can endure harsh environments both on and off Earth. They have also brought their tireless efficiency to everything from assembly-line work to humdrum gene sequencing in labs, and have appeared in growing numbers on real-life battlefields — although the latter can lead to a different problem if robots stage a rebellion, or even just suffer a weapons malfunction. For now, robots complement rather than replace elements of the human workforce and armed forces due to limits on their intelligence. But they're evolving quickly, and a few have even begun tinkering with science themselves.
A scenario where machines rise up against their makers presents perhaps the least appealing convergence of science fiction and real life. That doesn't mean preliminary signs of an incipient insurrection don't exist, though. Thousands of drones and ground robots have been deployed by many nations, and particularly the United States in Iraq and Afghanistan. An automatic antiaircraft gun killed human soldiers on its own when it malfunctioned during a South African training exercise. Military researchers refer to "Terminator" scenarios, and seriously discuss how armed robots are changing the rules and ways of modern war. If that's not enough to make you a bit leery, consider that Great Britain has established a network of satellites for the purpose of coordinating all those drones and other military assets. It shares the same name as a certain villainous artificial intelligence that dominates the "Terminator" movies — Skynet.
At a conference in Puerto Rico in January organized by the nonprofit Future of Life Institute (which Tegmark co-founded), AI leaders from academia and industry (including Elon Musk) agreed that it's time to redefine the goal of making machines as smart and as fast as possible. The goal should now be to make machines beneficial for society. Musk donated $10 million to the institute in order to further that goal.
After the January conference, hundreds of scientists, including Musk, signed an open letter describing the potential benefits of AI, yet warned of its pitfalls.
www.livescience.com/50467-artificial-intelligence-stakes.html
|
|
|
Post by auntym on May 6, 2015 15:02:10 GMT -6
www.prnewswire.com/news-releases/consciousness-does-not-compute-and-never-will-says-korean-scientist-300077306.html

Consciousness Does Not Compute (and Never Will), Says Korean Scientist
Daegene Song's research into strong AI could be key to answering fundamental brain science questions
May 05, 2015, by Daegene Song / pressreleaseheadlines.com/contact?pid=281064
Contact: Daegene Song +82(10)6309-3267

CHUNGCHEONGBUK-DO, South Korea, May 5, 2015 /PRNewswire/ -- Within some circles in the scientific community, debate rages about whether computers will achieve technological singularity (TS) or strong artificial intelligence (AI) -- in other words, self-recognition or human consciousness within a computer -- within the next few decades. Now, however, a Korean quantum physicist has shown that computers will never be able to duplicate human consciousness or be programmed to do so, because they lack the fundamental . . . well, humanity. And his research may finally answer questions that have long stymied brain science researchers. In his paper, "Non-computability of Consciousness," Daegene Song proves human consciousness cannot be computed. Song arrived at his conclusion through quantum computer research in which he showed there is a unique mechanism in human consciousness that no computing device can simulate. "Among conscious activities, the unique characteristic of self-observation cannot exist in any type of machine," Song explained. "Human thought has a mechanism that computers cannot compute or be programmed to do." And therein lies the kernel of truth that could resolve two problems researchers have until now been unable to resolve: First, that no approach to brain research had ever been able to precisely represent consciousness; and second, that no one actually understood how a network of neurons, also known as the human brain, could somehow give rise to consciousness. "Non-computability of Consciousness" documents Song's quantum computer research into TS.
Song was able to show that in certain situations, a conscious state can be precisely and fully represented in mathematical terms, in much the same manner as an atom or electron can be fully described mathematically. That's important, because the neurobiological and computational approaches to brain research have only ever been able to provide approximations at best. In representing consciousness mathematically, Song shows that consciousness is not compatible with a machine. CONTINUE READING: www.prnewswire.com/news-releases/consciousness-does-not-compute-and-never-will-says-korean-scientist-300077306.html
|
|
|
Post by swamprat on May 6, 2015 16:28:37 GMT -6
Well.....I think the creators of "Avengers: Age of Ultron" would disagree with Dr. Song!
Have any of you seen it? I took my grandson Saturday. Pretty awesome!
|
|
|
Post by plutronus on May 6, 2015 16:44:10 GMT -6
...
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on May 7, 2015 12:17:58 GMT -6
""Non-computability of Consciousness" documents Song's quantum computer research into TS. Song was able to show that in certain situations, a conscious state can be precisely and fully represented in mathematical terms, in much the same manner as an atom or electron can be fully described mathematically. That's important, because the neurobiological and computational approaches to brain research have only ever been able to provide approximations at best. In representing consciousness mathematically, Song shows that consciousness is not compatible with a machine"
*********************************************************************
This part sounds like "double-speak" to me.
He says that in certain situations (only?) a conscious state can be precisely and fully represented in mathematical terms. So right there, he is stating that in certain situations our consciousness CAN be compatible with a machine . . . huh?
|
|
|
Post by auntym on May 7, 2015 12:42:49 GMT -6
|
|
|
Post by swamprat on Jun 7, 2015 9:15:19 GMT -6
NASA's RoboSimian Competes in DARPA Robotics Challenge
by Elizabeth Howell, Space.com | June 05, 2015
RoboSimian – an apelike NASA robot that can map its environment in 3D – is facing off against a field of other robots this week to see which automaton has the right stuff for the DARPA Robotics Challenge Finals.
The gangly-armed RoboSimian, nicknamed "Clyde," was built by engineers at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, to cross tough terrain and use hand-like manipulators.
See video: www.space.com/29581-nasa-robosimian-takes-darpa-challenge.html
|
|
|
Post by swamprat on Jun 7, 2015 9:27:37 GMT -6
However......

Korean Robot Takes Home $2M Prize in DARPA Challenge
by Tanya Lewis, Staff Writer | June 06, 2015
POMONA, Calif. – A robotics team from South Korea took home the $2 million first-place prize in a competition this weekend to design robots that could aid humans in a natural or man-made disaster.
During the DARPA Robotics Challenge Finals, which took place here Friday and Saturday (June 5 and 6), the winning team's DRC-HUBO robot finished all eight tasks in less than 45 minutes. The winning bot had a humanoid design that could transform itself into a wheeled kneeling position for faster, more stable movement.
Top 10 team rankings:
1. TEAM KAIST (8 points, 44:28 minutes)
2. TEAM IHMC ROBOTICS (8 points, 50:26 minutes)
3. TARTAN RESCUE (8 points, 55:15 minutes)
4. TEAM NIMBRO RESCUE (7 points, 34:00 minutes)
5. TEAM ROBOSIMIAN (7 points, 47:59 minutes)
6. TEAM MIT (7 points, 50:25 minutes)
7. TEAM WPI-CMU (7 points, 56:06 minutes)
8. TEAM DRC-HUBO @ UNLV (6 points, 57:41 minutes)
9. TEAM TRAC LABS (5 points, 49:00 minutes)
10. TEAM AIST-NEDO (5 points, 52:30 minutes)
www.livescience.com/51118-korean-robot-wins-darpa-challenge.html
|
|
|
Post by auntym on Jun 7, 2015 17:58:41 GMT -6
Plutronus mentioning teor on Lorien's show was why I came back here. Who were you before?
|
|
|
Post by lois on Jun 7, 2015 22:46:49 GMT -6
Plutronus mentioning teor on Lorien's show was why I came back here. Who were you before?
I have asked this also, auntym, before this post was ever made. I even asked skywalker in a PM. I've got these vibes going for some reason.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 8, 2015 17:36:41 GMT -6
well, at least he is openly an illegal space alien lover. Besides that, which 'list' should he go on ?
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 9, 2015 13:34:25 GMT -6
Leo, do you have time to post the "show" (the url) where "tron" and Lorien Fenton talk? Her website is under construction.
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 10, 2015 0:30:57 GMT -6
thanks!
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 12, 2015 19:48:37 GMT -6
However . . . Korean Robot Takes Home $2M Prize in DARPA Challenge by Tanya Lewis, Staff Writer June 06, 2015
POMONA, Calif. – A robotics team from South Korea took home the $2 million first-place prize in a competition this weekend to design robots that could aid humans in a natural or man-made disaster.
During the DARPA Robotics Challenge Finals, which took place here Friday and Saturday (June 5 and 6), the winning team's DRC-HUBO robot finished all eight tasks in less than 45 minutes. The winning bot had a humanoid design that could transform itself into a wheeled kneeling position for faster, more stable movement.
Top 10 team rankings:
1. TEAM KAIST (8 points, 44:28 minutes)
2. TEAM IHMC ROBOTICS (8 points, 50:26 minutes)
3. TARTAN RESCUE (8 points, 55:15 minutes)
4. TEAM NIMBRO RESCUE (7 points, 34:00 minutes)
5. TEAM ROBOSIMIAN (7 points, 47:59 minutes)
6. TEAM MIT (7 points, 50:25 minutes)
7. TEAM WPI-CMU (7 points, 56:06 minutes)
8. TEAM DRC-HUBO @ UNLV (6 points, 57:41 minutes)
9. TEAM TRAC LABS (5 points, 49:00 minutes)
10. TEAM AIST-NEDO (5 points, 52:30 minutes)
www.livescience.com/51118-korean-robot-wins-darpa-challenge.html . . . and he/she's ready to play football! Look at that futuristic helmet! (no head to hurt, though)
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jun 14, 2015 11:27:12 GMT -6
He does have eyelashes (and eyebrows). The bottom eyelid eyelashes are very minimal, and the upper eyelid lashes are just peeking out of his eye skin folds, but they are there. I don't recall if he eventually had to have his eyebrows "painted on". There are definite health conditions that cause hair loss. And, unbelievably in this day and age, some hair loss conditions are still a mystery. I used to have "killer eyebrows" (thick, dark, and usable with facial expressions - my kids have them). Even if I "plucked" them, they grew back. These days, the outer portion towards the temples is more and more sparse, and that is due to my thyroid disease. An uncle of mine lost all body hair, including eyebrows, after a high fever. My daughter had one "spot of alopecia" on the back of her head, about two months after a major surgery. Two doctors could not explain it, and there is no trace of it now. _________________________________________________________________________ I know Mr. Brenner played in other roles. Can anyone think of one where he wasn't somewhat "stiff"?
|
|
|
Post by auntym on Aug 30, 2015 12:45:10 GMT -6
www.educatinghumanity.com/2015/08/ai-robot-learns-words-real-time.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+EducatingHumanity+%28Educating+Humanity%29 Sunday, August 30, 2015
AI Robot Tells Creators That It Will Keep Them In a People Zoo
Android That Learns Words In Real Time Has Something Startling To Say
Androids are being developed that have an uncanny resemblance to people. A pinnacle example is an android crafted by roboticist David Hanson that resembles the famous and deceased science fiction writer Philip K. Dick. What makes android Dick so remarkable isn't so much his appearance as it is his ability to hold an intelligent conversation. The creators of android Dick uploaded the deceased author's work onto the android's software, as well as conversations with other writers. If the android was asked a question that had been posed to the real Dick, the robot would answer the question as Dick would. The robot was also able to answer a series of complex questions. If the robot was asked a question it was unfamiliar with, its software would attempt to answer the question using what is called "latent semantic analysis."(1)
[Image captions: Android Dick in conversation; Android Philip K. Dick]
Android Dick's speaking abilities were put to the test in an interview with a reporter from PBS NOVA. Android Dick's brain is composed of a tapestry of wires connected to a laptop. As the conversation proceeded, Philip's facial recognition software kept track of the reporter's face. In addition, speech recognition software transcribed the reporter's words and sent them to a database in order to assemble a response. The questions posed to Dick were by no means trivial. When the reporter asked if the android could think, it responded, "A lot of humans ask me if I can make choices or if everything I do is programmed. The best way I can respond to that is to say that everything, humans, animals and robots, do is programmed to a degree." Some of the android's responses were pre-programmed, whereas others were assembled from the internet.(2) Dick continued, "As technology improves, it is anticipated that I will be able to integrate new words that I hear online and in real time. I may not get everything right, say the wrong thing, and sometimes may not know what to say, but every day I make progress. Pretty remarkable, huh?"(2)
Android Dick and the Turing test
The entire conversation has the ominous undertones of the Turing test. The late mathematician Alan Turing sketched a thought experiment known as the "Turing test" that could theoretically be used to determine whether a machine could think. Turing claimed that any machine capable of convincing someone it is human by responding to a series of questions would, by all measures, be capable of thinking. As a side note, it's important to stress that Turing was not claiming that the nature of thinking is universal. The way a human thinks may be different from the way a robot "thinks," in the same way that how a bird flies is different from how an airplane "flies." Rather, Turing's general point was that any entity capable of passing a Turing test would be capable of thinking in one form or another.(3)
According to the novelist Dick, the Turing test placed too much emphasis on intelligence. What actually makes us human is empathy. Without empathy, we are mere autopilot objects projecting into the void.(4) Android Dick seemed to exhibit a primitive form of both intelligence and emotion when the robot was asked, "Do you believe robots will take over the world?" Android Dick responded: "Jeez, dude. You all have the big questions cooking today. But you're my friend, and I'll remember my friends, and I'll be good to you. So don't worry, even if I evolve into Terminator, I'll still be nice to you. I'll keep you warm and safe in my people zoo, where I can watch you for ol' times sake."(2)
Aaaaw, he'll keep humans cozy in his people zoo. Isn't that nice of android Dick? You can watch the full video of the android's conversation below:
www.educatinghumanity.com/2015/08/ai-robot-learns-words-real-time.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+EducatingHumanity+%28Educating+Humanity%29
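For the curious, the "latent semantic analysis" lookup the article mentions can be sketched in miniature: store candidate sentences, factor a word-count matrix with an SVD, and match an incoming question in the reduced "semantic" space. Everything here (the sample sentences, the rank-2 truncation, the function names) is illustrative and assumed; it is not taken from the Hanson robot's actual software.

```python
# Toy latent semantic analysis (LSA): match a query against a small
# database of stored sentences in a rank-reduced term space.
import numpy as np

database = [
    "can you think like a human",
    "will robots take over the world",
    "do you have feelings and emotions",
]

def vectorize(texts):
    """Build a term-document count matrix over a shared vocabulary."""
    vocab = sorted({w for t in texts for w in t.split()})
    index = {w: i for i, w in enumerate(vocab)}
    mat = np.zeros((len(vocab), len(texts)))
    for j, t in enumerate(texts):
        for w in t.split():
            mat[index[w], j] += 1
    return mat, index

def best_match(query, texts, rank=2):
    """Project documents and query into a rank-k latent space via SVD,
    then return the index of the most similar stored sentence."""
    mat, index = vectorize(texts)
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    uk = u[:, :rank]                 # top-k term -> latent-topic map
    docs = (uk.T @ mat).T            # documents in latent space
    q = np.zeros(len(index))
    for w in query.split():
        if w in index:               # unknown words are simply ignored
            q[index[w]] += 1
    qk = uk.T @ q                    # query in latent space
    sims = docs @ qk / (np.linalg.norm(docs, axis=1) *
                        (np.linalg.norm(qk) or 1.0))
    return int(np.argmax(sims))

print(database[best_match("will robots take over the world", database)])
```

A real system would weight terms (TF-IDF), use a far larger corpus and rank, and generate rather than merely retrieve a response, but the retrieval-by-SVD core is the same idea.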
|
|
|
Post by auntym on Nov 8, 2015 13:27:26 GMT -6
www.space.com/30937-when-robots-colonize-cosmos-will-they-be-conscious.html?cmpid=514648_20151107_54648366&adbid=662947616427130880&adbpl=tw&adbpr=15431856
When Robots Colonize the Cosmos, Will They Be Conscious? (Op-Ed) by Robert Lawrence Kuhn, October 27, 2015
Robert Lawrence Kuhn is the creator, writer and host of "Closer to Truth," a public television series and online resource that features the world's leading thinkers exploring humanity's deepest questions. Kuhn is co-editor, with John Leslie, of "The Mystery of Existence: Why Is There Anything at All?" (Wiley-Blackwell, 2013). This article is based on "Closer to Truth" interviews produced and directed by Peter Getzels and streamed at www.closertotruth.com. Kuhn contributed this article to Space.com's Expert Voices: Op-Ed & Insights.
The first colonizers of the cosmos will be robots, not humans. At some point, they will become self-replicating robots that construct multiple versions of themselves from the raw materials of alien worlds. They will increase their numbers exponentially and inexorably inhabit the totality of our galaxy. It may take a few million years, a fleeting moment in cosmic time. [Will We Ever Colonize Mars? (Op-Ed)]
But will these cosmos-colonizing robots — self-replicating robots called "von Neumann machines" after the mathematician John von Neumann — ever be conscious? In other words, will they ever have inner awareness? Will they ever experience the exploration of worlds without end?
Does it matter? I say that it does. I see the question of consciousness as a foundation of the philosophy of space travel. Because if robots become conscious, then the deep reason for humans to go to the stars becomes diminished. Why be burdened with the heavy freight needed to sustain biological life? On the other hand, if robots can never be conscious, then we humans might have some kind of moral imperative to venture forth. A galaxy colonized by only mentally blank zombies does not seem an ultimate good.
So can robots ever be conscious? I start by assuming the “Singularity,” when artificial intelligence (“AI”) will redesign itself recursively and progressively, such that AI will become vastly more powerful than human intelligence ("superstrong AI"). Techno-futurists assume, almost as an article of faith, that superstrong AI (post-singularity) will inevitably be conscious. I'm not so sure. Actually, I'm a skeptic — the deep cause of consciousness is the elephant in the room, and most techno-futurists do not see it. CONTINUE READING: www.space.com/30937-when-robots-colonize-cosmos-will-they-be-conscious.html?cmpid=514648_20151107_54648366&adbid=662947616427130880&adbpl=tw&adbpr=15431856
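Kuhn's "few million years" figure is easy to sanity-check with a doubling calculation: if each probe builds a copy of itself every replication cycle, the population doubles each cycle. The star count and per-cycle time below are illustrative assumptions, not numbers from the op-ed.

```python
# Back-of-the-envelope check of exponential self-replication:
# how many doublings cover the galaxy, and how long would that take?
import math

stars_in_galaxy = 1e11      # order-of-magnitude Milky Way estimate
years_per_cycle = 50_000    # ASSUMED travel-plus-replication time per cycle

# Smallest n with 2**n >= stars_in_galaxy
doublings = math.ceil(math.log2(stars_in_galaxy))
total_years = doublings * years_per_cycle

print(doublings)    # 37
print(total_years)  # 1850000
```

Even with a generous 50,000-year cycle, only about 37 doublings are needed, landing under two million years, which is consistent with the article's "few million years, a fleeting moment in cosmic time."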
|
|
|
Post by patsbox7 on Nov 9, 2015 20:08:38 GMT -6
Now this is the content I signed up to read! Good stuff guys.
|
|
|
Post by skywalker on Nov 11, 2015 19:35:49 GMT -6
Auntym is always looking for cool stuff.
|
|