07-11-2019, 05:34 PM
Follow the money!
A.I.
12-11-2019, 10:39 AM
There are two camps in AI, and the debate between them has been raging ever since Turing and Gödel.
The first is "Hard AI", which holds that it is possible, in principle, to produce a machine that is truly intelligent. The diametrically opposed camp says that it is impossible, and there are several gradations between these two limiting cases. The books to read for these two views are Gödel, Escher, Bach by Douglas Hofstadter, and The Emperor's New Mind (and Shadows of the Mind) by Roger Penrose. Craig
12-11-2019, 10:50 AM
(12-11-2019, 10:39 AM)Craig Wrote: There are two camps with AI that have been raging ever since Turing and Gödel.
GEB is an excellent, though not easy, read and was way ahead of its time when originally published (1979) - fascinating. Penrose is just an all-round interesting Renaissance man - he has so many strings to his bow, and he writes so well - a much easier read than GEB. Strongly recommend both books.
sıʌǝɹq ɐʇıʌ `ɐƃuol sɹɐ
ʞɔıu
12-11-2019, 12:02 PM
Hofstadter worked with Daniel Dennett, whose book "Consciousness Explained" is another excellent read.
Apparently, although we think of decisions and actions as immediate, the brain is gearing up between a third and half a second before we become aware of deciding. There is an excellent TED talk by Dennett: "Philosopher Dan Dennett makes a compelling argument that not only don't we understand our own consciousness, but that half the time our brains are actively fooling us." https://www.ted.com/talks/dan_dennett_on...en#t-30176
The main point being - if we have no idea why a bunch of cells, which individually have no consciousness at all, collectively become conscious, how can we possibly mimic that artificially?
The key thing is that although a self-educating program like DeepMind's, running simultaneously on massive data-centre processor farms, can teach itself to do stuff (like Go and chess) given the rules, it is nonetheless an algorithmic system. It is a program that has been coded by humans to do this trick, and admittedly it is very impressive. But the conscious human mind does not remotely behave algorithmically, and it is that non-algorithmic behaviour that confers intelligence. So Artificial Intelligence, being an algorithm designed by conscious and intelligent humans, is not possible in principle. It is a self-educating algorithm with access to massive processing power and huge tranches of memory. But hey - the guy who was on The Life Scientific sold his company to Microsoft for $800m - so he can call his program what he likes.
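For the curious, the "self-teaching" at the heart of such systems really is an ordinary, human-written update rule applied over and over. A minimal sketch in Python - the toy game, parameters and names are invented for illustration and are nothing like DeepMind's actual code, but the algorithmic shape is the same:

```python
import random

# Tabular Q-learning on a toy "walk right" game: states 0..4, actions
# -1/+1, reward 1 only on reaching state 4. The entire "self-education"
# is one fixed, deterministic update rule written by a human.

ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.1, 500
q = {(s, act): 0.0 for s in range(5) for act in (-1, 1)}

random.seed(0)
for _ in range(EPISODES):
    s = 0
    while s != 4:
        if random.random() < EPSILON:
            act = random.choice((-1, 1))                 # explore
        else:
            act = max((-1, 1), key=lambda a: q[(s, a)])  # exploit
        s2 = min(4, max(0, s + act))
        r = 1.0 if s2 == 4 else 0.0
        # the whole of the "learning" is this one line
        best_next = max(q[(s2, -1)], q[(s2, 1)])
        q[(s, act)] += ALPHA * (r + GAMMA * best_next - q[(s, act)])
        s = s2

# the learned greedy policy: step right from every state
policy = {s: max((-1, 1), key=lambda a: q[(s, a)]) for s in range(4)}
print(policy)
```

Nothing here "decided" to learn anything; the program mechanically converges to whatever the human-chosen reward and update rule imply.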
13-11-2019, 01:56 PM
Again a little bit out of my depth (to say the least LOL), but I think the thing is that we are talking about a very large number of cells - probably an extremely large number of interacting algorithms, so effectively a chaotic system. We are then in the position of not seeing the wood for the trees. There is maybe no reason why the human brain is not behaving algorithmically; it is just that, at least at the moment, we don't have the knowledge to break such a complex system down into individual algorithms or algorithmic sub-steps. We are only really at the stage of separating the brain into compartments by function, so it is still at the black-box stage, with inputs, outputs and indeterminate processing. As such, AI is currently really just emulating, not implementing, the processing.
Tracy
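One toy illustration of that "wood for the trees" point - trivially simple deterministic rules whose collective behaviour looks anything but algorithmic from the outside - is an elementary cellular automaton. The choice of Rule 110 and the parameters below are purely illustrative:

```python
# Rule 110 elementary cellular automaton: each cell obeys one tiny
# deterministic rule depending only on itself and its two neighbours,
# yet the aggregate pattern is famously complex (Rule 110 is even
# Turing-complete). Simple interacting "algorithms", chaotic whole.

RULE = 110
WIDTH, STEPS = 64, 20

row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # neighbourhood value = 4*left + 2*self + right; look up bit in RULE
    row = [(RULE >> (row[(i - 1) % WIDTH] * 4 +
                     row[i] * 2 +
                     row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```

Watching the printed triangle grow, you would never guess the entire "system" is one line of arithmetic per cell.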
13-11-2019, 02:40 PM
There is certainly a missing theory of how the human mind works. Not just the mechanics of the brain - which neurons are connected to which - or even the centres of the brain responsible for various functions, but what it is about the whole that confers consciousness. Perhaps the most studied brain in history is Einstein's, and structurally it is identical to yours and mine.
So personality, and the propensity for genius, seem to have nothing to do with the physical structure of the brain - or at least nothing we can detect even with the battery of diagnostic methods we now have. The difference between the human mind and any possible AI is that the mind is capable of entirely original thinking. A good example is Gödel's theorem. Very simply, it says that any formal system rich enough to contain arithmetic is either inconsistent or incomplete: if it is consistent, there are true statements it cannot prove. So you cannot write a complete, perfect book containing all of maths derived from a fundamental set of axioms (the way Euclid did for geometry) - Gödel says it is impossible.

That is just one example from the world of mathematics. But it is also starkly evident in the creative arts - painting and sculpture, writing books and plays, music - basically everything to do with creativity. These things seem to be unique to the human mind.

Of course that raises the question - where will we be in 1000 years? Always assuming we do not unleash some horror on ourselves, or boil ourselves to death through climate catastrophe, in the meantime. I think that we will end up with augmented consciousness, where the human mind exists in fusion with artificial systems.

Back to reality, and flipping the argument on its head - if a machine, a true AI, *were* to attain consciousness, would you be committing murder if you switched it off? Craig
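For reference, Gödel's first incompleteness theorem is usually stated along these lines (a modern paraphrase, not Gödel's original formulation):

```latex
\textbf{First incompleteness theorem.}\quad
\text{If } T \text{ is a consistent, effectively axiomatised theory}
\text{ containing basic arithmetic, then there is a sentence } G_T
\text{ with } T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T .
```

That is, such a theory can never prove everything that is true in its own language, which is exactly the sense in which no "perfect book of all maths" can exist.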
13-11-2019, 05:21 PM
(This post was last modified: 13-11-2019, 05:24 PM by Mike Watterson.)
"would you be committing murder if you switched it off?"
1) It's like asking whether it would be murder to crush an alarm clock. It's simply not possible for such a situation to arise. All popular conceptions about A.I. are based on almost total ignorance of how a computer works. In ST-TNG, "Data" does stuff impossible for a computer, and sometimes physically impossible (like Superman or the Six Million Dollar Man doing stuff that's pure magic, like lifting a car without tipping over), yet Data often can't do stuff that could easily be at least simulated on a computer. A computer is just an electronic implementation of a stored-program execution unit, which could in theory be built mechanically.

2) We kill rooks. They are certainly smart, sentient, conscious, tool-using, problem-solving creatures that have a vocabulary. Actually, many animals and birds have a vocabulary, but none has yet been shown to have a language.

Would you be committing murder, in this fantasy universe where a machine can be conscious, by turning it off? No, because that is equivalent to anaesthesia or being knocked out: the machine would recover when turned on. Some server systems have onboard batteries or a UPS and can save RAM to non-volatile storage (hibernation). Most laptops and many tablets or phones can "hibernate" indefinitely if the battery is low; the power switch is often simply a signal to the operating system. So you would need to destroy the storage to "kill" it. It's a concept of Transhumanism (really a religion with no basis in fact - a faith) that a snapshot of a "consciousness" can be stored. In the hypothetical case of the conscious machine, wouldn't it make frequent off-site backups, and thus be rather hard to "kill"? Would it be murder to destroy the storage and all the backups? A court would have to decide.
Killing a person can be an accident that is not even your fault (engine seizes, tyre blow-out), negligence or some other manslaughter, self-defence, unpremeditated murder, part of state-sponsored genocide or a war crime, planned premeditated murder, etc.

Part of my teenage attraction to programming was reading SF and thinking that someday I'd program, or help program, an A.I. I believe now that A.I. is fantasy - wishful thinking about creating gods in our own image. It's a motif of a certain strand of SF, a modern version being Iain Banks (writing as Iain M. Banks).

It's certainly possible to fool a casual human with a chat bot - it has been since the 1960s, actually. Modern ones are pretty similar; I've tested both. The newer ones have much bigger rule sets and databases and still have no understanding of language. You can even simulate "emotion", though that's not very useful. I think the Turing Test wasn't really ever meant to be a real test of AI, but curiosity about how easy it might be to fool someone. A chat bot can pass the Turing Test if the human is not an expert and doesn't suspect. Social media is full of bots doing propaganda and selling - easy for any expert to spot, and most hardly better than 1960s Eliza or 1990s Alice, because people don't particularly interact.

I wrote a book examining the idea. One team builds the best-ever chat bot, with the aim of it being an interface to a search engine. This is indeed close to what Watson winning "Jeopardy!" was; later IBM Watson implementations actually used little more than the branding, and were a failure for medical applications in the USA. The other "team" are Fay, and use the forbidden Fetch magic: they erase the memories of a deer and shape-change it to seem human, using magic to give it speech. It's an idea from old Celtic myths. One twist is that, via DNA, it's given the appearance of a real young woman with a curious background, so the construct is a bit like a clone*.
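To make the point about rule sets concrete, here is roughly what an Eliza-style bot amounts to. The rules below are invented for illustration, in the spirit of Weizenbaum's 1966 program - note that there is pattern matching, but no understanding, anywhere:

```python
import re

# A minimal Eliza-style chat bot: a few regex rules with canned
# response templates, tried in order. Bigger modern rule sets are
# longer lists of the same thing.

RULES = [
    (re.compile(r"\bi am (.+)", re.I),   "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(text: str) -> str:
    """Echo fragments of the input back via the first matching rule."""
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(respond("I am worried about my exams"))
```

The bot will cheerfully "converse" about words it has merely captured and pasted back, which is why a non-expert can be fooled while an expert spots it in a couple of exchanges.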
The later Norse legends have Ellenfolk, hollow people, who serve Iðunn when she's kidnapped. A Golem is a similar idea, as is the "monster" of Mary Shelley's Dr. Frankenstein. The Fay team find their "Alice Watson" isn't very useful, because it only does what you tell it, and even that requires it to be taught the "rules" first. They do indeed have qualms about "killing it" after the experiment. A solution is found - it would be a major spoiler to explain what it is. At one point "Alice Watson" attends a programming course. She/it is totally useless at programming, and only as good at spotting mistakes as the compiler. Likely the book will be out in 2024 or so, as there are other titles in the series.

*Clones are another misused SF trope. No clone has the memories or personality of the original DNA donor. Even identical twins are much less identical than most people realise - they don't share fingerprints. You really don't want to know about the USA's separation of twins to different foster parents for "research".
13-11-2019, 06:01 PM
Always bear in mind Clarke's Laws. As formulated by Arthur C Clarke: https://en.wikipedia.org/wiki/Clarke%27s_three_laws
Notably:
Quote: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
At a lighter level, the song "They All Laughed": https://www.youtube.com/watch?v=VpcEhFlwYi0
Quote: ........They all laughed when Edison recorded sound
And there are still people who believe the earth is flat and that man didn't go to the moon. Just because something seems difficult now, even far-fetched, doesn't mean it isn't possible - and it might be possible sooner than you think. AI looks dubious and debatable right now; in 100 years' time it may prove to be a dead end, or it may be part of everyday life. Look back 100 years to the early days of broadcasting - who then would have predicted the communications we have now?
www.borinsky.co.uk Jeffrey Borinsky www.becg.tv
13-11-2019, 07:23 PM
(This post was last modified: 13-11-2019, 07:26 PM by Mike Watterson.)
Some things/ideas don't need better technology - they would need the universe to have different laws.
Actually, a few people did see what wired and wireless transmission would bring. Sound broadcasting predated commercial radio by many years; entertainment was provided over the telephone network in Hungary before radio broadcast. Wireless electronic versions of cinema were envisaged as early as cinema itself. Text-based telegraph networks had replaced Morse telegraphy nearly 100 years ago, with wireless versions (text and image based) before WWII. Then there's Ada Lovelace. Very much of what we have grew from Victorian-era vision and experiment - even semiconductors, which had a 35-year lull due to the success of valves.

A working starship is at present more plausible than AI, because we have no idea at all how to sensibly do interstellar travel, whereas we do know what the inherent limitations of computers are. If A.I. is possible, it will more likely be some sort of dubiously ethical biological development, as envisaged by Cordwainer Smith (Paul Linebarger), not something based on computers. I remember in the late 1980s some people who knew little about computers claiming we just needed more storage and processing power. Other, more expert people (some of whom were selling Expert Systems - a thing now marketed as AI, though the "training" is rather different now) agreed that if AI was possible, we could have built a slow one in the 1970s.

There is a fundamental gap between how biological systems work and how computers work. "Neural networks" is a marketing term - nothing to do with how biological systems work. We might some day define intelligence sensibly; at the minute we don't actually have rigorous definitions of intelligence, sentience, or self-awareness. The mirror test is flawed for complicated reasons: it may give both false positives and false negatives. However, we do know EXACTLY how computers work and what their limitations are - Turing formalised this and gave a good general proof in the 1930s.
But others, from the Chinese, Greek and Arab automata designers, to Ada Lovelace, to Konrad Zuse (a working computer before WWII), increasingly saw the truth: everything had to be designed in by the human in advance. The systems, even with feedback, are unbelievably "fragile".

Rooks are lazy. You will not easily see any intelligent behaviour from them in the wild, except perhaps avoiding people carrying a shotgun. Despite spending a lot of time wandering car parks and roads, they don't get hit by cars as often as some other creatures, and they seem to understand traffic lights. Put them in a lab and they are astounding. Yet their rook parents didn't teach them how to make "fishing hooks", or fill a jar with pebbles so the water rises and they can drink, or open padlocks with a key.

All computers are simply very fast adding machines, with tests and branches, preprogrammed by humans. Programs, and so-called "Machine Learning", are so fragile you'd not believe it. Also, exactly how useful would real AI be? What could you use it for in a trustworthy way? Many current systems called AI, which are not actually AI, are appalling - the performance is often a marketing lie. Exactly how would facial recognition, self-serve checkouts, cancer diagnosis or self-driving cars be safer, more accurate and more trustworthy with real AI? Who would train the systems and verify them? Who would be responsible? We can't trust the big US companies currently marketing the dumb, stupid AI, which is basically 1980s Expert Systems on steroids. None of it is AI in the sense ANYONE from 1940 to 1990 defined A.I. That computers can do A.I. is one of the most pernicious and dangerous modern myths, and it's being shamelessly exploited, mainly by USA corporations that think they are above the law. The USA doesn't believe in proper regulation anyway.
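A sketch of what "a very fast adding machine with tests and branches" means in practice: a toy stored-program machine whose instruction set (invented here for illustration) is nothing but add, conditional jump and halt, yet which computes multiplication. Real CPUs are this idea executed billions of times per second:

```python
# Memory holds both the program and its data; the "CPU" loop does
# nothing but fetch, decode and execute 4-word instructions.

def run(mem):
    """Execute the program in mem until HALT; mem is code AND data."""
    pc = 0
    while True:
        op, a, b, c = mem[pc:pc + 4]
        if op == 0:                      # HALT
            return mem
        elif op == 1:                    # ADD: mem[c] = mem[a] + mem[b]
            mem[c] = mem[a] + mem[b]
            pc += 4
        elif op == 2:                    # JNZ: if mem[a] != 0, jump to b
            pc = b if mem[a] != 0 else pc + 4
        else:
            raise ValueError(f"bad opcode {op}")

# Program: add mem[20] into mem[21] while counting down mem[22] -
# multiplication built entirely from add / test / branch.
mem = [0] * 24
mem[20], mem[21], mem[22], mem[23] = 7, 0, 3, -1
mem[0:16] = [
    1, 20, 21, 21,   # mem[21] += mem[20]
    1, 22, 23, 22,   # mem[22] += -1  (decrement counter)
    2, 22, 0, 0,     # if mem[22] != 0, jump back to address 0
    0, 0, 0, 0,      # HALT
]
run(mem)
print(mem[21])  # 7 * 3 = 21
```

Everything the machine "does" was put there, in advance, by whoever laid out that memory - which is the sense in which all computing is preprogrammed by humans.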