05-11-2019, 09:56 AM
A.I.
05-11-2019, 10:35 AM
I heard it this morning. "Life Scientific" is one of the many gems on Radio 4. Hassabis made a good case for AI. He touched on difficult topics such as human interaction with AI and finding out how and why AI makes its decisions. This last is a very real problem: most AI systems are notoriously opaque about how they reach their results.
www.borinsky.co.uk Jeffrey Borinsky www.becg.tv
05-11-2019, 06:05 PM
Classical AI does not exist and may never exist. Modern so-called AI has no intelligence. It's just a development of 1980s expert systems, where the initial database (so-called training) uses various methods to store the human-curated initial data. It's very dependent on the judgement of the programmers and the selection of "training" data. It's very fragile.
Almost everything in the media about AI is misleading. We still have no clear definition of real intelligence, nor an understanding of how rooks, pigs, primates, dogs, horses etc. do what they do. Hence the AI paradox: stuff that's simple for a two-year-old human is impossible for computers, while stuff that is impossible for people can be trivial for computers. There is no computer pattern recognition, language translation or facial recognition; it's MATCHING captured data to human-curated stored data. Some companies are using computer "AI" interviews. It's probably illegal. It's also stupid and discriminatory.
05-11-2019, 07:47 PM
We had an AI computer monitoring our paper-making machine. It would tell us what to do and the machine tender would do what needed to be done. Over time, the advice it gave became comical; I think it learned every possible wrong adjustment to make. It was good that the AI was only advisory. It was reading all the instruments on the machine, so any time the product got off spec, old AI would start issuing instructions. The alarm would not stop until the machine tender pushed the acknowledge button, which told the AI the advice was received. When the problem was solved, the AI assumed its advice had been good. (GIGO)
Bob applied for a job as a lab instrument repairman. The instrument flag popped up and he was hired to help me install a process control computer. It was not funny: he sold his house and moved 2000 miles for the job. He was a quick study; I gave him OJT and we got the job done. He went back to college and cross-trained. The computer had put his resume in the hands of a human who didn't know the difference. The strangest case I've heard of was the instrument controls tech hired because he had "fire control" on his resume. The interviewer thought fire control meant aiming the guns, not putting out fires.
06-11-2019, 08:26 PM
It's worth noting that DeepMind is most certainly not an "expert system" or any sort of derivative thereof.
Demis Hassabis made it clear that DeepMind's program was simply given the rules of Go and then played against itself for a few hours, determining winning strategies and move evaluations for itself. Likewise for chess: after being given just the basic rules of the game, the system learnt by playing against itself with no human intervention. Following each of these self-teaching exercises, the programs beat the respective world champions.
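The self-play idea can be sketched in a few lines. This is only a toy illustration of the principle, not DeepMind's actual method (which combines deep networks with tree search): a tabular learner given nothing but the rules of Nim, estimating move values purely by playing against itself.

```python
import random

# Toy self-play learner for Nim: players alternately take 1 or 2
# objects from a pile; whoever takes the last object wins. The
# program is given only the rules and learns move values by playing
# against itself, crediting every move in a game with its player's
# eventual result.
PILE, EPISODES, ALPHA = 10, 20000, 0.1
value = {}  # (pile, move) -> estimated win probability for the mover

def best_move(pile, explore):
    moves = [m for m in (1, 2) if m <= pile]
    if explore and random.random() < 0.2:   # occasional exploration
        return random.choice(moves)
    return max(moves, key=lambda m: value.get((pile, m), 0.5))

random.seed(0)
for _ in range(EPISODES):
    pile, history = PILE, []                # one (pile, move) per turn
    while pile > 0:
        m = best_move(pile, explore=True)
        history.append((pile, m))
        pile -= m
    # The player who made the last move won; walking backwards through
    # the game, the reward flips between the two alternating players.
    reward = 1.0
    for state in reversed(history):
        old = value.get(state, 0.5)
        value[state] = old + ALPHA * (reward - old)
        reward = 1.0 - reward

# In Nim the winning strategy is to leave a multiple of 3, so from a
# pile of 10 the learner should come to prefer taking 1 (leaving 9).
print(best_move(10, explore=False))
```

No human tells the learner that multiples of 3 matter; the preference emerges from the self-play results alone, which is the point Hassabis was making.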
sıʌǝɹq ɐʇıʌ `ɐƃuol sɹɐ
ʞɔıu
06-11-2019, 09:00 PM
These developments are very clever, but games are typically between a small number of players with very clearly defined rules. For chess and go, they are also games with perfect information; there is nothing hidden. Bridge and poker are more interesting because there is incomplete information. I think that machines have made significant advances in poker.
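The "nothing hidden" property is what makes such games tractable in principle: the entire game tree can be searched. A minimal minimax sketch for tic-tac-toe (small enough to solve exactly, unlike chess or go) makes this concrete:

```python
from functools import lru_cache

# Perfect-information games can in principle be solved by exhaustive
# search. Tic-tac-toe is small enough to do it exactly: minimax shows
# that perfect play by both sides is always a draw.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
         (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value for X: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0                      # board full, no winner: draw
    results = []
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i+1:]
            results.append(minimax(child, 'O' if player == 'X' else 'X'))
    # X maximises the value, O minimises it.
    return max(results) if player == 'X' else min(results)

print(minimax('.' * 9, 'X'))          # 0: perfect play is a draw
```

With incomplete information, as in bridge or poker, no such single tree exists; a player must reason over distributions of hidden hands, which is why those games resisted machines for longer.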
As for curing the common cold or creating world peace, machines are nowhere near as good as humans. Not that humans are very good at these things either.
www.borinsky.co.uk Jeffrey Borinsky www.becg.tv
06-11-2019, 10:47 PM
(06-11-2019, 09:00 PM)ppppenguin Wrote: These developments are very clever, but games are typically between a small number of players with very clearly defined rules. For chess and go, they are also games with perfect information; there is nothing hidden. Bridge and poker are more interesting because there is incomplete information. I think that machines have made significant advances in poker.

What is REALLY interesting is the use of neural nets and genuine AI in training systems to analyse mammograms, retinal scans and colonoscopy images.

With mammograms, there is a preexisting MASSIVE data-set with known outcomes. You don't tell the system what to look for, just "here's an image and the outcome was positive or negative"; the system determines where the indicators lie. The results are so far extremely encouraging. The system is also being used to identify current false negatives and positives, again with encouraging results. The ability to handle tomosynthesis data is also being added.

Colon cancer is a notoriously difficult disease to diagnose. AI-based systems are proving more accurate, with fewer missed positives and fewer false positives, just as in mammography.

The retinal scan project is in conjunction with Moorfields Eye Hospital (who saved my sight), again using preexisting datasets of images and known outcomes to determine whether precursors of issues are present.

In all these examples, the AI systems work with existing human consultants to aid diagnosis and treatment. They also help improve the quality of the consultants' skills by indicating where on the images the AI system has determined there's a possible issue. These are areas where AI (in its current form) can really help. All positive.
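The "don't tell it what to look for" approach is supervised learning: show the system inputs plus known outcomes and let it find the indicators itself. A toy sketch with entirely synthetic data (invented numbers, not real scans): each "scan" is five features, and only feature 2 actually correlates with a positive outcome, a fact the model is never told.

```python
import math
import random

# Supervised learning from outcome labels alone: each synthetic
# "scan" is 5 numbers; unknown to the model, only feature 2 carries
# any signal for a positive outcome. Logistic regression trained on
# (scan, outcome) pairs should discover that by itself.
random.seed(1)

def make_scan():
    positive = random.random() < 0.5
    scan = [random.gauss(0, 1) for _ in range(5)]
    if positive:
        scan[2] += 2.0            # the hidden indicator
    return scan, 1 if positive else 0

data = [make_scan() for _ in range(2000)]
w, b, lr = [0.0] * 5, 0.0, 0.1

for _ in range(50):               # plain stochastic gradient descent on log-loss
    for scan, y in data:
        z = sum(wi * xi for wi, xi in zip(w, scan)) + b
        p = 1 / (1 + math.exp(-z))
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, scan)]
        b -= lr * err

# Nobody told the model where to look, yet the learnt weights single
# out feature 2 and leave the others near zero.
print([round(wi, 2) for wi in w])
```

The real medical systems use deep networks on images rather than five hand-made numbers, but the training principle described in the post is the same: only the outcome label is supplied, never the location of the indicator.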
sıʌǝɹq ɐʇıʌ `ɐƃuol sɹɐ
ʞɔıu
07-11-2019, 10:59 AM
(06-11-2019, 10:47 PM)Nick Wrote: What is REALLY interesting is the use of neural nets and genuine AI in training systems to analyse mammograms, retinal scans and colonoscopy images.

It's simply pattern matching, with "training" images curated by humans. ZERO AI involved, it's not as good as experts, and it's hyped by big tech companies. Google's parent Alphabet is involved so as to get personal information. UK health trusts did dodgy deals. With too much reliance, in a while you have no experts. Then, as natural biological processes drift, the so-called AI gets worse and there isn't the human expertise to examine patients or update the software. There is a more basic problem with self-driving, already discovered with aircraft autopilots.

A "neural net" is another lying marketing term, with nothing to do with how real neural systems work. It's a data-flow structured database, rather than a relational type. The so-called training is basically loading human-curated data for the later pattern matching. They can be "fooled" easily, and it's really hard to know their accuracy except by testing with known input data. The so-called rules are just the original human-curated data and any added helper algorithms, all subject to the bias and errors of the people building the system. Expect to see more scandals like Theranos blood testing.

IBM's Watson AI for medicine, supposedly based on the software that "won" Jeopardy, was found to be a total waste of money in the USA. The product has been quietly withdrawn. The Jeopardy win wasn't AI; it was a specialist interface to a really big database, really only applicable to building a better search engine. Except that real search on Amazon, Google etc. is polluted by trackers and paid adverts and has gone down in quality! The lies of AI marketing are examined in various books.

Recently an Uber taxi ran down and killed a woman wheeling a bicycle across a road. Since jaywalking is illegal in the USA, the programming only turned on pedestrian detection at designated crossings. An additional safety braking system was disabled because the car stopped too often. Tesla isn't allowed to call its system anything other than "cruise control" in some countries. It's not a real autopilot or self-driving system.
07-11-2019, 12:13 PM
I was referring to specific examples where humans do not provide any rules - none of those are expert systems or derivatives thereof.
Your Theranos example is irrelevant in this context, as all the systems I referenced are academic studies, openly peer-reviewed in detail, unlike Theranos, which was a commercial proprietary confidence trick. Stanford University is one of those running AI mammography trials, loads of universities and teaching hospitals are looking at AI in colon cancer, and Moorfields, one of the best (if not the leading) eye hospitals in Europe, is leading the retinal study. It's easy to be cynical, and certainly in industry claims of "AI" are wildly exaggerated, but in peer-reviewed academia it's very difficult to hide if you're bluffing! I guess we'll just have to agree to disagree.
sıʌǝɹq ɐʇıʌ `ɐƃuol sɹɐ
ʞɔıu
07-11-2019, 03:49 PM
(07-11-2019, 12:13 PM)Nick Wrote: It's easy to be cynical and certainly in industry claims of "AI" are wildly exaggerated, but in peer-reviewed academia it's very difficult to hide if you're bluffing!

I was not in this field, so cannot comment well on AI, but in the field I was in I did notice at times much fighting among the front runners for the honours. In limited fields it seems fairly easy to steamroll opposition, as someone working at the limits is difficult to challenge too strongly without putting yourself in an adverse position, since there will be few people competent to do so. In this way, people in the know will be far more familiar with the limitations, or otherwise, of any particular method, but will perhaps be careful about going against it unless they have good backing. Therefore people out of the loop will generally see only the better bits, unless something major fails. With AI this may be less the case, as it is more mainstream; but, although never a lead researcher, I was able to see many ideas which were poor and eventually proved so (and the reverse too). But if you are a small cog? Tracy