HealthDay News, via MSN, reports:
The first new guidelines in 27 years for the diagnosis of Alzheimer’s disease could double the number of Americans defined as having the brain-robbing illness.
The guidelines, issued Tuesday by the Alzheimer’s Association and the U.S. National Institute on Aging, differ in two important ways from the last recommendations, which have been in use since 1984.
First, Alzheimer’s is now being recognized as a continuum of stages: Alzheimer’s itself with clear symptoms; mild cognitive impairment (MCI) with mild symptoms; and also the “preclinical” stage, when there are no symptoms but when recognizable brain changes may already be occurring.
Second, the new guidelines incorporate the use of so-called “biomarkers” — such as the levels of certain proteins in blood or spinal fluid — to diagnose the disease and assess its progress, though for now almost exclusively for research purposes.
Wired.com reports that the Allen Institute for Brain Science, founded by Paul Allen, has completed a multi-year project to map the human brain – and to make that map (actually a kind of virtual atlas) publicly available.
When asked why the Allen Institute undertook such an endeavor, CEO Allan Jones told Wired:
The Allen Institute operates on a different model than most research institutes, with a focus on creating catalytic resources for other researchers around the world. Our mouse brain atlas, which was completed in 2006, has really proved to be an extraordinary resource for scientists and is used by approximately 10,000 unique users from around the globe every month. It represents for researchers a reference for new discovery, hypothesis generation and confirmation of their own data, and often saves them from having to do an experiment themselves in the lab, which in turn saves time and money.
The New York Times reports:
Thanks to advances in artificial intelligence, “e-discovery” software can analyze documents in a fraction of the time for a fraction of the cost [relative to human analysis]…
Some programs go beyond just finding documents with relevant terms at computer speeds. They can extract relevant concepts — like documents relevant to social protest in the Middle East — even in the absence of specific terms, and deduce patterns of behavior that would have eluded lawyers examining millions of documents…
Computers are getting better at mimicking human reasoning — as viewers of “Jeopardy!” found out when they saw Watson beat its human opponents — and they are claiming work once done by people in high-paying professions.
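The “concept extraction” the Times describes can be approximated with classical latent semantic analysis: fit a handful of latent dimensions from term co-occurrence, project the documents and the query into that space, and rank by similarity. Here is a minimal sketch in Python; the corpus and query are invented, and real e-discovery systems fit far richer models on millions of documents.

```python
# A toy illustration of concept-level retrieval via latent semantic analysis (LSA).
# The documents and query are made up; this is not any vendor's actual pipeline,
# and a corpus this small only shows the mechanics, not realistic rankings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Protest organizers used social media to coordinate the demonstration.",
    "Demonstrators marched through the capital demanding political reform.",
    "The demonstration drew thousands demanding the government resign.",
    "Quarterly earnings rose three percent on strong overseas sales.",
    "The board approved the merger after months of negotiation.",
    "Engineers shipped the new firmware update ahead of schedule.",
]
query = "social protest"

# Baseline: literal term matching finds only the document that uses the query's words.
keyword_hits = [d for d in documents if "protest" in d.lower() or "social" in d.lower()]

# LSA: learn a few latent dimensions from term co-occurrence, then rank every
# document by cosine similarity to the query in that latent space.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents + [query])
latent = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)
doc_vecs, query_vec = latent[:-1], latent[-1:]
scores = cosine_similarity(query_vec, doc_vecs)[0]

print("keyword matches:", keyword_hits)
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:+.2f}  {doc}")
```

With enough data, documents about marches and demonstrations can rank highly for a “protest” query even though they never contain that word, which is the behavior the article attributes to the newer e-discovery tools.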
SRI International, based in Menlo Park, California, is a non-profit contract research institute founded in 1946 by Stanford University and spun off to become an independent entity in 1970. SRI has done pioneering work in various fields, including artificial intelligence and human-computer interaction.
In this article for TechCrunch.com, Robert Scoble shares video footage of a recent visit he made to SRI, where he had an opportunity to see SRI researchers’ work on ‘augmented reality,’ including new haptic feedback interfaces and speech translation systems. It’s a fascinating look at some cutting-edge research of interest to cognitive scientists.
Note: This is cross-posted with my Education Blog
USA Today reports:
At the Fourth International Conference on Writing Research, the Educational Testing Service presented evidence that a pilot test of automated grading of freshman writing placement tests at the New Jersey Institute of Technology showed that computer programs can be trusted with the job…
But a writing scholar at the Massachusetts Institute of Technology presented research questioning the ETS findings, and arguing that the testing service’s formula for automated essay grading favors verbosity over originality. Further, the critique suggested that ETS was able to get good results only because it tested short answer essays with limited time for students…
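The verbosity critique is easy to see in miniature: any scoring model that puts positive weight on length-correlated surface features can be gamed by padding. The sketch below uses made-up features and weights; it is not ETS’s proprietary e-rater model, only an illustration of the failure mode.

```python
# A toy surface-feature essay scorer -- NOT the ETS e-rater model, whose features
# and weights are proprietary. The point is only that a linear score with positive
# weight on length-correlated features rewards padding over concision.
import re

def surface_features(essay: str) -> dict:
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "unique_word_ratio": len(set(words)) / max(len(words), 1),
    }

# Hypothetical weights; any positive weight on word count has the same qualitative effect.
WEIGHTS = {"word_count": 0.05, "avg_sentence_length": 0.1, "unique_word_ratio": 2.0}

def score(essay: str) -> float:
    return sum(WEIGHTS[name] * value for name, value in surface_features(essay).items())

concise = "Testing writing by machine is risky because machines reward what they can count."
padded = ("Testing writing by machine is risky because machines reward what they can count. "
          "It is truly and genuinely the case, in each and every instance, that machines "
          "reward what they can count, and they reward it again and again and again.")

print(f"concise: {score(concise):.2f}")  # lower score
print(f"padded:  {score(padded):.2f}")   # higher score despite the repetition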
The BBC reports:
Monkeys trained to play computer games have helped to show that it is not just humans that feel self-doubt and uncertainty, a study says.
US-based scientists found that macaques will “pass” rather than risk choosing the wrong answer in a brainteaser task.
Awareness of our own thinking was believed to be a uniquely human trait.
But the study, presented at the AAAS meeting in Washington DC, suggests that our more primitive primate relatives are capable of such self-awareness.
The New York Times reports:
In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.
Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.
For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.
Now, as the pace of technological change continues to accelerate, it has become increasingly possible to design computing systems that enhance the human experience, or now — in a growing number of cases — completely dispense with it.
Thanks to my friend and fellow Ph.D. candidate Lee Becker, I have an update on my earlier post regarding Daryl Bem’s ESP paper.
On his blog, Columbia University statistician Andrew Gelman writes:
All the statistical sophistication in the world won’t help you if you’re studying a null effect. This is not to say that the actual effect is zero–who am I to say?–just that the comments about the high-quality statistics in the article don’t say much to me…
As David Weakliem and I have discussed, classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects.
I think it’s naive when people implicitly assume that the study’s claims are correct, or the study’s statistical methods are weak. Generally, the smaller the effects you’re studying, the better the statistics you need. ESP is a field of small effects and so ESP researchers use high-quality statistics.
To put it another way: whatever methodological errors happen to be in the paper in question probably occur in lots of research papers in “legitimate” psychology research. The difference is that when you’re studying a large, robust phenomenon, little statistical errors won’t be so damaging as in a study of a fragile, possibly zero effect.
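Gelman’s point is easy to demonstrate with a quick simulation. The effect size and sample size below are assumptions chosen for illustration, not figures from Bem’s paper: when the true effect is tiny, the few experiments that do cross p < 0.05 overstate the effect several-fold, and a non-trivial share even get its sign wrong.

```python
# Illustrative simulation (made-up effect size and sample size, not Bem's data):
# with a tiny true effect, the rare "significant" experiments badly overestimate it,
# and some estimate it in the wrong direction -- good statistics can't rescue
# an underpowered question.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.05      # standardized mean difference (assumed, tiny)
n_per_group = 100
n_experiments = 10_000

significant_estimates = []
for _ in range(n_experiments):
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        significant_estimates.append(treatment.mean() - control.mean())

significant_estimates = np.array(significant_estimates)
print(f"share of experiments reaching p < 0.05: {len(significant_estimates) / n_experiments:.1%}")
print(f"mean estimate among significant results: {significant_estimates.mean():.2f} "
      f"(true effect is {true_effect})")
print(f"significant results with the wrong sign: {np.mean(significant_estimates < 0):.1%}")
```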