Michael J. Behe's Blog, page 530
February 2, 2019
Brendan Dixon: Artificial Intelligence Is Actually Superficial Intelligence

The confusing ways the word “intelligence” is used belie the differences between human intelligence and machine sophistication
We’ve known, for as long as we’ve had chess-playing programs, that computers do not play chess like we play chess. We’ve lost sight of that difference because recent artificial intelligence (AI) program advances have overcome higher hurdles than previous programs. Computers now win at dynamic strategy games, translate languages, analyze MRIs, and even recognize cats. These advances seem, on the surface, to convey the idea that more is going on than mere programming, that computers are living up to their designation as “intelligent” in the same sense as a human being.
But we should know better. And recent research into how the latest advances differ from human mental activity demonstrates that. More.
Brendan Dixon of the Biologic Institute is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Though he’s spent the majority of his career on other types of software, he’s remained engaged and interested in the field.
Also by Brendan Dixon: The Numbers Don’t Speak for Themselves. The patterns uncovered by machine learning may reflect a larger reality or just a bias in gathering data. Because Machine Learning is opaque—even experts cannot clearly explain how a system arrived at a conclusion—we treat it as magic. Therefore, we should mistrust the systems until proven innocent (and correct).
AI Winter Is Coming: Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.
and
The “Superintelligent AI” Myth: The problem that even the skeptical Deep Learning researcher left out
Follow UD News at Twitter!
Adult human brains apparently do generate new nerve cells
Not a lot of them, but it adds up over time:
Just last year, two opposing papers appeared in leading journals, one claiming firm evidence of ongoing neurogenesis in the adult human dentate gyrus, while the other study came to the opposite conclusion. The fact that adult neurogenesis is reliably seen in rodents only adds to the confusion. Neuroskeptic, “A New Look at Neurogenesis in Humans” at Discover Magazine
Neuroskeptic also quotes an authority (Snyder): “Spalding et al. estimated that only 0.004% of neurons are added each day in adult humans [10]. While this would appear negligible under the microscope (1 cell in 25,000), it translates to ∼15% over a decade; a sizable fraction…”
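The arithmetic behind that quote is easy to check. Here is a minimal sketch in Python; the 0.004%-per-day figure and the ten-year span come from the quote, while the simple-versus-compounded comparison is only our illustration:

```python
# Rough check of the figures quoted from Snyder (citing Spalding et al.):
# 0.004% of dentate gyrus neurons added per day, accumulated over ten years.
daily_rate = 0.004 / 100          # 0.004% per day as a fraction (1 cell in 25,000)
days = 365 * 10                   # one decade

simple_total = daily_rate * days                # straight accumulation, no compounding
compound_total = (1 + daily_rate) ** days - 1   # if each day's addition also scales

print(f"simple accumulation over a decade:     {simple_total:.1%}")    # ~14.6%
print(f"compounded accumulation over a decade: {compound_total:.1%}")  # ~15.7%
```

Either way, the decade total lands close to the ~15% figure Snyder cites.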
It will be interesting to see if these nerves are unusually sensitive to changes, for example, epigenetic changes.
See also: Mechanosensing and mechanotransduction: How cells touch their world
and
Epigenetic change: Lamarck, wake up, you’re wanted in the conference room!
Follow UD News at Twitter!
Michael Egnor: How the internet turns coffee klatches into mobs

Here’s neurosurgeon Michael Egnor living up to his reputation as Darwinian evolutionary biologist Jerry Coyne’s “archenemy” (Coyne’s designation):
A philosopher sheds light on how the Covington high school kids became America’s Most Hated:
The internet provides two dynamics that inflame hatred and even violence: obscurity and contagion. By obscurity, I mean that the traditional “one-on-one” nature of personal attacks is circumvented by the anonymity of the internet. On the internet, you can personally attack someone without ever seeing them, knowing them, or being anywhere near them. You can attack people in a way that leads to violence against them without your own identity ever coming to light. The anonymity of the internet and the distance it creates between an attacker and his victim both lend an obscurity to the attack that is much more dangerous to the victim and much more desirable for the attacker. It is even possible to harm others unintentionally through the spread of errors and misunderstandings which are so common to internet communication.
This dynamic of obscurity was noted by military strategists involved in the development of artillery, which is the most lethal weapon on the modern battlefield. Part of the lethality of artillery — in addition to its inherent power to maim and kill — is its ability to maim and kill at a distance. More.
Michael Egnor is a neurosurgeon, professor of Neurological Surgery and Pediatrics, and Director of Pediatric Neurosurgery, Department of Neurological Surgery, Stony Brook School of Medicine.
See also: Does brain stimulation research challenge free will? (Michael Egnor)
Is free will a dangerous myth? (Michael Egnor)
and
Neurosurgeon Michael Egnor has become Darwinian Jerry Coyne’s “archenemy” Coyne has good taste in archenemies. It shouldn’t go unrewarded.
Follow UD News at Twitter!
Neurosurgeon Michael Egnor has become Darwinian Jerry Coyne’s “archenemy”
If Darwinian evolutionary biologist Jerry Coyne is aiming for a stint at Marvel Comics, he needs an archenemy, of course. And we could always use the revenue around here from sales of Michael Egnor tee shirts and fridge magnets.
Coyne said, in response to science philosopher Massimo Pigliucci crabbing about him,
I’ve never considered Pigliucci my “archenemy” (I’d reserve that term for creationists like Michael Egnor), and I’m surprised that he sees me as his when we largely agree on most things. Jerry Coyne, “Massimo Pigliucci goes after “scientism” for the umpteenth time” at Why Evolution Is True

David Klinghoffer replied,
Archenemy sounds a little…obsessive. It’s also ironic considering that Coyne has repeatedly noted what he thinks is our “obsession” with him. He’s written, “Klinghoffer is absolutely obsessed with me”; “Discovery Institute has a bit of an obsession with me”; Egnor, “who’s obsessed with me, blames my views on ‘materialism and Darwinism’”; Egnor’s “obsession with me is regularly on parade at the Discovery Institute website Evolution News.” Obsessions on parade, I like that image. We’re not alone, either. Coyne believes Karl Giberson has a “weird obsession with me,” as does Robert Wright, who “is literally obsessed with me.”
Archenemy? No. Grand obsession? Not really. The truth is that the highly prolific Dr. Coyne provides comic relief with equal parts of real instruction about the inane conclusions that follow from his atheism, materialism, and Darwinism. He’s a teachable moment unto himself, a useful and entertaining one. David Klinghoffer, “Congratulations! Coyne Promotes Egnor to ‘Archenemy’” at Evolution News and Science Today
That’s true. Some readers may not have followed our reporting on Coyne suddenly discovering the pussyhats’ anti-Semitism and soon after recoiling from the discovery that Templeton is committed, as he is, to academic freedom.
Don’t cheat yourself. Read about it. It’s not every day that a truly dense person starts to de-densify like that. Most just go on getting denser.
Besides, he has good taste in archenemies. It shouldn’t go unrewarded.
See also: Women’s March Falling Apart Before The Anti-Semitism Gets To Science?
Jerry Coyne has another reason to be mad at Templeton
Does brain stimulation research challenge free will? (Michael Egnor)
and
Is free will a dangerous myth? (Michael Egnor)
Follow UD News at Twitter!
Reference: pregnancy, week by week:
Cf. here for pictures and descriptions. END
February 1, 2019
“If’n I Drop, I’m Gonna Be in Motion”
In a recent post I took umbrage at a writer who said:
“If determinism is also true, that does not mean that free will is false.”
Well, yes, it kinda does, because those two things — determined and free — are mutually exclusive. The whole thing put me in mind of a scene from one of my favorite movies, Raising Arizona. Enjoy.
We actually don’t know the precise value of the Hubble Constant

Which has an impact on end-of-the-universe scenarios:
Now, using gravitational wave signals from the merger of two black holes and redshift data from one of the most ambitious sky surveys ever conducted, researchers have developed an entirely new way to calculate the Hubble constant. They described the method in a study they submitted to The Astrophysical Journal Letters and posted on the preprint site arXiv on January 6. In it they report a value of 75.2 for the constant, albeit with a large margin of error (+39.5, –32.4, meaning the actual number could range up to 114.7 or go as low as 42.8). This large uncertainty reflects the fact the calculation comes from a single measurement, and thus does not yet help clear up the tension between the original two calculation methods. But as a proof of concept, the technique is groundbreaking. Only one other measurement, from October 2017, has attempted to calculate the Hubble constant using gravitational waves. Scientists hope future gravitational wave detections will help them improve the precision of their calculation. Jim Daley, “The Universe’s Fate Rests on the Hubble Constant—Which Has So Far Eluded Astronomers” at Scientific American
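For readers who want the quoted range spelled out, the bounds follow directly from the figures above; nothing here beyond simple addition and subtraction:

```python
# Figures quoted from the Scientific American piece: H0 = 75.2 (+39.5, -32.4) km/s/Mpc
h0, err_up, err_down = 75.2, 39.5, 32.4
print(f"upper bound: {h0 + err_up:.1f}")    # 114.7
print(f"lower bound: {h0 - err_down:.1f}")  # 42.8
```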
Thank goodness we were never in any danger of running out of end-of-the-universe/world/world-as-we-know-it scenarios anyway.
See also: New Findings: Discrepant Values In Universe’s Expansion Make Everything Murkier
and
Is Cosmology “In Crisis” Over How To Measure The Universe?
Follow UD News at Twitter!
Does the war on cancer reveal limits to random mutation?

Darwinism, of course, claims that natural selection acting on random mutation produces all the ordered complexity we see around us. Yet, researchers who are not looking for evidence for that often behave as though they don’t think it’s true. Here’s a promising cancer treatment from Israel:
For starters, most anti-cancer drugs attack a specific target on or in the cancer cell, he explained. Inhibiting the target usually affects a physiological pathway that promotes cancer. Mutations in the targets – or downstream in their physiological pathways – could make the targets not relevant to the cancer nature of the cell, and hence the drug attacking it is rendered ineffective.
In contrast, MuTaTo is using a combination of several cancer-targeting peptides for each cancer cell at the same time, combined with a strong peptide toxin that would kill cancer cells specifically. By using at least three targeting peptides on the same structure with a strong toxin, Morad said, “we made sure that the treatment will not be affected by mutations; cancer cells can mutate in such a way that targeted receptors are dropped by the cancer.”
“The probability of having multiple mutations that would modify all targeted receptors simultaneously decreases dramatically with the number of targets used,” Morad continued. “Instead of attacking receptors one at a time, we attack receptors three at a time – not even cancer can mutate three receptors at the same time.” Maayan Jaffe-Hoffman, “A Cure for Cancer? Israeli Scientists May Have Found One” at The Jerusalem Post
Time will tell whether their treatment works, but note that actual numerical limits are suggested here on the number of mutations that can happen randomly at the same time. Mathematics, not religion, is the enemy of Darwinism.
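Morad’s point is, at bottom, a probability claim: if the mutations that drop each targeted receptor arise independently, the chance that a single cell drops all of them at once falls off roughly as a power of the number of targets. Here is a minimal sketch of that reasoning in Python, using an invented per-receptor probability purely for illustration (the article gives no actual figures):

```python
# Illustrative only: p_single is a made-up per-cell probability that one
# targeted receptor is lost by mutation; the Jerusalem Post article gives
# no real numbers.
p_single = 1e-6

for n_targets in (1, 2, 3):
    p_escape = p_single ** n_targets  # assumes the mutations arise independently
    print(f"{n_targets} target(s): escape probability ~ {p_escape:.0e}")

# Output: 1e-06, 1e-12, 1e-18 -- this is the sense in which the escape
# probability "decreases dramatically with the number of targets used."
```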
See also: Researchers: A Kill Cancer Code Is Embedded In Every Cell
Cell behavior can show “purposeful inefficiency”? What next?
and
Do cells use passwords?
Follow UD News at Twitter!
Attempting to define stupidity
But how does the proposed approach escape confirmation bias?
But what exactly is stupidity? David Krakauer, the President of the Santa Fe Institute, told interviewer Steve Paulson, for Nautilus, stupidity is not simply the opposite of intelligence. “Stupidity is using a rule where adding more data doesn’t improve your chances of getting [a problem] right,” Krakauer said. “In fact, it makes it more likely you’ll get it wrong.” Intelligence, on the other hand, is using a rule that allows you to solve complex problems with simple, elegant solutions. “Stupidity is a very interesting class of phenomena in human history, and it has to do with rule systems that have made it harder for us to arrive at the truth,” he said. Brian Gallagher, “The Case for Professors of Stupidity” at Nautilus
Krakauer notes that many people study intelligence but no one studies stupidity. That may have to do with the social awkwardness of explaining the research project to the subjects…
Cultural evolution theories “challenged” by multiple dwelling cave
Recent information shows that different groups occupied Denisova Cave between 300,000 and 210,000 years ago. It sounds as though they all might have made objects that require symbolic thinking:
The suggestion Denisovans developed the Upper Paleolithic artifacts at the site bears on a hot topic in paleoanthropology: the origins of modern behavior and cognition. Once upon a time, archaeologists thought only H. sapiens made symbolic items such as jewelry and advanced technology such as standardized bone tools. Then discoveries in the 1970s ignited debate over whether Neandertals also might have invented such items. In recent years evidence has mounted in support of a more sophisticated Neandertal. For instance, last year researchers reported cave paintings in Spain pre-date the arrival of H. sapiens to the region by thousands of years and must therefore be Neandertals’ handiwork. Neandertals, however, are not the only archaic hominin species to show signs of advanced cognition: In 2015 archaeologists unveiled their discovery of a shell that was engraved with a geometric design some 500,000 years ago—long before the origin of modern humans or Neandertals—the implication being that an earlier human ancestor known from this time period, Homo erectus, must have been the designer. Kate Wong, “Cave That Housed Neandertals and Denisovans Challenges View of Cultural Evolution” at Scientific American
This kind of find is treated as problematic because it means that the missing link is still missing. Nobody is the subhuman. That’s not good news for a Darwinian approach to human evolution, in which someone must be the subhuman.
See also: Neanderthals Were Way Smarter Hunters Than We Used To Think
and
In any Darwinian scheme, someone must be the subhuman. Otherwise, there is no beginning to human history.
Follow UD News at Twitter!