Steven Novella's Blog
October 14, 2025
New Physics Discovered in Metal Manufacturing
I attended a Ren Faire this past weekend, as I do most falls, and saw a forging demonstration. The cheeky blacksmith, staying in character the whole time, predicted that steel technology was so revolutionary and so useful that it would still be in wide use in the far future year of 2025. It is interesting to reflect on why, and to what extent, this is true. Once we figured out how to make steel both hard and strong it became difficult to beat it as an ideal material for many applications. SpaceX (a symbol of modern technology), in fact, builds its Starship rockets out of stainless steel.
However, steel technology has advanced quite a bit. The process of hardening and strengthening steel has been perfected. Further, there are many alloys of steel, made by mixing in small amounts of other metals. It is difficult to say how many alloys of steel exist, but the World Steel Association estimates there are 3,500 grades of steel in use (a grade includes the specific alloy, production method, and heat treatments). Each grade of steel is tweaked to optimize its features for its specific application – including hardness, strength, heat tolerance, radiation tolerance, resistance to rusting, ductility, springiness, and other features.
Steel is so versatile and useful that basic science research continues to explore every nuanced aspect of this material, trying to find new ways to alter and optimize its properties. One relatively recent advance is “superalloys” – which use complex alloy compositions in addition to highly controlled microstructures. Essentially, material scientists are finding very specific alloy ratios and manufacturing processes to create specific microstructures that have extreme properties. And of course, AI is being used to speed up the process of finding these specific superalloy formulas.
All of this is why I find it interesting that material scientists have discovered something very specific, but new, about how steel behaves. Without this context this may seem like a giant “so what” kind of finding, interesting only to metal nerds, but this kind of finding may point the way to future superalloys with even superior properties.
What they found is that steel alloys are not truly randomized even after extensive manufacturing. Again, it is not immediately obvious why this is interesting, but it is because this finding was totally unexpected. When you manufacture steel, the assumption has been that at some point any structure in the steel becomes completely randomized, also described as being at equilibrium. Think of this like shuffling a deck of cards – with enough shuffles, you should have a statistically random deck. Imagine if you shuffled a deck of cards far beyond the full randomization point, but then found that there was still some non-random arrangement of cards in the deck. Hmm…something must be going on. Probably you would suspect cheating. When the material scientists found essentially the same phenomenon in steel, however, they did not suspect cheating – they suspected that some previously unknown process was at work.
Through modeling and further experimentation they determined that small defects in the metal were not being broken up at random. There was a non-random aspect to the order in which these tiny structures were breaking apart. The researchers think it is because slightly weaker bonds were more likely to break before slightly stronger bonds, so the bonds creating the small defect structures were not being randomized but were maintaining some non-random order (a so-called far from equilibrium state), even after thorough processing. This phenomenon is also not limited to steel – it should apply to any metal.
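To get a rough intuition for what “non-random order” means here, consider a toy simulation (my own illustration, not the researchers’ model): give a population of bonds slightly different strengths, then compare breaking them in a purely random order versus preferentially breaking the weakest remaining bond first. Only the second case leaves a strong correlation between a bond’s strength and when it broke – a residue of order that thorough “shuffling” was supposed to erase.

import random

random.seed(0)
N = 2000
strengths = [random.gauss(1.0, 0.05) for _ in range(N)]  # slight bond-to-bond variation

# "Fully randomized" processing: bonds break in a completely shuffled order.
random_order = list(range(N))
random.shuffle(random_order)

# "Weakest-first" processing: slightly weaker bonds tend to break earlier.
biased_order = sorted(range(N), key=lambda i: strengths[i])

def order_strength_correlation(order):
    # correlation between when a bond broke and how strong it was
    mean_pos = (N - 1) / 2
    mean_s = sum(strengths) / N
    cov = sum((pos - mean_pos) * (strengths[i] - mean_s) for pos, i in enumerate(order))
    var_pos = sum((pos - mean_pos) ** 2 for pos in range(N))
    var_s = sum((s - mean_s) ** 2 for s in strengths)
    return cov / (var_pos * var_s) ** 0.5

print("random breaking:   ", round(order_strength_correlation(random_order), 3))  # ~0, no residual order
print("weakest-first bias:", round(order_strength_correlation(biased_order), 3))  # ~1, strong residual order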
It remains to be seen how useful this new property will turn out to be. The next step is for researchers to create a map of these non-random structures, the manufacturing processes that create them, and the effect they have on the properties of the metal. The more thorough this map becomes, the more metallurgists will be able to use this phenomenon to tune the properties of metal alloys. One early target is metals used as catalysts. Catalysts make chemical reactions go faster (often orders of magnitude faster – from a practical point of view, from not happening at all to happening robustly). Catalysts are critical for manufacturing processes, and finding the right catalyst often makes the difference between a technology working or not working. Having a map that allows for tuning this property could prove extremely useful.
This is, at best, an incremental advance in metallurgy, but it could prove extremely valuable, even if just a few new catalysts emerge from this process. Every incremental advance we make in material science changes the game of technology, making new applications possible.
This is why (if you will forgive a nerd tangent) science fiction almost always includes some fantasy materials. In order to explain the fantastical technology we see or read about, equally fantastical materials are necessary. This is why there is adamantium, unobtanium, mithril, and vibranium. Even in hard science fiction, like the recent Project Hail Mary, the aliens had xenonite, a super material made from xenon.
Unfortunately, none of these science fiction materials exist or are likely to exist. There are no new stable elements on the periodic table (newly discovered elements are inherently unstable, surviving for only fractions of a second). This means we need to find new alloys, or compounds, or we need to change the properties of materials by controlling their micro- and nanostructure. This will create some “super” materials, but likely nothing like the best materials of science fiction. Although I do hold out hope that there is some super material out there, waiting to be discovered.
My wish list includes a material with extreme radiation tolerance and blocking ability (without requiring extreme volume or mass) – something that can absorb or block high energy cosmic rays. This would make space travel much easier and safer. Without it, human space travel is likely to be extremely limited.
October 6, 2025
Using Sound to Modulate the Brain
The technique is called holographic transcranial ultrasound neuromodulation – which sounds like a mouthful but just means using multiple sound waves in the ultrasonic frequency range to affect brain function. Most people know about ultrasound as an imaging technique, used, for example, to image fetuses while still in the womb. But ultrasound has other applications as well.
Sound waves are just another form of directed energy, and that energy can be used not only to image things but to affect them. At higher intensities they can heat tissue and break up objects through vibration. Ultrasound has been approved to treat tumors by heating and killing them, or to break up kidney stones. Ultrasound can also affect brain function, but this has proven very challenging.
The problem with ultrasonic neuromodulation is that low intensity waves have no effect, while high intensity waves cause tissue damage through heating. There does not appear to be a window where brain function can be safely modulated. However, a new study may change that.
The researchers are developing what they call holographic ultrasound neuromodulation – they use many simultaneous ultrasound origin points that cause areas of constructive and destructive interference in the brain, which means there will be locations where the intensity of the ultrasound will be much higher. The goal is to activate or inhibit many different points in a brain network simultaneously. By doing this they hope to affect the activity of the network as a whole at low enough intensity to be safe for the brain.
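As a very rough sketch of the interference idea (toy numbers of my own choosing, not the study’s actual array geometry or parameters), summing waves from a handful of point sources shows how a few spots can end up with intensity far above what any single source delivers on its own:

import numpy as np

# Toy 2D model: several ultrasound point sources emitting at the same frequency.
# Where the waves arrive in phase they add constructively; elsewhere they largely cancel.
freq = 500e3        # 500 kHz carrier (an assumed, plausible transcranial frequency)
c = 1500.0          # approximate speed of sound in soft tissue, m/s
k = 2 * np.pi * freq / c

sources = [(0.00, 0.0), (0.01, 0.0), (0.02, 0.0), (0.03, 0.0)]  # source positions in meters

x = np.linspace(-0.02, 0.05, 300)
y = np.linspace(0.01, 0.08, 300)
X, Y = np.meshgrid(x, y)

field = np.zeros_like(X, dtype=complex)
for sx, sy in sources:
    r = np.sqrt((X - sx) ** 2 + (Y - sy) ** 2)
    field += np.exp(1j * k * r) / np.maximum(r, 1e-4)  # outgoing wave with 1/r falloff

intensity = np.abs(field) ** 2
print("peak-to-mean intensity ratio:", round(float(intensity.max() / intensity.mean()), 1))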
The study was essentially a proof of concept, just showing that the technique works without looking at any specific clinical outcome. And they found that it works (I probably would not be writing about it if it didn’t.) They were able to affect brain function with an ultrasound intensity one order of magnitude lower than usual. That is a game-changer for this technology.
The ultrasound works primarily by affecting the proteins that are part of ion channels on the surface of neurons. These are pores that control the flow of ions, like sodium, potassium, chloride, and calcium, from inside to outside the cell, and vice versa. Many of these pores are dynamic, changing in response to local conditions. For example, a voltage-gated sodium channel will open or close in response to local electrical activity. Altering the function of these ion channels can make a neuron more or less active, causing it to spontaneously fire or keep it from firing. In this way the researchers hope to either activate or inhibit entire networks within the brain.
There are two main types of uses for this technology. The first is to study brain function. If we can turn off or activate a brain network, then we can determine what that network does. It also helps us confirm what the networks in the brain are. This technology is perhaps best suited for this because it can simultaneously image the network and affect its function.
We can currently do this with electrical or magnetic stimulation. These use a very similar principle, tuning fields of electromagnetic energy so that they either stimulate or inhibit specific points in the brain. My searches indicate that the holographic technique has not been applied so far to, for example, transcranial magnetic stimulation. I wonder if this technique will translate to that technology. It remains to be seen if the ultrasound technique will be as good as or superior to magnetic or electrical stimulation. It also has the potential advantage of simultaneously imaging the network, which should make research easier.
The other potential application is to affect brain function clinically, to treat a disease. The low hanging fruit for such neuromodulation is inhibiting seizures or suppressing tremors (which are already targets of neuromodulation). Other applications require a bit more sophisticated knowledge of brain networks, which is why this research helps advance clinical applications. Other targets of neuromodulation with positive effects include depression, anxiety, and substance use. There is also a lot of early work in conditions like migraines.
Even if neuromodulation does not expand much beyond these early targets, it can prove to be a very useful treatment option, especially for those who do not respond to medication or cannot tolerate the side effects. But the real promise of neuromodulation is that there is no theoretical limit to its potential applications.
Pharmacological therapy has a theoretical limit – we can only affect receptors or therapeutic targets that already exist within the brain, and drugs cannot be more targeted than the receptors themselves. In other words, if we want to increase the activity of dopamine using a drug, we can affect dopamine activity in several ways, but dopamine is used in many different parts of the brain for many functions. We cannot specifically target only the one dopamine function we want. We have discovered dopamine subtypes, so we can get somewhat selective, but only as much as the existing biology allows.
With neuromodulation there is no such limit. We can activate or inhibit any network in the brain. The only limit is our knowledge of brain function and the precision of our technology. Neuromodulation techniques are all tunable already – we have a lot of control over their frequency, intensity, and location. The holographic technique gives us even more precision.
As the technology develops we will likely be able to alter brain function in more subtle and sophisticated ways. This could have profound implications for all psychiatric conditions, pain, and many neurological conditions. We could develop drugless anesthesia.
I do wonder how precise the transcranial techniques could get. Could this evolve into a non-invasive form of not just neuromodulation but brain-machine interface? Could one day someone put on a cap with hundreds of ultrasound producers, and that cap function as a neural interface? And since networks can also be imaged, this could theoretically function for two-way communication. It could then be used for any form of brain-machine interface, such as controlling robotic prosthetics. It could also function as a form of “neuro-reality” – like virtual reality but instead of putting on goggles the computer directly communicates to your brain. I would love to see where this technology will be in 100 or 500 years.
It is sad that in the short term this research is being slowed by the short-sighted political imperatives of our current administration. This research is largely funded by the NIH. It is also a result of an international collaboration of scientists from different institutions (as much of high-level neurological research is). As one of the authors notes:
“This study and the collaboration with researchers from New York University were principally financed by the United States National Institutes of Health. As this agency is currently under political pressure and is no longer awarding funding to international research partners, it is presently not possible for the researchers to continue their collaboration within the same framework.”
September 29, 2025
Creatures of Habit
We are all familiar with the notion of “being on autopilot” – the tendency to initiate and even execute behaviors out of pure habit rather than conscious decision-making. When I shower in the morning I go through roughly the identical sequence of behaviors, while my mind is mostly elsewhere. If I am driving to a familiar location the word “autopilot” seems especially apt, as I can execute the drive with little thought. Of course, sometimes this leads me to taking my most common route by habit even when I intend to go somewhere else. You can, of course, override the habit through conscious effort.
That last word – effort – is likely key. Psychologists have found that humans have a tendency to maximize efficiency, which is another way of saying that we prioritize laziness. Being lazy sounds like a vice, but evolutionarily it is probably about not wasting energy. Animals, for example, tend to be active only as much as is absolutely necessary for survival – what looks like laziness is really the conservation of precious energy.
We evolved to conserve mental energy as well. We are not using all of our conscious thought and attention to do everyday activities, like walking. Some activities (breathing, walking) are so critical that there are specialized circuits in the brain for executing them. Other activities are voluntary or situational, like shooting baskets, but may still be important to us, so there is a neurological mechanism for learning these behaviors. The more we do them, the more subconscious and automatic they become. Sometimes we call this “muscle memory” but it’s really mostly in the brain, particularly the cerebellum. This is critical for mental efficiency. It also allows us to do one common task that we have “automated” while using our conscious brain power to do something else more important.
The question is – how much of the time are our actions instigated and executed by habit vs conscious choice, and how much do our habits align with our conscious goals and intentions? A recent study set out to address this question. Participants (105) filled out an assessment 6 times a day for 7 days, indicating for the action they were currently engaged in whether it was initiated by habit or conscious choice, executed by habit or conscious choice, and whether it aligned with their goals. This is a smallish study, so the results need to be taken with care, but it does give us a window into this question.
They distinguished between initiating and executing, and say this is the first study to do so. By initiating they mean that habit triggered the behavior. By executing they mean that habit “facilitated the smooth execution of the behavior”. So my showering routine is both triggered and executed by habit. Writing this blog this morning is triggered by habit, but executed with conscious effort (I hope). Or I may decide to run an errand, without any habitual trigger, but then execute it mostly by habit. Aligning with their goals refers to whether or not the behavior is something they consciously want to do. For example, many people have “bad” habits, like smoking or eating a big dessert, that do not align with their goals but they do out of habit. While other behaviors, like exercise, may definitely align with their goals.
They found in this study that: “Most observed behaviors were habitually instigated (65%), habitually executed (88%), and aligned with intention (76%). Whether a person’s behavior was generally habitual or aligned with intention did not vary as a function of demographics. Exercise behaviors were more commonly habitually instigated, and less habitually executed, than other action types.”
So about two-thirds of behaviors are initiated out of habit (according to this self-report), while most actions, 88%, are carried out through mostly habitual behaviors. This study suggests we are largely creatures of habit. What are the practical implications of this, if any?
The authors suggest, and I think this is reasonable, that this may inform our attempts at changing our behaviors. For example, if we want to stop a bad habit, like smoking, we may need to identify and remove the habitual triggers. Relying on conscious effort is often a losing strategy, because it takes so much mental effort, which goes against our evolved tendency to minimize our mental effort. We tend to resort to habit. But if you realign your day to minimize habitual triggers, that reduces the mental effort necessary to avoid the bad behavior. Similarly, if you want to more consistently execute a desired behavior, like exercising, then you will want to tie that behavior to a common trigger. Make it part of your routine, rather than doing it “when you have the time”.
This also brings up the fact that not all behaviors were the same in terms of their percentages. Exercising was more habitually triggered than other behaviors, which means we were less likely to spontaneously decide to exercise. We needed the habitual trigger.
These findings are not radical, and mostly align with prior research. They are just one way of putting numbers on what psychologists have already identified. It does appear to be good advice, and aligns well with my life experience – if you want to do something regularly, then make it part of your routine. If you want to change your behavior, then change your routine. Do not rely on frequent conscious mental effort. That almost never works. That is simply not how our brains work. Some people can do it, if they are very highly motivated. But we can’t just talk ourselves into that motivation. Rationalizing our way to good behavior or out of bad behavior moves the needle only a tiny amount. The best strategy is to make the desired behaviors as easy as possible – by lowering any barriers to the behavior as much as possible and habitually triggering them. If we want to avoid undesirable behavior we need to do the opposite, create as many barriers as possible and remove any habitual triggers. This may take some up-front mental energy (ironically), but pays off in the long run.
September 23, 2025
Trump is not a Doctor, But He Plays One as President
Yesterday, Trump and RFK Jr had a press conference which some are characterizing as the worst firehose of medical misinformation to come from the White House in American history. I think that is fair. This was the presser we knew was coming, and many of us were dreading. It was worse than I anticipated.
I suspect much of this stems from RFK’s previous promise that in six months he would find the cause of autism so that we can start eliminating these exposures – and six months later is September. This was an absurd claim given that there has been, and continues to be, extensive international research into autism for decades, and there was absolutely no reason to suspect any major breakthrough in those six months. Those of us following RFK’s career knew what he meant – he believes he already knows the causes, that they are environmental (hence “exposures”) and include vaccines.
So Kennedy had to gin up some big autism announcement this month, and there is always plenty of preliminary or inconclusive research going on that you can cherry pick to support some preexisting narrative. It was basically leaked that his target was going to be an alleged link between Tylenol (acetaminophen) use in pregnancy and autism. This gave us an opportunity to pre-debunk this claim, which many did. Just read my linked article in SBM to review the evidence – bottom line, there is no established cause and effect and two really good reasons to doubt one exists: lack of a dose response curve, and when you control for genetics, any association vanishes.
But Trump had a different take:
The president on Monday repeatedly issued strong warnings that flew in the face of the recommendations of leading medical groups: “Don’t take Tylenol. Don’t take it. Fight like hell not to take it.” He urged pregnant women to “tough it out” when in pain, except in rare instances, such as a dangerously high fever.
He also said he saw no downside to not taking it. But Trump, as he acknowledged, is not a doctor, which is why he shouldn’t be giving medical advice from the White House lectern. There are potential downsides to not treating pain or fever in a pregnancy. Such decisions are always a matter of risk vs benefit, but non-experts tend to see only risk, and fail to consider the risk of not treating. Standard medical practice is already to use Tylenol (and any other drug) during pregnancy only when needed, in the minimal dose and for the shortest duration. But it is important to treat pain and fever, which can present risks to the pregnancy as well. It’s a balance, which takes medical knowledge and clinical judgement (neither of which Trump or RFK have).
They also wanted to use the news conference to announce a newly FDA-approved treatment for autism – leucovorin. Taken in isolation, this is just weird – why would the president and HHS secretary make such an announcement? Obviously this is just propaganda, trying to create the impression that they are doing something about autism, even though neither they nor their policies have anything to do with this. Leucovorin may be a legitimate treatment, but only for a small subset of those with autism who have a folate metabolism disorder. It’s really a treatment for cerebral folate deficiency, one of the manifestations of which may be autism.
But more concerning – the FDA just gave approval for leucovorin for CFD without any new evidence. There was no new study, just a review of prior studies. The evidence is generally considered to be preliminary. All of this makes the timing – right before RFK’s big September deadline – highly suspicious. Others have also noted that Dr. Oz, who was standing with Trump and RFK, holds a position in a company that produces leucovorin, but this could be a coincidence.
It’s pretty obvious what is going on here. RFK needed something to announce in September regarding autism, so he or his flunkies scraped the literature for anything they could present as an identified exposure risk and a possible treatment. Nothing else really explains the timing of these two announcements, especially given that the exposure (while more research is always nice) is largely debunked, and the treatment is narrow and preliminary.
But all of this is likely not the worst of it. Trump, apparently, decided to have one of his off-script rambling riff sessions – and his target, to the horror of the medical community, was vaccines. He said, “This is based on what I feel,” and “They pump so much stuff into those beautiful little babies, it’s a disgrace.” He went on to repeat anti-vaccine talking points, specifically that the MMR vaccine is not safe and that it should be split up into individual shots, that children get too many vaccines too quickly and they should be spaced out, and again suggested a link between vaccines and autism.
All of this is debunked nonsense. This is a pattern with Trump and a reason for extreme concern – he feels he can substitute his gut feelings for the consensus opinion of experts. His advice, if followed, is dangerous. There are good reasons why the vaccine schedule is what it is (again, it is the optimal evidence-based balance of risk vs benefit). Delaying vaccines leaves children susceptible to infectious disease for longer. There is currently no substitute for the MMR in the US, and no evidence for any increased risk of the combined vaccine. His advice, given with the bully pulpit of the presidency, is dangerous pseudoscience.
September 22, 2025
Scalable Quantum Computer
Quantum computers are a significant challenge for science communicators for a few reasons. One, of course, is that they involve quantum mechanics, which is not intuitive. It’s also difficult to understand why they represent a potential benefit for computing. But even with those technical challenges aside – I find it tricky to strike the optimal balance of optimism and skepticism. How likely are quantum computers, anyway? How much of what we hear is just hype? (There is a similar challenge with discussing AI.)
So I want to discuss what to me sounds like a genuine breakthrough in quantum computing. But I have to caveat this by saying that only true experts really know how much closer this brings us to large scale practical quantum computers, and even they are probably not sure. There are still too many unknowns. But the recent advance is interesting in any case, and I hope it’s as good as it sounds.
For background, quantum computers are different than classical computers in that they store information and do calculations using quantum effects. A classical computer stores information as bits, a binary piece of data, like a 1 or 0. This can be encoded in any physical system that has two states and can switch between those states, and can be connected together in a circuit. A quantum computer, rather, uses qubits, which are in a superposition of 1 and 0, and are entangled with other qubits. This is the messy quantum mechanics I referred to.
For this news item, the thing you most need to understand is entanglement. Two quantum systems are entangled if their states depend upon each other. So, for example, two particles might have entangled spin where if one particle is spin up the other must be spin down. One way to think of this is that their spins must cancel each other out. One of the challenges for quantum computers is that these entangled states are very fragile. They require very low energy (meaning cold) and very isolated systems, with very little noise. Any contact with matter or energy outside the entangled system causes the quantum state to break down. For this reason entangled states like this in qubits tend to be very short lived.
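For a concrete, if highly simplified, picture of entanglement, you can write a pair of maximally entangled qubits as a single four-component state vector and sample measurement outcomes from it. This is just bookkeeping with amplitudes (real hardware looks nothing like this), but it shows the defining feature – the two qubits’ outcomes are perfectly correlated:

import numpy as np

# Bell state (|00> + |11>) / sqrt(2): amplitudes for the outcomes 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probabilities = np.abs(bell) ** 2      # [0.5, 0, 0, 0.5]

rng = np.random.default_rng(42)
samples = rng.choice(["00", "01", "10", "11"], size=12, p=probabilities)
print(samples)  # only "00" and "11" ever occur: measure one qubit and the other is fixed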
For a quantum computer to work, the different qubits must be entangled but also able to communicate with each other – they have to be physically connected in some way. This creates a trade-off. The more isolated a qubit is, the less noise and therefore the longer the entanglement will last and the fewer errors will occur, but the less able it will be to communicate to other qubits. This means they have to be very close together in an isolated system. This is a very practical problem for quantum computers because it limits the size and scalability of the networks of qubits that can be created.
This is where the breakthrough comes in, as researchers have found a way to keep a quantum system isolated enough to maintain entanglement and minimize noise while simultaneously giving it the ability to communicate not only over longer distances, but theoretically scalable distances. This is how it works.
The researchers used the spin of phosphorus nuclei as their qubits. Nuclear spin has so far proven to be one of the best quantum computing media because it is very stable – it is a “clean” quantum system. They have previously demonstrated that they can maintain entanglement for 30 seconds (a relatively very long time for such systems) with <1% errors, which is also very good for a quantum system. The problem has been getting these atomic nuclei to communicate with each other. What they did is surround two phosphorus nuclei with a single electron, and this enables the two nuclei to communicate with each other. They were able to do this over a distance of 20 nanometers, which sounds like a short distance but this is the scale of modern silicon chips. So theoretically we could use existing manufacturing techniques to make quantum computer chips using this new system.
If this all pans out, it would be a massive breakthrough for quantum computers. This is because this system is scalable – you can keep adding phosphorus nuclei and connecting them together with other nuclei using this shared electron technique. The researchers say that using their technique they should be able to control the shape of the electron cloud, increasing the distance between nuclei, and including more than two nuclei together. The best case scenario is that this system leads to the ability to mass produce quantum computer chips with many qubits linked together.
These systems still won’t be on your desktop. They require expensive systems and supercooling. Quantum computers will be used by governments, corporations, and large institutions, not consumers. What can they do?
Quantum computers do not do the same kinds of calculations that classical computers do – they do a different kind of calculation. For certain kinds of problems this means that they can accomplish in seconds or minutes what would theoretically take a classical computer billions of years to complete. This can be used for things like modeling complex systems, like the climate. This could be a boon for researchers in many fields.
But perhaps the application which gets the most interest, and the reason why governments are so interested in funding research, is quantum encryption. Once someone has a powerful-enough and accurate-enough quantum computer, every encryption system in the world is suddenly obsolete. Encryption systems generally use sufficiently complex codes that classical computers could not crack them in any usable time frame. But a quantum computer might be able to crack the classical encryption codes in seconds, rendering them useless.
The solution is to use quantum encryption – you need your own quantum computer to make the encryption so that other quantum computers cannot crack it. So yes, we are in the middle of yet another technological arms race (Mr. President, we cannot allow a quantum computing gap!), with several countries racing to build quantum computers so that they will not suddenly become vulnerable to other countries with quantum computers. This is a very real security issue. Imagine if a hostile country had access to our military secrets, or could cause havoc in our energy grids. Obviously there are other ways to secure systems from hacking, which we do, but the ability to crack any encryption would still be a problem.
However this pans out, the physics here are interesting. For now, I am keeping my eye on quantum computing. It is probably a lot further away than it sounds from many of the news items, but at the same time we do appear to be making steady progress. This advance does sound significant, and we’ll just have to wait and see what happens.
September 15, 2025
The New Crank Assault on Scientists
This is not really anything new, but it is taking on a new scope. The WSJ recently wrote about The Rise of ‘Conspiracy Physics’ (hat tip to “Quasiparticular” for pointing to this in the topic suggestions), which discusses the popularity of social media influencers who claim there is a vast conspiracy among academic physicists. Back in the before time (pre world wide web), if you were a crank – someone who thinks they understand science better than they do and that they have revolutionized science without ever checking their ideas with actual experts – you would likely mail your manifesto to random academics hoping to get their attention. The academics would almost universally take the hundreds of pages full of mostly nonsense and place it in the circular file, unread. I myself have received several of these (although I usually did read them, at least in part, for skeptical fodder).
With the web, cranks had another outlet. They could post their manifesto on a homemade web page and try to get attention there. The classic example of this was the “Time Cube” – the site is now inactive but you can see a capture on the Wayback Machine. This site came to typify the format of such pages – a long vertical scrawl, an almost unreadable color scheme, boasting about how brilliant the creator is, and claims of a conspiracy of silence among scientists.
With web 2.0 and social media, the cranks adapted, and they have continued to adapt as social media and society evolve. Today, as pointed out in the WSJ article, there is a wave of anti-establishment sentiment, and the cranks are riding this wave. If you read the comments to the WSJ article you will see evidence of some of the contributing factors. There is, for example, a lot of “blame the victim” sentiment – blaming physicists, or scientists, academics, and experts in general. They did not do a good enough job of explaining their field to the public. They ignored the cranks and let them flourish. They responded to the cranks and gave them attention. They are too closed to fringe ideas that challenge their authority.
It’s fine to look for systemic issues that might have contributed to a problem. But I reject taking this to the point of blaming those factors as if they were the primary cause, rather than looking at the actual perpetrators and enablers. For example, one theme that crops up in the comments is global warming. Several commenters state that, well, if scientists can perpetrate the fraud of global warming, it’s no wonder the public doesn’t trust them. They are on the right track, but making the wrong diagnosis.
The problem is the well-funded and deliberate campaign of attacking science and scientists, sowing distrust in institutions and scientists, in order to manufacture doubt and confusion over climate science. This then spreads to all areas of science and expertise. Political radicals then learned to use this playbook for any issue – just attack the elites and the experts, so that you can slip in your preferred narrative.
As with many things, there is a lot of nuance here and there is a spectrum of quality. Of course, with any large institution, there are legitimate criticisms and problems. People are people, and they bring their flaws and biases everywhere. The conduct of science is never pristine. Individual institutions, or labs, or researchers may have issues. In hindsight, entire disciplines may have emphasized the wrong theories or ignored viable alternatives. Can you think of a system that is so air-tight that nothing like this would ever happen? Also, keep in mind, science and scholarship are really hard. I would also argue they are getting harder. In many ways we have picked all the low hanging fruit, and further progress is getting trickier in many areas.
Physics is a perfect example of this. I love hearing people criticize current physicists because they have yet to unify relativity with quantum mechanics, or more generally because they don’t seem to be making much progress. Why don’t we have a new fundamental theory of reality every second Tuesday? You have to realize how much physicists have accomplished in the last century and a half – they solved the problems that were plaguing classical physics, identified the fundamental units of matter, came up with new descriptions of spacetime, have a working theory on the origin of the universe, and produced the standard model of particle physics (that’s all). Now they are trying to push these theories deeper – to come up with a grand unifying theory of all of reality. Cut them a little freaking slack.
Into this situation, armed with conspiracy theories and a TikTok account, comes an army of Time Cube cranks. They think they can shoot from the hip, without doing all the hard work required to make even a minor contribution to the collective scientific effort, and then whine and moan that they are not being taken seriously by people who actually have some understanding of this massive complexity. It must be a conspiracy. It can’t be me. They are given attention by the Joe Rogans of the world. They are applauded by the political ideologues who see this as yet another way to attack the establishment (even when they are a part of it), and sow distrust. There are no facts, there is no shared reality. There are just competing narratives – so listen to my narrative which says all your problems are because you are a victim of a vast and deep conspiracy. Oh yeah, and my tribe’s religious beliefs are all true.
This leads us to the notion that NASA is constantly lying to hide the flat Earth (because reasons). “They” are hiding the reality that all our modern culture derives from Tartaria, which was wiped out just 150 years ago in a global mud flood (that is an actual conspiracy theory). They are also hiding an army of Einsteins who have actually already solved many of the trickiest problems facing science today.
The reality is quite the opposite of what these cranks claim. First, science and academia, while they may be institutionally conservative, tend to be open to new ideas as well – if you have the goods. But also – science is not monolithic. Each lab, each institution, each country has its own perspectives and biases. They are all in competition with each other. There are thousands of scientific journals. The problem is not that fringe ideas have no outlets – it’s that fringe ideas have too many outlets. We are being flooded with lots of low-quality science, and every whacky idea you can imagine. So what are they complaining about – that they can’t get published in a top tier journal? Welcome to the club. Almost no one can – that’s why they are top tier. That’s like a local (and solidly mediocre) rock band complaining that they cannot get booked at elite venues. It must be a conspiracy.
September 8, 2025
Upcycling Plastic and Reducing Mineral Waste
It is becoming increasingly clear, in my opinion, that we need to further shift from an overall economic system based on a linear model of extraction-manufacture-use-waste to a more circular model where as much waste as possible becomes feedstock for another manufacturing process. It also seems clear, after reading about such things for a long time, that economics ultimately drives such decisions. If the one-way road to waste is the cheapest pathway, that is the path industry will take. Unfortunately, this model historically has led to massive pollution, growing waste, and a changing climate. How do we switch to an economically viable circular economy, to minimize waste and environmental impact without decreasing standard of living? That is always the $64,000 question.
Here are two recent possibilities I came across. They have nothing to do with each other, but both represent possible ways to think differently about our priorities. The first one has to do with mineral extraction. The method currently used for developing mines and refining metals is driven entirely by economics. The percentage of a metal in ore that is deemed worth refining depends entirely on the value of that metal. Precious metals like gold may be refined from ore containing as little as 0.001% gold. Copper ore typically has 0.6% copper. High-grade iron ore has the highest percentage at about 50%. Any mineral present at too low a concentration to be economically viable is considered a by-product, and simply becomes part of the ore waste.
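To make those grades concrete, here is the simple conversion to metal per tonne of ore, using the percentages above:

# Grade (percent metal by mass) -> kilograms of metal per tonne (1,000 kg) of ore
grades_percent = {"gold": 0.001, "copper": 0.6, "high-grade iron": 50.0}
for metal, pct in grades_percent.items():
    kg_per_tonne = 1000 * pct / 100
    print(f"{metal}: {kg_per_tonne:g} kg per tonne of ore")
# gold: 0.01 kg (10 grams), copper: 6 kg, high-grade iron: 500 kg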
The industry has largely evolved to pick the low-hanging fruit – find high-grade ores for specific metals, perhaps recover some high value lower concentration metals, and the rest is waste. To meet growing needs, new mines are opened. Right now the US has 75 “hard rock” mines in operation. Opening a new mine, assuming a site with high-grade ore is identified, takes on average 18 years and costs up to a billion dollars. Mining waste also has to be managed. This is not just rock that can be dumped anywhere; ore is often crushed into a powder to be refined. Properly disposing of the waste (which is a complex issue – see here) can also be costly and have environmental impact. Further, as we deplete high-grade ore, new mines often go after lower and lower quality ore, with more waste.
The new proposal is to extract more and more minerals from existing ore rather than just opening up more mines. If the US, for example, increased our mineral extraction per unit of ore by 1%, that could significantly reduce our dependence on foreign minerals. According to the study, if we increased it to 90% we would be mineral independent with existing mines (this may not be realistic, but is based simply on what ores are present). What would this take?
The authors recommend that first we do a more detailed survey of which minerals are present in which existing ore. We also need to invest in research and development of commercially viable refining techniques that will extract even low concentration minerals from ore. This is where governments can play a role – shifting the economics from a process resulting in maximal waste to one resulting in minimal waste. They can invest in R&D, they can also provide economic incentives to invest in improved refining facilities rather than opening new mines (not to imply we will never open new mines if it makes sense to do so). That’s the carrot approach. They may also apply the stick of taxing companies for the environmental impact of their mining activity.
Also – the economics will shift on their own. As demand for minerals increases, it becomes more commercially viable to extract minerals from lower quality ore, which can be done by opening up new mines (which we do not want) or by refining more metals from existing ore (which we do want). Ultimately it’s also about choices and priorities. What kind of world do we want to live in? This, of course, also requires having full and accurate information about the total impact of the choices we collectively make.
The second news item is about plastic – specifically PET (polyethylene terephthalate) plastic. Plastic waste is a huge problem globally, with much ending up as microplastics in the environment. About 9% of plastic is recycled, while 12% is incinerated, and the remaining 79% winds up in landfills. This is not sustainable. At the same time, plastic is a very useful material, which is why we use so much of it. Some plastics are also hard to recycle – PET plastic, highly degraded plastics, and mixed plastics. This is where the new study comes in.
They report: “Herein, we demonstrate a chemical upcycling of PET waste into materials for CO2 capture via aminolysis. The aminolysis reaction products—a bis-aminoamide (BAETA) and oligomers—exhibit high CO2 capture capacity up to 3.4 moles per kilogram as a stand-alone organic solid material. BAETA shows strong chemisorption featuring high selectivity for CO2 capture from flue gas (5 to 20% CO2) and ambient air (~400 parts per million CO2) under humid conditions. Our thermally stable material (>250°C) enables CO2 capture at high temperatures (up to 170°C) for multiple cycles.”
Essentially they can chemically treat the PET to “upcycle” it to BAETA, which can be used for carbon capture. BAETA is stable at temperatures it would encounter in manufacturing flues and can store a lot of CO2. The result can then be sequestered, or the BAETA can be heated to release the CO2 for other use, freeing the BAETA to be used again. It’s still too early to say if this will become a major manufacturing process, but it does seem plausible.
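For a rough sense of scale, converting the reported 3.4 moles per kilogram into mass of CO2 (a simple unit conversion, not a figure from the paper):

capacity_mol_per_kg = 3.4        # CO2 capture capacity quoted in the abstract
co2_molar_mass_g = 44.01         # grams per mole of CO2
grams_per_kg = capacity_mol_per_kg * co2_molar_mass_g
print(f"~{grams_per_kg:.0f} g of CO2 captured per kg of BAETA per cycle")  # ~150 g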
Ideally this could divert a significant proportion of PET plastic from landfills, while providing a useful material for carbon capture, either ambiently or in factory flues. This would also not detract from existing plastic recycling efforts as it uses different plastics.
Individually, developments like these will likely make little difference. They are just examples of the kinds of changes we can make to shift our economy overall into a more circular paradigm, minimizing waste, the need for endlessly increasing raw material streams, and overall environmental impact. Ultimately it’s just about doing things smarter. Pure economic forces are not enough. While they do favor things like efficiency, they also can incentivize externalizing costs, degrading the environment, and dumping waste. We need to leverage economic forces with smart regulations and incentives to make the final equation work for everyone.
September 4, 2025
Charting The Brain’s Decision-Making
Researchers have just presented the results of a collaboration among 22 neuroscience labs mapping the activity of the mouse brain down to the individual cell. The goal was to see brain activity during decision-making. Here is a summary of their findings:
“Representations of visual stimuli transiently appeared in classical visual areas after stimulus onset and then spread to ramp-like activity in a collection of midbrain and hindbrain regions that also encoded choices. Neural responses correlated with impending motor action almost everywhere in the brain. Responses to reward delivery and consumption were also widespread. This publicly available dataset represents a resource for understanding how computations distributed across and within brain areas drive behaviour.”
Essentially, activity in the brain correlating with a specific decision-making task was more widely distributed in the mouse brain than they had previously suspected. But more specifically, the key question is – how does such widely distributed brain activity lead to coherent behavior? The entire set of data is now publicly available, so other researchers can access it to ask further research questions. Here is the specific behavior they studied:
“Mice sat in front of a screen that intermittently displayed a black-and-white striped circle for a brief amount of time on either the left or right side. A mouse could earn a sip of sugar water if they quickly moved the circle toward the center of the screen by operating a tiny steering wheel in the same direction, often doing so within one second.”
Further, the mice learned the task, and were able to guess which side they needed to steer towards even when the circle was very dim, based on their past experience. This enabled the researchers to study anticipation and planning. They were also able to vary specific task details to see how the change affected brain function. And they recorded the activity of single neurons to see how their activity was predicted by the specific tasks.
The primary outcome of the research was to create the dataset for further study, so many of the findings that will eventually come out of this data are yet to be discovered. But we can make some preliminary observations. As I stated above, tasks involving sensory input, decision-making, and motor activity are widely distributed throughout the mouse brain. This activity involves a great deal of feedback, or crosstalk, across many different regions. What implications does this have for our understanding of brain function?
I think this moves us in the direction of understanding brain function more in terms of complex networks rather than specific modules or circuits carrying out specific tasks. This is not to say there aren’t specific circuits – we know that there are, and many have been mapped out. But any task or function results from many circuits dynamically networking together, in a constant feedback loop of neural activity. This makes reverse-engineering brain activity, even in a mouse, extremely complex. Any simple schematic of brain circuits is not going to capture what is really going on.
Another observation to come out of this data is that sensory and motor areas are more involved in decision-making than was previously suspected. The motor cortex is not just the final destination of activity, where decisions are translated into action – it is involved in planning and decision-making as well. It remains to be seen what the ultimate implications of this observation are, but it is interesting from the perspective of the notion of embodied cognition.
Embodied cognition is the idea that our thinking is inextricably tied to our physical embodiment. We are not a brain in a jar – we are part of a physical body, and our brains map to and are connected to that body. They are one system. We interact with the world largely physically, and therefore we think largely physically. Even our abstract ideas are rooted in physical metaphors. This is why an argument can be “weak”, and a behavior might be “beneath” you, and we equate morality to physical disgust. A “big” idea is not physically big, but that is how we conceptualize it.
Embodied cognition might go even further, however. Our physical senses and the brain circuits involved in movement might also be involved in thinking. This makes sense – we plan our movements with the pre-motor cortex, for example.
If we look at this question evolutionarily, what this may mean is that our higher and more abstract cognitive functions are evolutionary extrapolations or extensions of our physical functions. Mammalian brains first evolved to receive and interpret sensory information and to control our movements. Our ability to perceive and to move evolved greater and greater complexity, while emotional centers also evolved to drive basic behaviors, like hunger, fear, and mating. This all had to function as a coherent system, so circuits evolved with more and more integration of sensory and motor pathways, including a subjective experience of embodiment, ownership, and control. Eventually more abstract cognitive functions became possible, including social interaction, more sophisticated planning and anticipation, etc. But most things in evolution do not come from nowhere – they are extensions of existing functions. So it makes sense from this perspective that abstract cognition evolved out of physical cognition, and these roots run deep.
It is also interesting to think about the implications of this research in the context of AI. First, I suspect that AI will help us make sense of the massive amount of data and complexity we are seeing with mouse brain activity. Imagine when we attempt this level of detail with a human brain and human decision-making. Further, we can use this information to construct (by whatever methods) AI that more and more mimics the functioning of a human brain. These projects will likely feed off each other – using AI to understand the brain then using the resulting information to design AI. It’s possible that something like this will lead to an AI that replicates the functioning of a human brain.
But here is a question – if we do take this research to its ultimate conclusion, will that AI need to be incorporated into a physical body in order to fully replicate a human brain? Will a virtual body suffice? But either way – will it need sensory input and motor output in order to function like a human brain? What would a disembodied human brain or simulation be like? Would it be stable?
Whatever the result, it will be fascinating to find out. I think the one safe prediction is that we still have many surprises in store.
September 2, 2025
Detecting Online Predatory Journals
The World Wide Web has proven to be a transformative communication technology (we are using it right now). At the same time there have been some rather negative unforeseen consequences. Significantly lowering the threshold for establishing a communications outlet has democratized content creation and allows users unprecedented access to information from around the world. But it has also lowered the threshold for unscrupulous agents, allowing for a flood of misinformation, disinformation, low quality information, spam, and all sorts of cons.
One area where this has been perhaps especially destructive is in scientific publishing. Here we see a classic example of the trade-off dilemma between editorial quality and open access. Scientific publishing is one area where it is easy to see the need for quality control. Science is a collective endeavor where all research is building on prior research. Scientists cite each other’s work, include the work of others in systematic reviews, and use the collective research to make many important decisions – about funding, their own research, investment in technology, and regulations.
When this collective body of scientific research becomes contaminated with either fraudulent or low-quality research, it gums up the whole system. It creates massive inefficiency and adversely affects decision-making. You certainly wouldn’t want your doctor to be making treatment recommendations on fraudulent or poor-quality research. This is why there is a system in place to evaluate research quality – from funding organizations to universities, journal editors, peer reviewers, and the scientific community at large. But this process can have its own biases, and might inhibit legitimate but controversial research. A journal editor might deem research to be of low quality partly because its conclusions conflict with their own research or scientific conclusions.
There is no perfect answer. The best we can do is have multiple checks in the system and to make a carefully calibrated trade-off between various priorities. The system we have is flawed, but it basically works. High quality research tends to gravitate toward high quality journals, which have the highest “impact” on the community. Bad research generally doesn’t replicate well and will tend to get picked apart by experts. A new idea that is having trouble breaking through will tend to break through eventually – the virtue of being correct usually wins out in the end.
But mostly working out in the end isn’t enough. We also want to know how efficient the whole system is. How quickly are fraud and bad research weeded out? We also want to make sure we are moving in the direction of improved research quality, and that the outcome of scientific research is translating to our society effectively and efficiently. This means we have to track trends – and one of those trends is the rise of so-called “predatory” journals.
Predatory or similar scientific journals result from basically two things – the ease of creating an online journal because of the web, and the open-access journal business model. The traditional journal model is based on subscriptions and advertising, which benefit from high quality, high profile, and high impact research. This has its trade-offs too, but overall it’s not a bad model. The open-access business model is to charge researchers for publishing their research, then make the results open-access to the world. This has the benefit of making scientific research open to all, and not hidden behind a paywall. But it creates the perverse incentive to publish lots of articles, regardless of quality. In many cases, that is what is happening.
Scientists and academics, once they realized the issue, have dealt with it by vetting new journals for their process and quality. They can then create essentially a black list of demonstrably low quality or even predatory journals (those that will publish and even solicit any low quality study to collect the publication fee, and then publish online with little or no editorial filter). Or you can create a white list of journals that have passed a thorough vetting process and meet minimum quality standards.
The problem is that this is a lot of work. New predatory journals are easy to create. Once they are identified and blacklisted, the company behind the journal can simply create a new journal with a slightly different name and URL. They are essentially outstripping the ability of academics to evaluate them.
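To make the contrast concrete, here is a minimal sketch (in Python, with invented journal names) of why a blacklist is so easy to evade while a whitelist is not. A blacklist is default-allow: anything not yet identified slips through. A whitelist is default-deny: anything not yet vetted is treated as unverified.

```python
# Minimal sketch of blacklist vs whitelist screening.
# Journal names are invented for illustration only.

BLACKLIST = {"International Journal of Advanced Everything"}  # known predatory titles
WHITELIST = {"Journal of Properly Vetted Science"}            # titles that passed vetting

def passes_blacklist(journal: str) -> bool:
    # Default-allow: anything not yet identified as predatory gets through.
    return journal not in BLACKLIST

def passes_whitelist(journal: str) -> bool:
    # Default-deny: anything not yet vetted is treated as unverified.
    return journal in WHITELIST

# A predatory publisher relaunches a blacklisted journal under a new name.
clone = "International Journal of Advanced Everything II"
print(passes_blacklist(clone))  # True  -- the renamed clone evades the blacklist
print(passes_whitelist(clone))  # False -- it still fails the whitelist
```

The blacklist has to be updated for every new clone; the whitelist only has to change when a journal earns its way on.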
One attempt to rein in the proliferation of such journals uses a new AI program to screen journals for the probability that they are predatory. The researchers behind this effort published the results of their first search. They screened over 15,000 open-access journals. That fact alone is a bit alarming – that is a lot of scientific journals. The AI flagged about 1,300 of them as probably predatory; human evaluators then looked at those 1,300 and confirmed that about 1,000 of them were predatory – so the AI produced about 300 false positives. Keep in mind, these thousand journals collectively publish hundreds of thousands of articles each year, which generate millions of citations.
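For a sense of what those numbers mean, here is the arithmetic, using the approximate figures reported above (note that this says nothing about false negatives – predatory journals the AI failed to flag):

```python
# Back-of-the-envelope arithmetic using the approximate figures reported above.
screened  = 15_000   # open-access journals screened by the AI
flagged   = 1_300    # flagged as probably predatory
confirmed = 1_000    # confirmed predatory by human evaluators

false_positives = flagged - confirmed      # ~300 journals flagged in error
precision = confirmed / flagged            # fraction of flags that held up
flag_rate = flagged / screened             # fraction of screened journals flagged

print(f"False positives: ~{false_positives}")
print(f"Precision of the AI's flags: ~{precision:.0%}")        # roughly 77%
print(f"Share of screened journals flagged: ~{flag_rate:.0%}") # roughly 9%
```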
There are lots of systemic issues at work here, and predatory journals are partly a symptom of these problems. But they significantly exacerbate these issues and are making it impossible for legitimate researchers to keep up. Developing new tools for dealing with this flood of low-quality research is essential. This one tool will not be enough, but perhaps it can help.
I think ultimately, a whitelist of properly vetted science journals, kept to a stringent standard of quality, is the only solution. This won’t stop other journals from popping up, but at least researchers and those in decision-making positions will be able to know whether a piece of science they are relying on has been properly vetted. (Again, no guarantee it is correct, but at least it went through some legitimate process.)
Another aspect of this issue is the communication of science to the public. The existence of large numbers of low-quality journals, easily accessible and shareable online, means that anyone can find research to support whatever position they want to take. Further, AI is trained on this flood of low-quality research. This makes it almost impossible to have a conversation about any scientific topic – discussions tend to devolve into dueling citations. Having open access to scientific studies does not make everyone a scientist, but it makes it easy to pretend that you are.
August 25, 2025
Brightest Fast Radio Burst Discovered
The universe is a big place, and it is full of mysteries. Really bright objects that can be seen from millions or even billions of light years away can therefore be found, even if they are extremely rare. This is true of fast radio bursts (FRBs), which are extremely bright and very brief flashes of light at radio frequencies. They typically last about one thousandth of a second (one millisecond). Even though this is very brief, they still represent a massive energy output, and their origins have yet to be confirmed.
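One reason the millisecond timescale matters is a standard back-of-the-envelope argument (general physics, not specific to any one study): a source cannot vary coherently faster than light can cross it, so a one-millisecond burst must come from a region no more than a few hundred kilometers across – the scale of compact objects like neutron stars.

```python
# Rough light-travel-time limit on the size of an FRB's emitting region,
# assuming a ~1 millisecond burst duration.
c = 299_792_458      # speed of light, m/s
duration_s = 1e-3    # ~1 millisecond

max_size_km = c * duration_s / 1000
print(f"Maximum source size: ~{max_size_km:.0f} km")  # ~300 km
```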
Recently astronomers have detected the brightest FRB so far seen, and it was relatively close, only 130 million light years away. That may seem far, but most FRBs are billions of light years away (again, indicating that they are relatively rare, because we need a huge volume of space to see them). Because this FRB was bright and close, it gives us an opportunity to examine it in more detail than most. But – this is also made possible by recent upgrades to the equipment we use to detect FRBs.
The primary instrument we use is CHIME (Canadian Hydrogen Intensity Mapping Experiment). As the name implies, this was developed to map hydrogen in the universe, but it is also well suited to detecting FRBs. Since 2018 it has detected about 4,000 FRBs. But because they are so brief, it is difficult to localize them precisely. We can see what direction they are coming from, and if that direction intersects with a galaxy we can say the burst probably came from that galaxy. But astronomers want to know where within that galaxy the FRB is coming from, because that may provide clues to confirm their origin. So they built “outriggers” – smaller versions of CHIME spread around North America that effectively widen the array’s baseline and greatly sharpen its localization precision (a rough sense of the scaling is sketched below). It was this new setup that detected the recent FRB. What did they find?
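First, the promised rough sense of the scaling. This assumes the textbook relation for an interferometer – angular resolution is roughly wavelength divided by baseline – along with CHIME’s 400–800 MHz observing band and purely illustrative baselines (~100 m for a single site versus ~3,000 km across the outrigger network):

```python
import math

# Rough diffraction-limit scaling: angular resolution ~ wavelength / baseline.
# The baselines below are illustrative assumptions, not the instrument's specs.
c = 3.0e8                   # speed of light, m/s
freq_hz = 600e6             # mid-band of CHIME's 400-800 MHz range
wavelength_m = c / freq_hz  # ~0.5 m

def resolution_arcsec(baseline_m: float) -> float:
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600

print(f"Single site (~100 m baseline):   ~{resolution_arcsec(100):.0f} arcsec")  # ~1,000 arcsec
print(f"Outriggers (~3,000 km baseline): ~{resolution_arcsec(3e6):.2f} arcsec")  # ~0.03 arcsec
```

Sub-arcsecond precision is what lets astronomers point to a specific neighborhood within a host galaxy rather than just the galaxy itself.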
They were able to localize the FRB to its location within its host galaxy – the outer edge of what appears to be a star-forming region. At the middle of a star-forming region, the stars are very young, and as you go farther from the middle they get older. So that means any stellar source here is likely moderately old – not newly minted, but not ancient (as stars go). What does this tell us?
Let’s back up a bit and talk more about the current theories of FRB formation. The first FRB was detected in 2007. Since then we have detected over 4,000, mostly with CHIME. The vast majority of FRBs are one-offs, meaning they happen once and never repeat. A small subset, however, are repeaters – they occur more than once from the same location. Most of these repeat at irregular intervals and then eventually stop. However, a still smaller subset are regular repeaters. The most recent bright FRB is not a repeater (at least not within the last 6 years of observation).
Astronomers debate whether repeaters and non-repeaters have the same or a similar source, or whether they are entirely different phenomena. But they acknowledge that they simply do not know.
In 2020 astronomers also detected another important piece of this FRB puzzle – the first FRB from within our own galaxy. Because it was so close, we could see the object it appeared to come from – a magnetar. Magnetars are rare and awesome objects as well. So far we know of only about 40 magnetars (with a few more candidates being examined). They are essentially neutron stars, the remnants of large stars and the second densest objects in the universe after black holes. This rare subset of neutron stars has extremely powerful magnetic fields, a trillion times more powerful than Earth’s. If you were within 1,000 km of a magnetar you would die just from the magnetic field.
Magnetars themselves are a bit mysterious. We have theories as to what causes their powerful magnetic fields, with the most prominent being a magnetohydrodynamic dynamo – the rapid spinning of dense charged material in the neutron star. Magnetars pump out a tremendous amount of energy, including powerful gamma-ray and X-ray radiation. For this reason they only last thousands to millions of years before their magnetic field fades away and they essentially become normal neutron stars. It is this short lifespan that makes them rare in the universe.
So – the FRB from within our own galaxy was found to be coming from a magnetar. This is likely not a coincidence (statistically speaking), since magnetars are so rare. Therefore astronomers believe that magnetars are the likely source of at least some, if not all, FRBs. Again, the repeating and non-repeating ones may have different sources. This makes sense in that magnetars are very powerful objects and FRBs are very powerful bursts of energy, so they are a plausible source.
At this point astronomers plan to continue to use the enhanced CHIME detector with the outriggers to monitor the skies for more FRBs, and try to locate them precisely within their galaxies. From this data we may see patterns that tell us something about their likely origin. We should see several hundred FRBs each year, so they will have lots of data.