Jay L. Wile's Blog

August 13, 2018

LED Lights Might Pose A Hazard for Vision


Wavelengths coming from various light sources (image modified from the Tosini 2016 article linked below)

A very good friend showed me an article from the University of Toledo. It reports on a study that demonstrates how blue light might be damaging to the light-sensing cells found in your eye. I didn’t know anything about this, so I decided to look into the research that has been done on the effects of blue light on vision. I found this excellent review article, which discusses what has been figured out so far. The short answer is that we don’t know anything for certain, but there is some evidence that long-term, chronic exposure to significant amounts of blue light could be damaging to your eyes.

Several animal studies have shown that exposure to blue light can increase the animal’s risk of age-related macular degeneration (AMD) and other eye problems. However, studies on people haven’t been clear. Some studies have shown a relationship between long-term exposure to the sun’s light and AMD, and it is assumed that the blue light given off by the sun is the culprit. However, a case-control study in Australia indicated that it might not be exposure to the sun’s light that is causing the relationship. Instead, it indicated that sensitivity to glare and difficulty developing a tan are the actual indicators of higher AMD risk, and studies that show a relationship between the sun’s light and AMD might not be controlling properly for those variables.


The study that was discussed in the University of Toledo article linked above didn’t assess the damage blue light causes to human eyes. Instead, the authors assessed the damage on human cells. However, they didn’t use actual light-sensing cells from a human eye, because those cells cannot be kept alive and grown in the laboratory. They used HeLa cells, which are a line of cells that came from cancerous tissue taken from a woman named Henrietta Lacks more than 65 years ago. The cells continue to reproduce to this day, so this line of cells is often referred to as “immortal.” The story behind the acquisition of the cells is the topic of a very sad and interesting book as well as a pretty lousy movie.



Since HeLa cells didn’t come from Mrs. Lacks’s eyes, the authors of the study added a chemical called retinal, which is normally found in the eye and is necessary for the light-sensing cells there to work. Addition of the chemical alone had no effect on the cells. However, when the cells with that chemical were then exposed to blue light, the result was damage to the cells. Exposing the cells to blue light without adding the chemical produced no effect. Thus, the authors conclude that the chemical and blue light are both necessary to cause cell damage. Since retinal is found in the light-sensing cells of the eye, blue light probably causes damage in those cells.


How worried should people be? Once again, that’s not clear. Our biggest exposure to blue light comes from the sun. Look at the graph at the top of the page and concentrate on the wavelengths on the left (380-500 nm). Those are the violet and blue wavelengths. Notice that the sun’s light levels off at high intensity right around 465 nm, which is solidly in the blue range. Notice also that in order to be compared to the light given off by artificial sources, the sun’s light intensity had to be divided by 5.5. That’s because the sun’s light is much, much brighter than artificial light.


Now look at the light intensity coming from incandescent lights (the “old-style” light bulbs that use glowing filaments). It is very low in the blue region and highest in the red region (wavelengths of 620-750 nm). So in terms of blue light, incandescent lights aren’t much of a problem. Fluorescent lights are worse, with a strong peak at about 440 nm, which is on the border between violet and blue. But notice the intensity coming from LEDs. LEDs give you more blue light (450-470 nm) than any other color. That’s why some people are worried. LEDs are used in more and more home lighting applications, since they save money. They are also used in many cell phones and television/computer screens. Thus, when we spend a lot of time looking at those kinds of screens, we are getting more blue light than any other color of light.
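
If it helps to see those ranges laid out explicitly, here is a minimal sketch in Python (the band boundaries are approximate values I chose for illustration; the borders between colors are not sharp):

```python
# Approximate visible-light bands in nanometers (real boundaries are fuzzy).
BANDS = [(380, 450, "violet"), (450, 500, "blue"), (500, 570, "green"),
         (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")]

def color_band(wavelength_nm):
    """Return the rough color name for a wavelength given in nanometers."""
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible range"

print(color_band(465))  # 'blue'   -> near the sun's plateau mentioned above
print(color_band(440))  # 'violet' -> right at the violet/blue border
print(color_band(700))  # 'red'    -> where incandescent bulbs are strongest
```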


As a side note, this is why different light sources look different. They all give you a mixture of many colors, and when many colors are mixed together, we see white. However, you have no doubt noticed that the light coming from an incandescent light bulb looks “warmer” than the light coming from a fluorescent light bulb. That’s because the light from an incandescent light bulb has a lot more red, orange, and yellow. In the same way, LED lights look white, but they seem “bluer” than other light sources because they produce more blue light than anything else. The light coming from the sun has the most “even” mix of colors, so it looks the most white.


So should we be worried about looking at LED screens a lot? Probably. While we don’t really know what the long-term risks are, there is enough evidence to indicate that too much blue light can be a problem. Of course, the sun is shining a lot more blue light at you than any LED screen, because the sun’s light is so intense. However, LED screens are clearly a potent source of blue light, and many people spend a lot more time looking at those screens than being in the sun. You can reduce the severity of the problem, however, because many companies produce blue filters for computer screens and cell phones. There are also blue-light-filtering glasses that you can wear.


Hopefully, more research will eventually indicate what the real threat is from various light sources, but for now, it’s probably best to be overly cautious.


August 9, 2018

Pre-Kindergarten Education Might Cause Long-Term Disadvantages!



Many educators (and even more politicians) think that getting children into school early produces great educational benefits. However, the data suggest otherwise. Perhaps the most famous results come from the Head Start study by Puma and others. It found that while the Head Start preschool program produced some short-term benefits, those benefits disappeared for most of the students by third grade. Overall, then, the Head Start program had no lasting effect for most students.


To me, this makes perfect sense. After all, if you give a student some education before most of his or her peers, the student will be “ahead” when he or she starts kindergarten. However, since all the students are following the same curriculum, this “head start” doesn’t do much good, because in the end, the students with the advantage are held back. Rather than using the advantage to push them to learn even more, they are taught the same things that are being taught to the other children. As a result, the only real advantage is that the learning is easier at first. Also, since they have already been “socialized” into the group-learning mode used by schools, they don’t have to adjust to it. Once the others have adjusted, however, that benefit also goes away.


My publisher recently made me aware of another study that comes to an even less-promising conclusion. This study comes from the Tennessee Voluntary Pre-K program, a state-run pre-kindergarten (pre-K) program that focuses on at-risk children. The authors followed a total of 2,990 students from kindergarten through 3rd grade, and the results weren’t in line with the expectations of the educators and politicians who promote pre-K education.



The students in the study were all applicants to the pre-K program. However, there were only 1,852 spots available. Those spots were randomly assigned to the applicants, resulting in 1,138 who applied but could not get into the program. Those students were the control group, and their educational outcomes (based on state records) were compared to those of the 1,852 students who could get in. Not surprisingly, when kindergarten started, the students who had been in the pre-K program were ahead of those who had not been in the pre-K program. However, those benefits were fairly short-lived.


By the third grade, the control children actually outperformed the children who had been in the pre-K program. While the difference was small, it was statistically significant for both math and science. In other words, from a long-term perspective, enrollment in the pre-K program produced a slight disadvantage, at least in science and math. Please note that the authors also got permission from the parents to evaluate 1,076 of the students in a more detailed fashion than just looking at their school records. They saw nothing in this more detailed analysis that was different from the analysis using just school records.


The authors sum up the situation well in their conclusion:



We are mindful of the limitations of any one study, no matter how well done, and the need for a robust body of research before firm conclusions are drawn. Nonetheless, the inauspicious findings of the current study offer a cautionary tale about expecting too much from state pre-k programs. The fact that the Head Start Impact study – the only other randomized study of a contemporary publicly funded pre-k program – also found few positive effects after the pre-k year adds further cautions (Puma et al., 2012). State-funded pre-k is a popular idea, but for the sake of the children and the promise of pre-k, credible evidence that a rather typical state pre-k program is not accomplishing its goals should provoke some reassessment. (emphasis mine)


I have a suggestion about how to make the gains in pre-K education last longer: Allow students with a pre-K education to learn at their own pace, regardless of what the other students are learning. That way, they can make the most of what they have already learned. The problem, of course, is that individualized learning is very hard to implement in a school, with the possible exception of a Montessori-type school. However, if you are homeschooling, it comes naturally. This is one of the many reasons why homeschooled students excel compared to their publicly- and privately-schooled counterparts. Generally, their parents start their education early, and because they are allowed to learn at their own pace, those early gains continue to pay off in the later years.


August 6, 2018

Another Illustration of How Little We Know About Climate Forecasting



When you read about global warming, aka “climate change,” you often hear about climate models that tell us the world will reach dangerously high temperatures if people don’t sharply reduce their use of carbon-dioxide-emitting energy sources. However, these models are built using our current understanding of climatology, which is incomplete at best. As a result, there is a lot of uncertainty in their forecasts. Indeed, they seem to predict more warming than has actually occurred so far.


Why is that? The simple answer is that we don’t understand climate science very well, and as a result, it is hard to predict what effects human activity will have on future climate. Scientists, however, need a more detailed answer. What exactly is wrong with our understanding of climate science? Christopher Monckton, Third Viscount Monckton of Brenchley, thinks he has found one reason. Whether or not he is correct, his assertion illustrates how little we know about forecasting climate.


Now, of course, Viscount Monckton is not a climate scientist. He has a master’s degree in classics and a diploma in journalism studies. He served as a Special Advisor to Prime Minister Margaret Thatcher and is a well-known skeptic of the narrative that global warming is a serious problem that has been caused by human activity. Nevertheless, he has studied climate science extensively and thinks he has found a “startling” mathematical error that is common to all climate models. He is currently trying to get a paper that makes his case published in the peer-reviewed literature, but as the article to which I linked shows, the reviewers have serious objections to its main thesis.



Viscount Monckton essentially says that climate models are overstating warming because they are not taking climate feedbacks into account properly. When the average temperature of the earth increases, it affects many processes that occur on the earth, and some of those processes, in turn, affect the climate. For example, as temperatures increase, some soil that has been frozen for many years starts to thaw, releasing more greenhouse gases. That, in turn, will cause even more warming. This is an example of a positive climate feedback – a response to increasing temperature that will further increase temperature. Please note that while the idea of thawing soil further warming the planet is the conventional wisdom, actual experiments demonstrate the opposite.


Climate models, of course, have to take such feedbacks into account, and Viscount Monckton is saying that they are doing it incorrectly. Climate models right now judge the strength of the feedbacks based on the change in global temperature. If the earth’s temperature rises by 1 degree, then the feedback should be calculated based on that and that alone. Viscount Monckton says that this isn’t proper. In other applications where feedbacks are important, the effect of the feedback is based on the actual value of what is changing. In climate models, then, you have to calculate what the feedbacks are already doing at the current temperature, and then see how they change at the new temperature.


But wait a minute, isn’t that doing the same thing as basing the feedbacks on the change in temperature? Not according to Viscount Monckton. He says that when you base feedbacks on the change in temperature, you are ignoring the current state of the feedbacks. As a result, you are amplifying the effect that a changing temperature has on them. What you need to do is think about the current state of the feedbacks based on the current temperature, and then you have to see what change occurs for any new temperature. That results in much weaker effects from the feedbacks.
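
To make the disagreement concrete, here is a minimal sketch in Python of the two ways of inferring the feedback fraction. The round numbers are my own illustrative values, not Viscount Monckton’s actual figures or any model’s real code; the standard amplification formula is T_equilibrium = T_direct / (1 - f):

```python
def equilibrium_warming(direct_warming, f):
    """Amplify a direct warming (degrees C) by feedback fraction f."""
    return direct_warming / (1.0 - f)

# Conventional approach (as I understand it): infer f from temperature
# *changes* alone. If ~1.05 C of direct CO2-doubling warming is assumed
# to become ~3 C after feedbacks, the implied feedback fraction is:
f_conventional = 1.0 - 1.05 / 3.0            # about 0.65

# Monckton's approach: infer f from *absolute* temperatures, since the
# feedbacks already act on the whole reference temperature. Illustrative
# values: ~265 K reference (emission temperature plus direct greenhouse
# warming) producing the observed ~287 K surface temperature.
f_monckton = 1.0 - 265.0 / 287.0             # about 0.077

print(equilibrium_warming(1.05, f_conventional))  # ~3.0 C
print(equilibrium_warming(1.05, f_monckton))      # ~1.1 C, near his 1.17 C
```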


I have no idea whether or not Viscount Monckton is correct. He has been shown to be wrong before (claiming the title “Lord” when he is not a member of the House of Lords, for example), and his rhetoric is often over the top (saying he will lock up the “bogus scientists” that have caused the global warming scare, for example). Thus, he could very well be wrong about this.


Here’s the more important issue that this controversy brings to light: According to him, taking the feedbacks into account the way he thinks they should be taken into account produces a warming of 1.17 degrees Celsius when the amount of carbon dioxide is twice its pre-industrial value. This is roughly one-third of the current IPCC prediction of 3.0 degrees Celsius. So the way you take into account the effect of climate feedbacks produces nearly a factor of three change in the prediction!


Now let’s suppose Viscount Monckton is wrong and current climate models are taking the feedbacks into account properly. Still, do you really think we understand climate well enough to take into account all of the possible feedbacks? Even for the feedbacks we currently recognize, are we really modeling them properly? After all, as I pointed out above, experiments indicate that the effect of thawing soil is opposite that of the conventional wisdom. How many other feedbacks actually act opposite of the conventional wisdom?


If feedbacks are that important, I think we have some indication of why global climate models are overstating the current warming we see. It’s probably because they don’t include all the possible feedbacks and/or don’t understand the feedbacks they are trying to model.


August 2, 2018

The Scourge of Postmodernism in Universities


The clock tower at a “college” where two biologists were the victims of postmodernism run amok. (click for credit)


I recently read an interesting piece in the Wall Street Journal entitled, “First, They Came for the Biologists.” If you didn’t catch it, the title is an homage to the words of Pastor Martin Niemöller, who opposed the Nazis in Germany. He spent seven years in a concentration camp as a result. Essentially, he is saying that we must fight injustice even if we don’t think it will affect us, because ultimately, it will. The author of the article, Dr. Heather Heying, says that postmodernism is taking aim at science, and if we don’t stop it, we will all suffer.


Dr. Heying is a former professor of biology at The Evergreen State College, which doesn’t seem to be much of a college. Instead, it seems to be a place where views that run counter to those of a loud group of students result in harassment and intimidation. If you don’t know Dr. Heying’s story (which actually begins with her husband, Dr. Bret Weinstein), you can read it from their perspective here.


Essentially, Dr. Weinstein opposed a campus-wide activity that he considered to be racist, and as a result, he was branded a racist. The situation quickly turned toxic, and the couple feared for their safety. They sued the university, which settled for $450,000 plus $50,000 in legal fees. The couple resigned from the university when the settlement was reached. You can read more from Dr. Heying’s perspective here and Dr. Weinstein’s perspective here.


The story is truly sad and makes me worry about the future of higher education in these United States. When students can make utterly false allegations that end up being believed, no university professor is safe, period. However, Dr. Heying takes it further than that, and honestly, I have to agree with her. In her Wall Street Journal article, she makes this profound point:



Postmodernism, and specifically its offspring, critical race theory, have abandoned rigor and replaced it with “lived experience” as the primary source of knowledge. Little credence is given to the idea of objective reality. Science has long understood that observation can never be perfectly objective, but it also provides the ultimate tool kit with which to distinguish signal from noise – and bias.


I have discussed the nonsensical nature of postmodernism before, but I don’t think I fully appreciated its danger until reading Dr. Heying’s article and learning about her and her husband’s experience at The Evergreen State non-College. If the disciples of postmodernism have their way, many more institutions of higher learning will become hotbeds of irrationality like that sad little campus. If you don’t believe me, read this chilling quote from her article. The man she is quoting is the president of The Evergreen State non-College:



[What] we are working towards is, bring ’em in, train ’em, and if they don’t get it, sanction them.


When someone who runs a supposed college is willing to say something like that, there is something terribly wrong with the state of higher education in these United States.


If I sat down and had a conversation with Drs. Heying and Weinstein, we would probably disagree on a great many things. However, we would be in full agreement when it comes to the real danger that postmodernism poses to higher education.


July 30, 2018

Mathematics Leads Biologists to Discover a New Cell Shape!

In 1619, Orazio Grassi (a mathematician, astronomer and architect) wrote about three comets that had recently appeared in the sky. He gave evidence that they must have been far from the earth, even beyond the moon. Galileo wrongly believed that comets were in earth’s atmosphere, and so he wrote Il Saggiatore (The Assayer) in reply. Although Galileo’s overall argument was wrong, the piece does contain a statement that is quite profound:


Philosophy [he is referring to what we call “science” today] is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles and other geometric figures without which it is humanly impossible to understand a single word of it; without these, one wanders about in a dark labyrinth.


While most scientists who use this quote are talking about physics and perhaps chemistry, the fact is that mathematics seems to be the language in which God wrote His creation. As a result, all areas of science (even biology) require the use of mathematics to unlock the true secrets of creation. I recently ran across a paper that illustrates this point rather well.


The authors were using a mathematical technique called Voronoi diagramming to model how certain cells in an embryo pack together to form the shapes of the organs that are developing. Generally speaking, most biologists assume that the cells become column- or bottle-like in shape so that they can squeeze together and form the smooth curves that characterize the shapes of the organs. However, the authors’ mathematical model predicted that another set of shapes would develop – shapes that are so unique they don’t even have a name. As a result, the authors call the shapes scutoids, which refers to the scutum and scutellum, features found on certain insects, like beetles.
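
For readers unfamiliar with the technique, a Voronoi diagram simply divides space into regions, one per “seed” point, where each region contains everything closer to its seed than to any other seed. Here is a minimal two-dimensional sketch in Python (my own toy illustration; the authors’ actual model was a three-dimensional version applied to curved tissues):

```python
import numpy as np
from scipy.spatial import Voronoi

# Twelve random "cell centers" in a unit square.
rng = np.random.default_rng(0)
points = rng.random((12, 2))

# Each Voronoi region is the territory closest to one center, a crude
# two-dimensional stand-in for how cells divide up space in a tissue.
vor = Voronoi(points)
print(vor.vertices)  # corners of the polygonal "cells"
print(vor.regions)   # which corners bound each cell's region
```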



A rose chafer beetle (click for credit)


Drawings of two scutoid-shaped cells are shown on the left side of the illustration below, and the way those two shapes fit together is shown on the right side of the illustration.



The two drawings on the left represent scutoid-shaped cells. The colored drawing on the right shows how they fit together. (images from the paper being discussed)


Now, of course, the authors didn’t just believe the results of their mathematical model. After all, the mathematical model contains assumptions, and those assumptions could be flawed. However, armed with the knowledge of those shapes, they examined specific tissues in developing fruit fly larvae. Sure enough, they found tissue structures that are composed of scutoid-shaped cells.


Why do cells form these shapes in developing fruit fly embryos (and presumably other embryos)? The authors state:


Using biophysical arguments, we propose that scutoids make possible the minimization of the tissue energy and stabilize three-dimensional packing. Hence, we conclude that scutoids are one of nature’s solutions to achieve epithelial bending.


In other words, they produce the most stable tissue at the lowest energy.


Now remember, the reason the authors found this brand-new cell shape is because they started with mathematics, just as Galileo instructed. Writing more than 300 years after Galileo, Sir James Hopwood Jeans (English physicist, astronomer and mathematician) tells us why he thinks Galileo was right:


Lapsing back again into the crudely anthropomorphic language we have already used, we may say that we have already considered with disfavour the possibility of the universe having been planned by a biologist or an engineer; from the intrinsic evidence of his creation, the Great Architect of the Universe now begins to appear as a pure mathematician. [The Mysterious Universe, Cambridge University Press 1931, p. 122]


July 23, 2018

This Could Be One of the Most Important Scientific Papers of the Decade


What best explains the common features shared by animals? According to this study, it’s the fact that they are designed.


More than eight years ago (have I really been blogging that long?), I was excited to see the appearance of a new peer-reviewed journal, BIO-Complexity. I thought it was going to have a lot of impact on the science of biology, but so far, its impact has been minimal. A few good studies (like this one and this one) have been published in it, but overall, it has not published the ground-breaking research I had hoped it would.


That might have changed. I just devoured the most recent study published in the journal, and I have to say, it is both innovative and impressive. It represents truly original thinking in the field of biology, and if further research confirms the results of the paper, we might very well be on the cusp of an important advancement in the field of biological taxonomy (the science of classifying living organisms).


The paper starts by detailing the fact that while evolutionists have always hoped that living organisms can be organized into a tree of life (starting with one universal common ancestor and branching into all known organisms), that hope has never been realized. In particular, when we look at organisms on the genetic level, no consistent tree can be produced. Instead, a “tree-like” arrangement can be made, but it needs all sorts of rescuing devices to explain the many inconsistencies that crop up.


Nevertheless, the fact that the structure somewhat resembles a tree tells us something. It tells us that the organisms we see today contain a lot of commonalities. However, since no consistent tree can be constructed, it is doubtful that those commonalities are the result of evolution. How, then, can scientists understand the “tree-like” structure of biological relationships?


The author of this new paper, Dr. Winston Ewert, makes a suggestion that is both innovative and, at the same time, so obvious it makes me wonder why I haven’t heard it before.



He suggests that we look at organism relationships the way a programmer looks at relationships between different computer programs. These days, most computer programs aren’t written from scratch. There are standard modules that do specific functions, and most computer programs utilize those modules whenever they can. The only new computer code that is written performs functions that aren’t done by one of the available modules. For example, in JavaScript (a popular coding language used in web applications), there are two programs, one called “jsdom” and another called “node-gyp.” They do very different things, but they both depend on a module called “request,” which downloads files from the internet.


So these two computer programs share at least one commonality: the computer code that makes up the “request” module. If we didn’t know that these programs were created, we might suggest that perhaps they share a common ancestor which contained the “request” code, and that common ancestor passed it on to the lineage that gave rise to jsdom as well as the lineage that gave rise to node-gyp. But since we know both jsdom and node-gyp were created, we know that they simply share another created structure because they each need the function that it performs.


Now imagine a few complex programs that each do different things, but they each use many modules to get their jobs done. If we compared the programs, we might find some that share many, many modules and others that share only a few modules. Once again, if we didn’t know they were all created, we might say that the ones which share a lot of modules are closely-related on an evolutionary timeline, while the ones which share only a few common modules are distantly-related on an evolutionary timeline.


In other words, we might think that the relationships between these computer programs form a tree-like evolutionary structure. Of course, since the programs didn’t evolve from one another, the relationships between the programs wouldn’t fit perfectly into an evolutionary tree. Instead, like the current genetic relationships between animals, the relationships would form a tree-like pattern with many exceptions.


Is there something that explains the relationship between programs better than a tree-like structure? Yes. It’s called a dependency graph. In a dependency graph, computer programmers draw arrows from the individual programs to the modules that each program uses. For example, Dr. Ewert’s paper shows this simplified drawing for the dependency graph of certain JavaScript programs:


[Figure from the paper: a simplified dependency graph of several JavaScript programs]


Each rectangle represents a program, while each oval represents a module of computer code. The arrows are drawn from the programs to the modules that the programs use. For example, the rectangle on the top left represents the program called “sound-redux.” The arrows indicate that it uses the “lodash” module and the “react-redux” module. The arrows tell us that the lodash module is used by three other programs, while the react-redux module is used by two other programs.


How does this all relate to biology? Well, let’s assume that the animals we see were designed something like these JavaScript programs. The Designer had genetic modules (groups of genes that perform specific functions), and the Designer simply used those modules when an animal needed to perform those functions. If that is the case, we could represent the relationships between animals in a dependency graph like the one below:


[Figure from the paper: a dependency graph relating individual animals to shared genetic modules]


The rectangles on the bottom represent individual animals. The ovals represent groups of genes that perform specific functions. The “Mammalia” oval, for example, is the group of genes that produce the basic characteristics of mammals (warm-blooded, hairy, nourish young with milk, etc.). The “Laurasiatheria” oval contains the genes used by animals that nourish their developing embryos with a placenta. The “Carnivora” oval represents the genes used by carnivores for metabolism. Now look at the middle rectangle on the bottom (domestic dog). It uses the Carnivora module (because it’s a carnivore), the Carnivora module uses the Laurasiatheria module (because carnivores nourish their embryos with a placenta), and that module uses the Mammalia module (because placental animals have the characteristics of mammals). So the domestic dog uses the Carnivora, Laurasiatheria, and Mammalia modules.
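
The chain just described is easy to express in code. Here is a toy sketch in Python (the module names mirror the figure, but the code is my own illustration, not anything from the paper):

```python
# A toy dependency graph: each node maps to the modules it uses directly.
deps = {
    "domestic dog": ["Carnivora"],
    "Carnivora": ["Laurasiatheria"],
    "Laurasiatheria": ["Mammalia"],
    "Mammalia": [],
}

def all_modules(node):
    """Collect every module a node depends on, directly or indirectly."""
    found = []
    for module in deps.get(node, []):
        found.append(module)
        found.extend(all_modules(module))
    return found

print(all_modules("domestic dog"))
# ['Carnivora', 'Laurasiatheria', 'Mammalia']
```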


So here’s the question: Which represents our knowledge of animal relationships better? An evolutionary tree, or a design-based dependency graph? Well, the author of this paper has run several tests, and his conclusion is that the dependency graph does a much better job.


How can he say that? First, he wanted to make sure that a dependency graph is significantly different from an evolutionary tree. So he used an evolution simulator (EvolSimulator) that specifically simulates the supposed evolution of genes. He ran the simulation five times, changing the input parameters each time to get five different evolutionary scenarios. In each case, the genetic data were better ordered as a tree than as a dependency graph. Not surprisingly, then, the dependency graph doesn’t do a great job of showing the relationships among evolved genes.


He then analyzed a set of JavaScript applications. Not surprisingly, he found that a dependency graph described the relationships among the programs better than an evolutionary tree. Thus, at least when it comes to simulated evolution and computer programs, a dependency graph fits the designed things better than an evolutionary tree, but an evolutionary tree fits the evolved things better than a dependency graph.


Now we come to Dr. Ewert’s main test. He looked at nine different databases that group genes into families and then indicate which animals in the database have which gene families. For example, one of the nine databases (UniRef50) contains more than 1.8 million gene families and 242 animal species that each possess some of those gene families. In each case, a dependency graph fit the data better than an evolutionary tree.


This is a very significant result. Using simulated genetic datasets, a comparison between dependency graphs and evolutionary trees was able to distinguish between multiple evolutionary scenarios and a design scenario. When that comparison was done with nine different real genetic datasets, the result in each case indicated design, not evolution. Please understand that the decision as to which model fit each scenario wasn’t based on any kind of subjective judgement call. Dr. Ewert used Bayesian model selection, which is an unbiased, mathematical interpretation of the quality of a model’s fit to the data. In all cases Dr. Ewert analyzed, Bayesian model selection indicated that the fit was decisive. An evolutionary tree decisively fit the simulated evolutionary scenarios, and a dependency graph decisively fit the computer programs as well as the nine real biological datasets.
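
For those who want a feel for the method, Bayesian model selection boils down to computing a Bayes factor: the ratio of how probable the observed data are under one model versus another. Here is a generic sketch in Python (the “decisive” threshold is the standard Jeffreys convention; the paper’s actual likelihood calculations are far more involved):

```python
import math

def bayes_factor(log_evidence_a, log_evidence_b):
    """Bayes factor K = P(data | model A) / P(data | model B), from log evidences."""
    return math.exp(log_evidence_a - log_evidence_b)

# On the traditional Jeffreys scale, K > 100 counts as "decisive" support
# for model A over model B.
print(bayes_factor(-1000.0, -1010.0))  # e^10, about 22026: decisive
```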


Now, of course, this isn’t the final word on the subject. Indeed, Dr. Ewert specifically says that a lot more research has to be done, fleshing out the exact nature of the dependency graph that describes animal relationships as well as adding non-animal organisms into the mix. Nevertheless, the early results are very encouraging.


It will be interesting to see how this kind of analysis progresses, but it looks like Dr. Ewert has uncovered a fundamentally new and better way of understanding biological relationships. If so, this is destined to be considered a revolutionary paper.


********

ADDENDUM: The author of this study contacted me. It turns out that he was homeschooled and used my courses. I am thrilled to think that I had a small impact on the education of someone who has authored such an excellent study!


July 19, 2018

Coming “Soon”: Science in the Atomic Age


My elementary science series is finished, and I am currently working on a junior-high course that carries on with the theme of teaching science through history.

My elementary science series has been complete for a while now, and I am thrilled to see it becoming popular in the homeschooling community. The series teaches students science in roughly chronological order. It begins with the days of creation (Science in the Beginning). It then moves on to the ancient Greeks (Science in the Ancient World), teaching science in the order it was learned. It continues through roughly the end of the 1800s (Science in the Industrial Age). If a student completes all five books in the series, he or she will be very well prepared to start learning science in a detailed way in junior high school.

For quite some time, I have been hearing from homeschooling parents who would like me to continue the theme of teaching science through history into the junior high school years. With the encouragement of my publisher, I have decided to give it a try. I am currently working on Science in the Atomic Age. It will be a junior-high-school level course that covers many of the scientific discoveries made in the 20th and early 21st centuries. I plan to discuss the modern view of the atom, how atoms join together to make molecules, what the molecule known as DNA does, the cell as the basic building block of life, several of the advances that have been made in medicine, the structure and characteristics of the universe as a whole, radioactivity, and nuclear reactions.


While it carries the theme of teaching science through history, there will be at least four important differences between this course and my elementary courses. First, in the elementary course, I teach all of science on a timeline. As a result, the books change science topics constantly. In the first 15 lessons of Science in the Ancient World, for example, students start by learning the importance of math in science, then learn about the science of music, then learn about atoms, and then learn about medicine. It’s difficult to cover scientific advancements made in the 20th and 21st centuries that way, because the issues become more complex, and it is important to see the development of one particular topic over the years. So while I will still discuss science in the context of history, it will be by topic.


In other words, when I write about our modern understanding of the atom, I summarize what was known towards the end of the 1800s. I then step students through the history of atomic science, eventually ending with our current understanding of the atom. As a result, students see how the entire field of atomic science developed through the course of the 20th and 21st centuries. I then move on to molecules, but once again, I step back to the end of the 1800s and discuss the history of how our understanding of molecules developed. When I get to DNA, I “reset the clock” once again, starting with what was known at the end of the 1800s and working forward to the present day.


Now, of course, the focus is on the science, not the history. The students aren’t learning what happened in the world during the 20th and 21st centuries, unless those events affected scientific progress (like the World Wars). Instead, they learn the history of how scientists were led to our modern views. Because of this approach, students not only learn the science of what is being discussed, but they also learn the scientific reasoning used to reach our current understanding of scientific issues.


The second major difference between Science in the Atomic Age and the other books in my series is that it will be much longer. More science has to be covered in junior high school, so unlike my elementary books, this book is designed to be used every day. The third major difference is the frequency of experiments. In my elementary series, each time the student does science, he or she has a hands-on activity, usually an experiment. While there are still experiments and hands-on activities in this book, there aren’t as many. In a two-week period, students will do three or four experiments or activities. That means the student will be expected to do more reading in this course.


The final major difference is that, when it comes to reviewing the material, the book switches from a notebooking approach to a question/answer approach. There are “comprehension check” questions the student needs to answer while he or she is reading. Then, at the end of each chapter, there is a chapter review to help the student remember everything that was learned. Finally, there is a test for every chapter. In order to prepare the student for high school and beyond, it is important that the student answers all of those questions and takes the test.


I have only just begun writing this course, so it won’t be available for this academic year. However, I hope to be able to choose a group of students to “field-test” the book for the 2019-2020 academic year, though that will depend on my progress. Assuming that I can start the “field-testing” on time, the book should be ready for general use in the 2020-2021 academic year. Of course, the Lord might have quite different plans, so stay tuned to find out what happens!


July 16, 2018

No, We Won’t Have Dinosaurs in Two Years!


Even with a LOT of tinkering during embryonic development, only minor changes could be made in a chick’s skull.

(image from the Bhullar et al. article linked below)


A little while ago, I wrote an article about how silly “science journalism” can get. The article was about the popular media’s claim that scientists were about to bring mammoths back from extinction. I explained how the idea was based on real research, but the goal of the research was not to bring mammoths back from extinction. In addition, if anything concrete comes from the work, it will probably be decades from now. In response to that, a student sent me an even sillier article, which comes from that bastion of journalistic integrity, People. It states the following:



Famed paleontologist Dr. Jack Horner, who’s been a consultant on all four films and is the real-life inspiration for Jurassic Park’s dinosaur expert Dr. Alan Grant, believes we’re (optimistically) just five years away from genetically engineering a dinosaur.


This article was written back in 2015, so based on Dr. Horner’s optimistic projection, we should be just two years away from having dinosaurs roaming around in some laboratory.


So what is the source of Dr. Horner’s optimism? He thinks that birds evolved from dinosaurs, so he thinks that we could genetically “turn back the clock” and transform a bird into a dinosaur. He claims that this has already been done to some extent:



In what Horner calls a definitive “proof of concept,” a group at Harvard and Yale “just recently, within the last few weeks, were able to transform the head of a bird back to actually reverse-engineer the bird’s snout back into a dinosaur-like snout.”


There are so many things wrong with that statement, it is hard to know where to start. However, I will give it a try.



First, the study to which Dr. Horner refers had nothing to do with genetic engineering. As a result, I have no idea how it could be “proof of concept” that we are going to be able to genetically engineer dinosaurs! What the scientists actually did was manipulate the chemical environment to which developing chick embryos were exposed. They added chemicals that would block the activity of two proteins that they determined were influential in the development of the bird’s facial features, including the beak. They ended up producing embryos whose skulls were deformed compared to normal chick embryos.


Second, the changes produced were, in fact, rather minimal. Look at the picture at the top of this post, which comes from the scientific article. The left skull is of a chick whose chemical environment wasn’t changed. The right skull is of an alligator embryo. The middle skull represents what the research produced. Notice that while the middle skull might be more “alligator like” in some respects, it is still clearly a chick’s skull. Indeed, as the BBC reports, the lead author himself noted that the changes were not all that unusual.



“These weren’t drastic modifications,” says Bhullar. “They are far less weird than many breeds of chicken developed by chicken hobbyists and breeders.”


So in the end, this “proof of concept” for genetically engineering chickens into dinosaurs couldn’t even produce something stranger than what chicken breeders are producing right now!


Third, even if the skull modifications produced by this “proof of concept” experiment were significant, the skull is just one of many differences between birds and dinosaurs. There are lots of structural differences between the two types of creatures, and even if you can produce all the structural changes necessary, they are probably the least important ones. The differences in the biochemistry of birds and dinosaurs are probably huge, and all of that must also be genetically engineered in order to produce an animal that not only looks like a dinosaur but can also survive.


Fourth, and probably most importantly, this is all based on the notion that dinosaurs evolved into birds. While this view is all the paleontological rage right now, we don’t really know if it is true. There are a lot of problems with the idea that birds came from dinosaurs, even if you believe in evolution on such a grand scale (I do not). But even assuming that birds did evolve from dinosaurs, the idea that you could “reverse” such an evolutionary process is pretty crazy. The mammoth project that I mentioned at the beginning of the article isn’t even trying to do that, and the genetic gap between elephants and mammoths is far easier to bridge than the supposed evolutionary gap between dinosaurs and birds!


So are we just a few years away from genetically engineering a dinosaur? Of course not! In fact, I think that this particular line of research is destined to fail, because it is based on the false notion that dinosaurs evolved into birds. Even if I am wrong about that, however, I can guarantee you that we will have mammoth-like elephants much sooner than dinosaurs, and as I said previously, that’s probably decades away.


July 12, 2018

“Nylon”-Digesting Bacteria are Almost Certainly Not a Modern Strain


This marine bacterium has the ability to digest nylon waste products, despite the fact that it doesn’t live in an environment that contains nylon waste products. (click for credit)


Evolutionists are fond of stating “facts” that aren’t anywhere near factual. For example, when I was at university, I was taught, as fact, that bacteria evolved the genes needed to resist antibiotics after modern antibiotics were made. As with most evolutionary “facts,” this turned out to be nothing more than wishful thinking on the part of evolutionists. We now know that the genes needed for antibiotic resistance existed in the Middle Ages and back when mammoths roamed the earth. They have even been found in bacteria that have never been exposed to animals, much less any human-made materials.


Of course, being shown to be dead wrong doesn’t produce any caution among evolutionists when it comes to proclaiming the “evidence” for evolution. When Dr. Richard Lenski’s Long Term Evolution Experiment (LTEE) produced bacteria that could digest a chemical called “citrate” in the presence of oxygen, it was hailed as definitive “proof” (a word no scientist should ever use) that unique genes can evolve as a result of random mutation and selection. Once again, that “fact” was demonstrated to be wrong in a series of experiments done by intelligent design advocates. They showed that this was actually the result of an adaptive mutation, which is probably a part of the bacterial genome’s design.


Recently, I learned about an impressive genetic study by young-earth creationists Sal Cordova and Dr. John Sanford. It lays waste to another evolutionary “fact” I was taught at university: the recent evolution of nylon-digesting bacteria. The story goes something like this: In 1975, Japanese researchers found some bacteria, now charmingly named Arthrobacter KI72, living in a pond where the waste from a nylon-producing factory was dumped. The researchers found that this strain of bacteria could digest nylon. Well, nylon wasn’t invented until 1935, and there would be no reason whatsoever for a bacterium to be able to digest nylon before it was invented. Thus, in a mere 40 years, a new gene had evolved, allowing the bacteria to digest something they otherwise could not digest.


Of course, we now know that this story isn’t anywhere close to being true.



First, a couple of minor points. In their landmark study, the Japanese researchers specifically state that the bacteria were isolated from the soil, not a pond. So the common version of this evolutionary myth can’t even get the origin of the bacteria correct. Second, the bacteria were not capable of digesting nylon itself. They were capable of digesting nylon waste products. As Cordova and Sanford point out in their paper, nylon is made of long molecules that enzymes aren’t typically able to break down. In order for the enzymes to work, the molecules must be much shorter than nylon molecules. So Arthrobacter KI72 bacteria don’t digest nylon. They digest broken-down bits of nylon molecules. Such sloppy scholarship is typical of evolutionary evangelists, but those minor points aren’t the real problem with the myth. The real problem is that the ability to digest nylon waste products is incredibly common in bacteria from a diverse set of environments.


Since Arthrobacteria KI72’s ability to digest nylon waste products was discovered, lots of other bacteria that can do the same thing have also been discovered. For example, the marine bacterium pictured above, Bacillus cereus, has been shown to be able to digest nylon waste products. Two other marine species, Vibrio furnisii and Brevundimonas vesicularis, can do the same thing. Anoxybacillus rupiensis, a bacterium that lives in hot soils in Iraq, has also been shown to be able to digest nylon waste products.


Cordova and Sanford wanted to see if they could find out just how common the ability to digest nylon waste products is among bacteria, so they looked at the NCBI database, which contains genetic data on many species of bacteria. They searched for genes that are similar to the known nylon-waste-digesting genes, commonly called “nylonase genes.” The similarity had to be strong enough to indicate that those genes would allow the bacterium to digest nylon waste products. They found a total of 355 different bacterial species that had such genes! The bacteria come from diverse environments, including one (Cryobacterium arcticum) that lives in arctic soils, far removed from human activity. As the authors state:



Our analyses indicate that nylonase genes are abundant, come in many diverse forms, are found in a great number of organisms, and these organisms are found within a great number of natural environments.


Now remember what the evolutionary myth about nylon-waste-digesting bacteria says. It says that such genes evolved only after 1935. Do you really think that genes which evolved so recently would spread to at least 355 different species of bacteria in “a great number of natural environments” in under 80 years? That’s awfully hard to believe.


Of course, fervent evolutionists regularly believe things that stretch the limits of credulity, so let’s say that this kind of rapid gene dispersal is possible. Cordova and Sanford’s study still invalidates the myth, because the crux of the myth is that there is no reason whatsoever for a bacterium to have a gene that allows it to digest nylon waste products if nylon waste products aren’t around to be digested. However, there aren’t nylon waste products in the vast majority of environments in which these nylon-waste-digesting bacteria are found. Thus, we know for certain that bacteria do, indeed, carry around the gene for nylon-waste digestion, even when there is no nylon waste to digest.


That’s not the end of Cordova and Sanford’s work. They also used another bioinformatics tool, UniProt, which gathers information on proteins produced by organisms in nature. They found that there are 1,800 different organisms that produce enzymes which should be able to digest nylon waste products, based on standard biochemical calculations. In other words, the ability to digest nylon waste products seems to be all over creation. They end their paper by analyzing the two competing models for how evolutionists thought nylonase genes evolved. They show that neither model works based on our current knowledge of genetics.


While the last part of the study is interesting, evolutionists can always come up with another model that contains even more wishful thinking. Regardless of the model employed, the ubiquity of nylonase genes and nylon-waste-digesting enzymes show that nylonase genes did not evolve in response to the production of nylon.


July 9, 2018

Print Reading versus Digital Reading: Which Produces Better Comprehension?



I saw this Science Alert article come across my Facebook feed a few days ago, and I read it with interest. Written by two researchers from the University of Maryland, it makes some pretty strong statements about the effectiveness of reading a digital article compared to a print article. Essentially, the researchers say that for specific kinds of articles, students’ comprehension is better if the article is read in print form as opposed to digital form. They make this statement based on a review of the studies that already exist as well as a study they published two years ago. While I think they are probably correct in their assessment, I am struck by how small the difference really is.


For the purpose of this article, I will concentrate on their new study. In their Science Alert article, they refer to it as three studies, but it is published as a single paper. In the study, they had 90 undergraduate students who were enrolled in human development and educational psychology courses read a total of four articles: two digital and two in print. Two of them were newspaper articles and two were excerpts from books. They were all roughly the same length (about 450 words). They dealt with childhood autism, ADHD, asthma, and allergies. Presumably, all of those topics would be of interest to the students, given the classes in which they were enrolled.


Before they did any reading, the students were asked to assess themselves on their knowledge of the four topics about which they would be reading. They were also asked which medium they preferred to read (digital or print) and how frequently they used each medium. They were then asked to read the articles, but the order in which the articles were read changed from student to student. Some would switch between digital and print, while others would read the first two in one medium and then the second two in the other medium. That way, any effect from switching between the media would not be very strong.


After each reading, students were asked to identify three things: the main idea of the article, its key points, and any other relevant information that they remembered from the article. The researchers had asked the authors of the articles these same questions as well as two independent readers. Those were considered the correct answers. Two trained graders independently compared the students’ answers to the correct answers, and the grades they assigned were in agreement 98.5% of the time. For the 1.5% of the time they didn’t agree, they then discussed the grading and came to a mutual agreement.


After all four readings and tests, the students were then asked in which medium they think they performed best. As you will see, that’s probably the most interesting aspect of the study.



What were the results? Students, on average, preferred the digital medium, but the amount of preference changed depending on the situation. Students heavily favored the digital platform for newspaper and academic reading. They also preferred the digital platform for “fun” reading (like reading on vacation), but not as strongly. What about the students’ comprehension of what they were reading? Here is the table the researchers present in their article:


[Table from the paper: mean scores and standard deviations for the main idea, key points, and other relevant information, compared between the digital and print readings]


These are the combined results, averaging all four readings done by each student. The “Mean Score” is the average, while “SD” is the standard deviation, which is a measure of how much variation exists among the subjects. When the standard deviation is low, the subjects scored very similarly. When the standard deviation is high, there are big differences among the subjects’ scores. The “Maximum Score” is what a student would get if he or she answered every question perfectly.


Now here’s the thing you have to understand about comparing people: since each person is different, you expect differences to exist naturally. As a result, if you do a study like this, some of the differences you measure between print and digital will come from the inherent differences among the students, not the differences between the media themselves. Those inherent differences produce what is referred to as statistical error. A nice rule of thumb is that the statistical error depends on the square root of the number of data sets you have. Take the square root of your number of subjects, divide by your number of subjects, and multiply by 100. That will give you a rough estimate of the statistical error in your study.


So, for this study, a rough estimate of the statistical error is the square root of 90, divided by 90, times 100. That’s about 11%. So, if the differences between the scores for digital and print are within 11% of one another, you can’t say they are different. They could very well be the same, and the differences you see could be the result of statistical error. Using this rough estimate, then, you can’t say that any of the differences between digital and print are real. All of the differences in the mean scores could very well be the result of statistical error.
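
Here is that rule of thumb as a few lines of Python (a rough heuristic only, not a substitute for a proper significance test):

```python
import math

def rough_statistical_error(n_subjects):
    """Rough percent error expected from subject-to-subject variation alone."""
    return 100.0 * math.sqrt(n_subjects) / n_subjects  # same as 100 / sqrt(n)

print(rough_statistical_error(90))    # about 10.5% for this study's 90 students
print(rough_statistical_error(1000))  # about 3.2%, why a larger study would help
```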


So does that mean that this study doesn’t really say anything about the difference between digital and print? Not exactly. Since all three digital scores are lower than all three print scores, there might be some difference. If the error were truly statistical, you would expect that the digital score might be higher in at least one of the three cases. It is not. So in the end, there probably was less comprehension among the students when they read the digital articles, but the effect was pretty small (less than the statistical error).


Here’s the fascinating part of the study: The students were asked which medium they think gave them the best scores. They weren’t shown their scores. They were just asked what they thought would happen when their digital scores were compared to their print scores. 69% said that they scored better using the digital medium, while only 18% said that they scored better using print. The others thought that the medium didn’t affect their scores. So while the majority of students thought they were comprehending more using the digital medium, they were actually comprehending more using print! That result, more than any other, is the one that stands out to me. Why would students not be able to judge the medium from which they learned best?


In the end, I think this is a good study, but I don’t think the results are definitive. A similar study with a lot more participants might help better pin down the differences between print and digital media. In addition, a similar study should be done using longer articles to see what difference that produces.

