By Clemens Lode, January 21, 2022
Newton's cradle balls (source: pexels).

An Introduction to Science

This is an excerpt from the book series Philosophy for Heroes: Continuum.
It is impossible for someone to dispel his fears about the most important matters if he doesn’t know the nature of the universe but still gives some credence to myths. So without the study of nature there is no enjoyment of pure pleasure. —Epicurus, Principal Doctrine 12

Why are ontology and epistemology simultaneous?

As Epicurus wisely put it, to enjoy life, one must also study what goes far beyond the knowledge required for daily life. It is not enough to be able to refute arguments or dismiss questions about the world. If we do not explore the unknown, that unknown will make us superstitious or gnaw at our self-confidence. It is not enough to be able to tell what something is not. Simply pointing out that something contradicts the existing body of knowledge does not explain our own role in the world; this can leave us feeling unsettled.

Luckily, at this point, we can build upon a foundation from Philosophy for Heroes: Knowledge. Science is but a branch of philosophy and we can use the results from our studies of philosophy as a basis for our scientific inquiries. We can go from the bottom (philosophy) up (to science), but we cannot go from the top to the bottom. If a scientific experiment somehow refutes the very philosophy we are using to conduct the experiment, we must have made a mistake somewhere along the way.

On the other hand, ontology (“what is”) and epistemology (“how do we know”) are intertwined. New insights into how our own cognition works and how we interact with the world can lead to changes in our philosophy. If a new discovery leads to changes of philosophy, experiments that depended on our previous (false) assumptions have to be repeated until both philosophy and science are again in harmony.

Example In Ancient Greece, people believed that rays emitted by our eyes caused us to see. It took until the 10th century, when the astronomer and physicist Ibn al-Haytham invented the first pinhole camera, to get an idea of how our eyes (and ultimately our visual perception) actually work. A new insight into “what is” led to a change in “how do we know.”
Ontology and epistemology are simultaneous—what exists and how we know it form a foundation of philosophy.

This is the philosophical background one must keep in mind when looking at interpretations of empirical data. When such interpretations result in (apparent) contradictions, then we have to check our basic philosophic premises. We do not need to spend time trying to force contradictory interpretations into our philosophical or scientific systems.

Data that contradicts the results of previous scientific experiments, though, is something else altogether. Here, we need the scientific method in order to sort out falsehoods from truths. That is the task of science and also the main difference between science and philosophy: in science, we can rely and build upon repeatable experiments; in philosophy, we either get the whole tree of knowledge right or wrong, and then try to sharpen our whole view of the world iteratively. We cannot divide philosophy into independent parts and run experiments on them.

SCIENCE ·  Science is the formalized process of gaining new knowledge from observation, deducing new knowledge from existing knowledge, and checking existing knowledge for contradictions.

Before the Scientific Method

What situation led to the first documented application of methodical scientific research?

Science has played a role in the lives of humans since they began building tools more than three million years ago. Techniques were learned, presented, taught, tested, and further developed. Using only trial and error, progress was limited to directly applicable and testable knowledge. For example, a production process that resulted in a spear that flew farther and more accurately than previously used spears was copied, while others were discarded.

While this type of “pre-science” did not follow a formalized process like modern science with its theories, it followed the principle of making an assumption about reality (thinking that a certain tool or weapon could solve a problem) and then testing that assumption in the form of an experiment, with others trying to copy the result on their own, creating similar tools.

Compare this approach with rituals before a hunt or harvest. Without a formalized process of conceptualization, testing, and analysis, selective perception caused people to amass “knowledge” about connections in the world that was objectively false. Nobody tried skipping rituals for a few years to test whether the hunt or harvest was affected. Instead, the issue was addressed in an intuitively human way: attributing a consciousness to the world, and then seeing a sacrifice to the world as a form of trade.

Tool production aside, the real birth of methodical science can be found in medicine. While the use of plants, dental work, wound treatment, etc. probably have a long history, the techniques were taught only verbally through stories. For a written record, we have to go forward to 1550 BC: the Egyptian Edwin Smith Papyrus [cf. Aboelsoud, 2010] is the first known medical text dealing with wound treatment. At that time, large-scale war had become a part of civilization. Armies with mass-produced weaponry became commonplace, which also meant that on the battlefield, there were thousands of similar injuries. This paved the way for systematic medical research as—just like with the manufacturing of a spear—different treatments could be tested, copied, and refined. Records made it possible to connect the injury with the treatment and the outcome.

Large-scale war with mass-produced weaponry led to similar injuries on many battlefields. This allowed systematic medical research with different treatments.

Placebo Effect

How can we determine if a medical treatment was effective, not just the body healing itself?

This research did not help with individual cases where it was difficult to determine the connection between treatment and success. One had to rely on hearsay to select a treatment. If the patient survived, the doctor tried to apply a similar treatment to the next patient, not knowing whether the previous patient had healed on his or her own or whether the treatment was actually effective.

That, again, led to pseudoscience. If the “successful” treatment included praying to the gods, dancing around the patient, pouring “holy water” on the patient, and giving the patient a special herb to eat, the ancient doctors tried to repeat exactly that ritual, unsure which action caused the healing. The doctors also had no formal method of describing their treatments or noting down the statistics of success and failure. And, ultimately, where was the drawback of praying to the gods, using lucky charms, or reciting a “magic spell”?

REVERSAL OF THE BURDEN OF PROOF ·  Using the argument of the reversal of the burden of proof, you try to evade the necessity of giving proof for your own arguments; instead, you present the opposite of your argument and ask the other person for proof. The basic (and wrong) premise of the reversal of the burden of proof argument is that anything that cannot be disproven must be true. This is a fallacy because you often cannot prove a negative without being omniscient.

Praying, lucky charms, and magic spells do not cause harm and do not influence the outcome, so it was hard to eliminate them from a ritual that might have contained one element that helped the healing process. Without a rigorous scientific process to determine a causal relationship, it was hard to differentiate between the ingredients of a treatment that actually helped and the practices that were merely added out of ritual.

In that regard, one of the main challenges those early doctors faced was that the body has the ability to heal itself. This led to “false positives,” meaning they misattributed healing to their own actions instead of the body. Ultimately, the “placebo effect” might have played the biggest role in early medical practice. Yes, praying, singing, and dancing around the patient might not have had a direct effect on the illness of the patient, but if the patient simply believed he would get better, it actually helped.

Modern studies clearly show the biochemical effects of the placebo effect in the brain of a patient [cf. Colloca and Benedetti, 2005]. Studies have shown that just fifteen minutes of soothing music calms a patient so significantly that pain medication can be reduced [cf. Mithen, 2007, p. 96]. Either way, we ultimately have to say that the ancient methods were not scientific. There was a lack of understanding of why something worked out well. Without that knowledge, the techniques could be applied only to similar cases. Addressing new forms of illness required starting from scratch.

Remembering what we have said about the conceptual tree of knowledge, we can clearly see that little to no conceptualization, and thus little understanding, of medicine occurred during that time. A method based on trial and error can quickly produce results, but without understanding, those results are difficult to apply to future situations. Still, we must not simply dismiss those early practices. Instead, we can learn from them and re-examine our own medical research: not that we should also start dancing, but that we need to make sure that the patient is convinced that the treatment will work.

A method based on trial and error is difficult to apply to future situations if there is no comprehension of what actually worked. In medicine, because of the placebo effect, even without knowing what worked, a positive belief alone can produce results. Likewise, without such a belief, even a comprehension of what worked can lead to a treatment failure.

The Scientific Method

Now I am going to discuss how we would look for a new law. In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is—if it disagrees with experiment it is wrong. That is all there is to it. —Richard Feynman, Messenger Lectures, The Character of Physical Law [Feynman, 1964]

What major problems stood in the way of early researchers before the scientific method?

The term “scientific method” was primarily influenced by the philosopher Karl Popper, who built the method upon the works of previous philosophers, going back to Aristotle. But the Greek tradition was certainly not the sole reason that the Scientific Revolution started in Europe and not in Asia. From Asia, we inherited inventions like soap, the crankshaft, quilting, the pointed arch, surgical instruments, windmills, the fountain pen, numerals, checking accounts, gunpowder, and, last but not least, the idea of having beautiful gardens.

But after the Mongol invasion, large parts of Asia fell into a period of isolationism, while in Europe, religious instability allowed new ideas to flourish. Ironically, one of the causes of this instability was the Black Death, the spread of which could have been reduced by better sanitation, the availability of soap, and scientific literacy. In a way, you could argue that religion was no longer able to provide a safe haven for the people, so they turned to studying science. The scientific method took the place previously held by religious authority: theories no longer required an authority; anyone could recreate the experiments and judge the results for themselves.

To understand the scientific method, you need to understand that science itself is a game. If you want to take part in it, you have to follow its rules: the scientific method. You can, of course, make discoveries using different rules. Those discoveries could very well be applicable and even successful, but they would not be scientific. And that is OK. Just because the scientific method has such a good reputation does not mean you have to do everything in your power to take part in the game of science. That said, without applying the scientific method, you cannot claim that your results are “scientific.”

To understand the scientific method, it is best to not just learn its parts, but instead to find out why each part makes sense and what obstacle it solves. Let us look at four major problems that stood in the way of early researchers before the scientific method:

  1. People attributed phenomena to a world of ghosts, gods, or dreams.
  2. People either rushed to conclusions, or research projects ran forever because people became enamored with their findings or hesitated to admit a failure.
  3. People were biased about their findings.
  4. People did not share or document their work.
Before the scientific method, research was limited by bias, a reliance on supernatural explanations and premature conclusions, the absence of a scientific community, and the lack of willingness to admit failures.

What was new about the scientific method?

The first step of the scientific method is to observe nature and be curious instead of just accepting the status quo. Asking “why?” about phenomena that are widely accepted is where scientific progress begins. And do not just ask, “Why?” Ask, “Why why why why why…?” until you have found the root causes of a problem or phenomenon [cf. Ohno, 2006].

The second step of the scientific method is to gather data and guess what the connection between the observed phenomena is. But just stating your conclusions about how the world works is not enough; you need to formulate a hypothesis. A hypothesis is more than a statement based on your observations: it needs to be posed in an if/then form to clearly define its scope and must not have a predetermined outcome. The conclusion must not already be set before the actual investigation begins, as this would harm the objectivity of the research. For example, instead of starting with “all swans are white”—which would raise the question of which swans you are referring to—you say, “in the summer, all swans in the lake downtown are white.” People can then go to the lake, observe what type of swans land there for one season, and either support or refute your hypothesis. If all you have is a vague idea about how the world works, you will simply add new evidence as “special cases” but never really go back to the drawing board to fundamentally examine your views.
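To make this concrete, here is a minimal sketch in Python of checking such a hypothesis against observations. The observation records are purely hypothetical; the point is that the if/then form defines the scope, a single counterexample within that scope refutes the hypothesis, and supporting observations can never conclusively prove it.

```python
# A minimal sketch of a falsifiable hypothesis check. The observations
# below are hypothetical; one in-scope counterexample is enough to refute it.
observations = [
    {"season": "summer", "location": "lake downtown", "color": "white"},
    {"season": "summer", "location": "lake downtown", "color": "white"},
    {"season": "winter", "location": "lake downtown", "color": "black"},
]

# Hypothesis: "In the summer, all swans in the lake downtown are white."
# The scope clause filters out observations the hypothesis says nothing about
# (the black swan above was seen in winter, so it does not count).
in_scope = [o for o in observations
            if o["season"] == "summer" and o["location"] == "lake downtown"]
counterexamples = [o for o in in_scope if o["color"] != "white"]

if counterexamples:
    print(f"Refuted by {len(counterexamples)} counterexample(s).")
else:
    print(f"Supported (not proven) by all {len(in_scope)} in-scope sightings.")
```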

Third, one can all too easily become enamored with one’s findings. With all the time invested, it might be hard to stop when the initial research does not show the desired results. If you do not clearly define when a research project needs to be curtailed, it might run forever. For this, an additional null hypothesis needs to be defined: a definition of when the experiment to test the hypothesis has failed. It states that what you observed before were simply random occurrences.

The opposite, the previously mentioned hypothesis, is called the alternative hypothesis. It is the connection between different phenomena that you expect to be true based on earlier observations; a possible explanation for the observed phenomena. For example, a null hypothesis could be “The water quality did not change over the past 10 years,” while the corresponding alternative hypothesis could be “The water quality improved over the past 10 years.” To prevent your ego from ultimately being detrimental to your research, it is important to make a very specific prediction of what exactly you think is true about the alternative hypothesis, as well as a very specific and detailed experimental design to help you either support or refute the alternative hypothesis. If the prediction with the specific experimental design does not come true, the hypothesis is wrong, no matter how beautiful the idea was and no matter what title is held by the person who provided the hypothesis.
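As an illustration, here is a minimal sketch in Python (using SciPy) of testing the water-quality example above. The readings, the choice of a one-sided t-test, and the 0.05 significance threshold are illustrative assumptions, not prescriptions from the text; the point is that the pass/fail criterion is fixed before the data is analyzed.

```python
# A minimal sketch of testing the water-quality hypothesis.
# All readings and thresholds below are illustrative assumptions.
from scipy import stats

# Hypothetical dissolved-oxygen readings (mg/L); higher means better quality.
readings_2012 = [6.1, 5.8, 6.3, 5.9, 6.0, 6.2, 5.7, 6.1]
readings_2022 = [6.6, 6.4, 6.9, 6.5, 6.7, 6.3, 6.8, 6.6]

# Null hypothesis:        the water quality did not change.
# Alternative hypothesis: the water quality improved (one-sided test).
t_stat, p_value = stats.ttest_ind(readings_2022, readings_2012,
                                  alternative="greater")

ALPHA = 0.05  # significance threshold, fixed before running the experiment
if p_value < ALPHA:
    print(f"p = {p_value:.4f}: reject the null hypothesis; "
          f"the data supports an improvement.")
else:
    print(f"p = {p_value:.4f}: the result is consistent with random "
          f"variation; keep the null hypothesis.")
```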

Fourth, the experiment has to be run and the data collected and analyzed. Here, the scientific method directs scientists to other fields, like logic or statistics, to remove the “intuitive” evaluation and replace it with an objective one. There are hundreds of logical and statistical fallacies, each addressing an intuitive interpretation. Sometimes there is simply a correlation between two phenomena, but no real causation.

Example Studies show that people who drink wine live longer. But this is only a correlation. The real cause might simply be that more social and healthier people tend to drink wine. They might live even longer without the wine.
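A small simulation makes this concrete. In the sketch below (a toy model, with all numbers made up), a hidden confounder such as sociability drives both wine consumption and lifespan; the two then correlate even though, by construction, wine has no causal effect on lifespan at all.

```python
# A toy model of the wine example: a hidden confounder produces a
# correlation between wine and lifespan without any causal link.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 10_000

# Hidden confounder: how socially active and health-conscious a person is.
sociability = rng.normal(0.0, 1.0, n)

# Both variables depend on sociability; wine has no direct effect
# on lifespan anywhere in this model.
wine_per_week = 2.0 + 1.5 * sociability + rng.normal(0.0, 1.0, n)
lifespan = 78.0 + 3.0 * sociability + rng.normal(0.0, 2.0, n)

r = np.corrcoef(wine_per_week, lifespan)[0, 1]
print(f"Correlation between wine consumption and lifespan: r = {r:.2f}")
# Prints a clearly positive r, although no one in this model lives
# a single day longer because of wine.
```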

As the fifth and final step, a conclusion must be drawn. You have to be objective enough to look directly at the result and accept the null hypothesis if the experimental results do not fit the prediction, instead of trying to manipulate the experiment to show what you had originally anticipated. This objectivity can best be achieved by documenting and sharing your work with peers who will then try to recreate your experiment under the described conditions.

The process known as the Scientific Method outlines a series of steps for answering questions, but few scientists adhere rigidly to this prescription. Science is a less structured process than most people realize. Like other intellectual activities, the best science is a process of minds that are creative, intuitive, imaginative, and social. Perhaps science is distinguished by its conviction that natural phenomena, including the processes of life, have natural causes—and by its obsession with evidence. Scientists are generally skeptics. —Neil A. Campbell, Biology
Science is a collaborative enterprise spanning the generations. When it allows us to see the far side of some new horizon, we remember those who prepared the way […] —Carl Sagan, Cosmos: Blues for a Red Planet

With these ideas in mind, we can better understand why the superstitions of the Middle Ages and ancient times were so popular. People not only had all the difficulties we face today in discovering the truth, but they also had no scientific method on which to rely.

Interestingly, when examining our process of cognition, we discover that it is based on principles that are very similar to the scientific method. We make observations, discover unknown information, integrate that into existing knowledge, and try to make connections (form a concept). In order to be sure that our thought process was correct, we reflect upon it, maybe even multiple times, using multiple experiences or the help of others. Read more in Philosophy for Heroes: Knowledge.

Besides external factors, an additional reason the Scientific Revolution started in the West might have been the different philosophical approach to the universe. While Eastern philosophy is focused on a holistic view of the world, in the Western world, the universe was seen as something constructed, and researchers saw it as their task to uncover “God’s plan”—like a machine: if you only knew its parts, you would know the whole thing. In order to manage the complexity of nature, they divided the universe into entities and studied them separately. The scientific method even demands that the observer be separate from the universe, so as not to disturb the experiment. It was only with the dawn of quantum theory that science re-integrated a more holistic view of the universe. Read more in Philosophy for Heroes: Knowledge.

Summary

Ontology and epistemology are simultaneous—what exists and how we know it form a foundation of philosophy.
Large-scale war with mass-produced weaponry led to similar injuries on many battlefields. This allowed systematic medical research with different treatments.
A method based on trial and error is difficult to apply to future situations if there is no comprehension of what actually worked. In medicine, because of the placebo effect, even without knowing what worked, a positive belief alone can produce results. Likewise, without such a belief, even a comprehension of what worked can lead to a treatment failure.
Before the scientific method, research was limited by bias, a reliance on supernatural explanations and premature conclusions, the absence of a scientific community, and the lack of willingness to admit failures.
What was new about the scientific method was that this process was formalized and enhanced by peer review from a scientific community in order to reduce bias. Not only can your current peers test your conclusions, but all future scientists can also take your paper from the archives and retest its assumptions. Proper documentation also includes proper citations. A study whose results you have used might turn out to be erroneous; if you have properly cited that study, other scientists can correct your work more easily. This openness—having the courage to admit mistakes—is the real driver of scientific progress. Instead of building a knowledge hierarchy of things that are possible to prove, science constructs a knowledge hierarchy where each part openly states under which conditions it would be wrong. This way, any scientific theory can be proven false by experiment, so any theory building upon other theories always carries with it a whole tree of falsifiable experiments as a prerequisite. In that regard, the start of the rigorous application of the scientific method was as significant as the invention of writing. While researchers published books before, there was no process in place to organize this knowledge or access it in a structured manner, like tracing references back to the original study. With the scientific method, scientists were able to efficiently organize knowledge, and to trust and build upon the results of other scientists for the first time in history.
