The Death of the "1800-Calorie ADA Diet"

Irl B. Hirsch
Apr 1, 2002; 20:
Editorials




The Potential of Group Visits in Diabetes Care

Andrew M. Davis
Apr 1, 2008; 26:58-62
Feature Articles




Clarifying the Role of Insulin in Type 2 Diabetes Management

John R. White
Jan 1, 2003; 21:
Feature Articles




Therapeutic Inertia is a Problem for All of Us

Stephen Brunton
Apr 1, 2019; 37:105-106
Editorials




Diapression: An Integrated Model for Understanding the Experience of Individuals With Co-Occurring Diabetes and Depression

Paul Ciechanowski
Apr 1, 2011; 29:43-49
Feature Articles




Self-Monitoring of Blood Glucose: The Basics

Evan M. Benjamin
Jan 1, 2002; 20:
Practical Pointers




Opportunities and Challenges for Biosimilars: What's on the Horizon in the Global Insulin Market?

Lisa S. Rotenstein
Oct 1, 2012; 30:138-150
Features




Building Therapeutic Relationships: Choosing Words That Put People First

Jane K. Dickinson
Jan 1, 2017; 35:51-54
Commentary




Amylin Replacement With Pramlintide in Type 1 and Type 2 Diabetes: A Physiological Approach to Overcome Barriers With Insulin Therapy

John B. Buse
Jul 1, 2002; 20:
Feature Articles




The Disparate Impact of Diabetes on Racial/Ethnic Minority Populations

Edward A. Chow
Jul 1, 2012; 30:130-133
Diabetes Advocacy




What's So Tough About Taking Insulin? Addressing the Problem of Psychological Insulin Resistance in Type 2 Diabetes

William H. Polonsky
Jul 1, 2004; 22:147-150
Practical Pointers




A Real-World Approach to Insulin Therapy in Primary Care Practice

Irl B. Hirsch
Apr 1, 2005; 23:78-86
Practical Pointers




The Heroic Leadership Imperative

Allison, S. T. & Goethals, G. R. (2020). The heroic leadership imperative: How leaders inspire and mobilize change. West Yorkshire: Emerald. Our next book describes a new principle that we call the heroic leadership imperative. We show how leaders who fulfill the imperative will inspire followers and initiate social change. The imperative consists of …




10 Examples of Heroism Arising From the COVID-19 Pandemic

By Scott T. Allison In any tragedy or crisis, you will see many people standing out and stepping up to save lives and make the world a better place. These heroic individuals can range from leaders of nations to ordinary citizens who rise to the occasion to help others in need. During this COVID-19 pandemic, …




The Miniseries ‘Devs’ Delivers a Delicious Dose of Heroism and Villainy

By Scott T. Allison Devs is the ideal TV mini-series for people to sink their teeth into, for many reasons: (1) It’s both science and science-fiction; (2) it’s a brilliant mix of psychology, philosophy, religion, and technology; (3) it tantalizes us with the mysteries of love, life, death, time, and space; and (4) it features a …




Heroism Science: Call for Papers, Special Issue: The Heroism of Whistleblowers

Heroism Science: Call for Papers, Special Issue The Heroism of Whistleblowers Edited by Ari Kohen, Brian Riches, and Matt Langdon Whistleblowers speak up with “concerns or information about wrongdoing inside organizations and institutions.” As such, whistleblowing “can be one of the most important and difficult forms of heroism in modern society” (Brown, 2016, p. 1). …




The Innovation Dilemma

"If it ain't broken, don't fix it." Sound advice, but limited to situations where "fixing it" only entails restoring past performance. In contrast, innovations entail substantive improvements over the past. Innovations are not just corrections of past mistakes, but progress towards a better future.

However, innovations often present a challenging dilemma to decision makers. Many decisions require choosing between two options, one of which promises a better outcome but is markedly more uncertain. In these situations the decision maker faces an "innovation dilemma."

The innovation dilemma arises in many contexts. Here are a few examples.

Technology. New and innovative technologies are often advocated because of their purported improvements on existing products or methods. However, what is new is usually less well-known and less widely tested than what is old. The range of possible adverse (or favorable) surprises of an innovative technology may exceed the range of surprise for a tried-and-true technology. The analyst who must choose between innovation and convention faces an innovation dilemma.

Investment. The economic investor faces an innovation dilemma when choosing between investing in a promising but unknown new start-up and investing in a well-known existing firm.

Auction. "Nothing ventured, nothing gained" is the motto of the risk-taker, while the risk-avoider responds: "Nothing ventured, nothing lost". The innovation dilemma is embedded in the choice between these two strategies. Consider for example the "winner's curse" in auction theory. You can make a financial bid for a valuable piece of property, which will be sold to the highest bidder. You have limited information about the other bidders and about the true value of the property. If you bid high you might win the auction but you might also pay more than the property is worth. Not bidding is risk-free because it avoids the purchase. The choice between a high bid and no bid is an innovation dilemma.

Employer decision. An employer must decide whether or not to replace a current satisfactory employee with a new candidate whose score on a standardized test was high. A high score reflects great ability. However, the score also contains a random element, so a high score may result from chance, and not reflect true ability. The innovation dilemma is embedded in the employer's choice between the current adequate employee and a high-scoring new candidate.
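The chance element in the employer's dilemma can be made concrete with a small simulation (a sketch with hypothetical numbers: true abilities and test noise both drawn from normal distributions). Because the employer selects the maximum observed score, the selection tends to pick up positive noise, so the top scorer's score systematically overstates their true ability:

```python
import random

random.seed(0)

def top_scorer_gap(n_candidates=50, noise_sd=1.0, trials=2000):
    """Average gap between the top scorer's observed score and their
    true ability. A positive gap means the test overstates the winner."""
    total = 0.0
    for _ in range(trials):
        # Hypothetical model: ability ~ N(0, 1), score = ability + noise.
        abilities = [random.gauss(0, 1) for _ in range(n_candidates)]
        scores = [a + random.gauss(0, noise_sd) for a in abilities]
        best = max(range(n_candidates), key=lambda i: scores[i])
        total += scores[best] - abilities[best]
    return total / trials
```

The gap is reliably positive: the high score is genuinely informative about ability, yet inflated, which is exactly what makes the choice against a known-adequate employee a dilemma.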

Natural resource exploitation. Permitting the extraction of offshore petroleum resources may be productive in terms of petroleum yield but may also present officials with significant uncertainty about environmental consequences.

Public health. Implementation of a large-scale immunization program may present policy officials with worries about uncertain side effects.

Agricultural policy. New technologies promise improved production efficiency or new consumer choices, but with uncertain benefits and costs and potential unanticipated adverse effects resulting from use of manufactured inputs such as fertilizers, pesticides, and machinery, and, more recently, genetically engineered seed varieties and information technology. (I am indebted to L. Joe Moffitt and Craig Osteen for these examples in natural resources, public health and agriculture.)

An essay like this one should - according to custom - end with a practical prescription: What to do about the innovation dilemma? You need to make a decision - a choice between options - and you face an innovation dilemma. How to choose? All I'll say is that the first step is to identify what you need to achieve from this decision. Recognizing the vast uncertainties which accompany the decision, choose the option which achieves the required outcome over the largest range of uncertain contingencies.

If you want more of an answer than that, consult your favorite decision theory (like info-gap theory, for instance).
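A minimal numerical sketch of this prescription, in the spirit of info-gap robust-satisficing (all payoffs, uncertainty weights, and requirements below are hypothetical): each option's robustness is the largest uncertainty horizon at which its worst-case payoff still meets the requirement, and one chooses the option with the greatest robustness.

```python
def robustness(nominal, s, required):
    """Largest uncertainty horizon h at which the worst-case payoff
    (nominal - s*h) still meets the requirement; 0 if even the
    nominal payoff falls short."""
    return max(0.0, (nominal - required) / s)

# Hypothetical options: the innovation promises more but is more uncertain.
options = {
    "innovation": {"nominal": 10.0, "s": 4.0},  # better, wider surprises
    "convention": {"nominal": 7.0,  "s": 1.0},  # worse, well understood
}

def robust_choice(required):
    """Pick the option whose required outcome survives the widest
    range of uncertain contingencies."""
    return max(options, key=lambda k: robustness(options[k]["nominal"],
                                                 options[k]["s"], required))
```

Note the reversal: for a modest requirement the convention is more robust, while a demanding requirement can only be met by the innovation - the innovation dilemma in miniature.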

I will conclude by drawing a parallel between the innovation dilemma and one of the oldest quandaries in political philosophy. In The Evolution of Political Thought C. Northcote Parkinson explains the historically recurring tension between freedom and equality.

Freedom. People have widely varying interests and aptitudes. Hence a society that offers broad freedom for individuals to exploit their abilities, will also develop a wide spread of wealth, accomplishment, and status. Freedom enables individuals to explore, invent, discover, and create. Freedom is the recipe for innovation. Freedom induces both uncertainty and inequality.

Equality. People have widely varying interests and aptitudes. Hence a society that strives for equality among its members can achieve this by enforcing conformity and by transferring wealth from rich to poor. The promise of a measure of equality is a guarantee of a measure of security, a personal and social safety net. Equality reduces both uncertainty and freedom.

The dilemma is that a life without freedom is hardly human, but freedom without security is the jungle. And life in the jungle, as Hobbes explained, is "solitary, poor, nasty, brutish and short".




The Pains of Progress

To measure time by how little we change is to find how little we've lived,
but to measure time by how much we've lost is to wish we hadn't changed at all. - Andre Aciman

The last frontier is not the Antarctic, or the oceans, or outer space. The last frontier is The Unknown. We mentioned in an earlier essay that uncertainty - which makes baseball and life interesting - is inevitable in the human world. Life will continue to be interesting as long as the world is rich in unknowns, waiting to be discovered. Progress is possible if propitious discoveries can be made. Progress, however, comes with costs.

The emblem of my university entwines a billowing smokestack and a cogwheel in the first letter of the institution's name. When this emblem was adopted (probably in 1951) these were optimistic symbols of progress. Cogwheels are no longer 'hi-tech' (though we still need them), and smoke has been banished from polite company. But our emblem is characteristic of industrial society which has seared Progress on our hearts and minds.

Progress is accompanied by painful tensions. On the one hand, progress is nurtured by stability, cooperation, and leisure. On the other hand, progress grows out of change, conflict, and stress. A society's progressiveness reflects its balance of each of these three pairs of attributes. In the most general terms, progressiveness reflects social and individual attitudes to uncertainty.

Let's consider the three pairs of attributes one at a time.

Change and stability. Not all change is progress, but all progress is change. Change is necessary for progress, by definition, and progress can be very disruptive. The disruptiveness sometimes arises from unexpected consequences. J.B.S. Haldane wrote in 1923 that "the late war is only an example of the disruptive result that we may constantly expect from the progress of science." On the other hand, progressives employ and build on existing capabilities. The entrepreneur depends on stable property rights before risking venture capital. The existing legal system is used to remove social injustice. Watt's steam engine extended Newcomen's more primitive model. The new building going up on campus next to my office is very disruptive, but the construction project depends on the continuity of the university despite the drilling and dust. Even revolutionaries exploit and react against the status quo, which must exist for a revolutionary to be able to revolt. (One can't revolt if nothing is revolting.) Progress grows from a patch of opportunity in a broad bed of certainty, and spreads out in unanticipated directions.

Conflict and cooperation. Conflict between vested interests and innovators is common. Watt protected his inventions with extensive patents which may have actually retarded the further development and commercialization of steam power. Conflict is also a mechanism for selecting successful ideas. Darwinian evolution and its social analogies proceed by more successful adaptations replacing less successful ones. On the other hand, cooperation enables specialization and expertise which are needed for innovation. The tool-maker cooperates with the farmer so better tools can be made more quickly, enhancing the farmer's productivity and the artisan's welfare. Conflicts arise over what constitutes progress. Stem cell research, genetic engineering, nuclear power technology: progress or plague? Cooperative collective decision making enables the constructive resolution of these value-based conflicts.

Stress and leisure. Challenge, necessity and stress all motivate innovation. If you have no problems, you are unlikely to be looking for solutions. On the other hand, the leisure to think and tinker is a great source of innovation. Subsistence societies have no resources for invention. In assessing the implications of industrial efficiency, Bertrand Russell praised idleness in 1932, writing: "In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving ...." Stress is magnified by the unknown consequences of the stressor, while leisure is possible only in the absence of fear.

New replaces Old. Yin and yang are complementary opposites that dynamically interact. In Hegel's dialectic, tension between contradictions is resolved by synthesis. Human history is written by the victors, who sometimes hardly mention those swept into Trotsky's "dustbin of history". "In the evening resides weeping; in the morning: joy." (Psalm 30:6). Change and stability; conflict and cooperation; stress and leisure.

No progress without innovation; no innovation without discovery; no discovery without the unknown; no unknown without fear. There is no progress without pain.




Beware the Rareness Illusion When Exploring the Unknown

Here's a great vacation idea. Spend the summer roaming the world in search of the 10 lost tribes of Israel, exiled from Samaria by the Assyrians 2700 years ago (2 Kings 17:6). Or perhaps you'd like to search for Prester John, the virtuous ruler of a kingdom lost in the Orient? Or would you rather trace the gold-laden kingdom of Ophir (1 Kings 9:28)? Or do you prefer the excitement of tracking the Amazons, that nation of female warriors? Or perhaps the naval power mentioned by Plato, operating from the island of Atlantis? Or how about unicorns, or the fountain of eternal youth? The Unknown is so vast that the possibilities are endless.

Maybe you don't believe in unicorns. But Plato evidently "knew" about the island of Atlantis. The conquest of Israel is known from Assyrian archeology and from the Bible. Does the fact that you've never seen a Reubenite or a Naphtalite (or a unicorn) mean that they don't exist?

It is true that when something really does not exist, one might spend a long time futilely looking for it. Many people have spent enormous energy searching for lost tribes, lost gold, and lost kingdoms. Why is it so difficult to decide that what you're looking for really isn't there? The answer, ironically, is that the world has endless possibilities for discovery and surprise.

Let's skip vacation plans and consider some real-life searches. How long should you (or the Libyans) look for Muammar Qaddafi? If he's not in the town of Surt, maybe he's in Bani Walid, or Algeria, or Timbuktu? How do you decide he cannot be found? Maybe he was pulverized by a NATO bomb. It's urgent to find the suicide bomber in the crowded bus station before it's too late - if he's really there. You'd like to discover a cure for AIDS, or a method to halt the rising global temperature, or a golden investment opportunity in an emerging market, or a proof of the parallel postulate of Euclidean geometry.

Let's focus our question. Suppose you are looking for something, and so far you have only "negative" evidence: it's not here, it's not there, it's not anywhere you've looked. Why is it so difficult to decide, conclusively and confidently, that it simply does not exist?

This question is linked to a different question: how to make the decision that "it" (whatever it is) does not exist. We will focus on the "why" question, and leave the "how" question to students of decision theories such as statistics, fuzzy logic, possibility theory, Dempster-Shafer theory and info-gap theory. (If you're interested in an info-gap application to statistics, here is an example.)

Answers to the "why" question can be found in several domains.

Psychology provides some answers. People can be very goal oriented, stubborn, and persistent. Marco Polo didn't get to China on a 10-hour plane flight. The round trip took him 24 years, and he didn't travel business class.

Ideology is a very strong motivator. When people believe something strongly, it is easy for them to ignore evidence to the contrary. Furthermore, for some people, the search itself is valued more than the putative goal.

The answer to the "why" question that I will focus on is found by contemplating The Endless Unknown. It is so vast, so unstructured, so, well ..., unknown, that we cannot calibrate our negative evidence to decide that whatever we're looking for just ain't there.

I'll tell a true story.

I was born in the US and my wife was born in Israel, but our life-paths crossed, so to speak, before we were born. She had a friend whose father was from Europe and lived for a while - before the friend was born - with a cousin of his in my home town. This cousin was - years later - my 3rd grade teacher. My school teacher was my future wife's friend's father's cousin.

Amazing coincidence. This convoluted sequence of events is certainly rare. How many of you can tell the very same story? But wait a minute. This convoluted string of events could have evolved in many many different ways, each of which would have been an equally amazing coincidence. The number of similar possible paths is namelessly enormous, uncountably humongous. In other words, potential "rare" events are very numerous. Now that sounds like a contradiction (we're getting close to some of Zeno's paradoxes, and Aristotle thought Zeno was crazy). It is not a contradiction; it is only a "rareness illusion" (something like an optical illusion). The specific event sequence in my story is unique, which is the ultimate rarity. We view this sequence as an amazing coincidence because we cannot assess the number of similar sequences. Surprising strings of events occur not infrequently because the number of possible surprising strings is so unimaginably vast. The rareness illusion is the impression of rareness arising from our necessary ignorance of the vast unknown. "Necessary" because, by definition, we cannot know what is unknown. "Vast" because the world is so rich in possibilities.
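The birthday paradox is a compact, standard illustration of the same effect: any specific pair of people sharing a birthday is rare, but the number of possible pairings is large, so some coincidence is likely. A quick simulation (assuming 365 equally likely birthdays):

```python
import random

random.seed(1)

def some_shared_birthday(n_people=23, trials=10000):
    """Fraction of simulated rooms in which at least two of the
    n_people share a birthday."""
    hits = 0
    for _ in range(trials):
        days = [random.randrange(365) for _ in range(n_people)]
        if len(set(days)) < n_people:  # a duplicate day exists
            hits += 1
    return hits / trials
```

With only 23 people the probability of some shared birthday is slightly over one half, even though each particular pairing is roughly a 1-in-365 event - a rareness illusion in miniature.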

The rareness illusion is a false impression, a mistake. For instance, it leads people to wrongly goggle at strings of events - rare in themselves - even though "rare events" are numerous and "amazing coincidences" occur all the time. An appreciation of the richness and boundlessness of the Unknown is an antidote for the rareness illusion.

Recognition of the rareness illusion is the key to understanding why it is so difficult to confidently decide, based on negative evidence, that what you're looking for simply does not exist.

One might be inclined to reason as follows. If you're looking for something, then look very thoroughly, and if you don't find it, then it's not there. That is usually sound and sensible advice, and often "looking thoroughly" will lead to discovery.

However, the number of ways that we could overlook something that really is there is enormous. It is thus very difficult to confidently conclude that the search was thorough and that the object cannot be found. Take the case of your missing house keys. They dropped from your pocket in the car, or on the sidewalk and somebody picked them up, or you left them in the lock when you left the house, or or or .... Familiarity with the rareness illusion makes it very difficult to decide that you have searched thoroughly. If you think that the only contingencies not yet explored are too exotic to be relevant (a raven snatched them while you were daydreaming about that enchanting new employee), then think again, because you've been blinded by a rareness illusion. The number of such possibilities is so vastly unfathomable that you cannot confidently say that all of them are collectively negligible. Recognition of the rareness illusion prevents you from confidently concluding that what you are seeking simply does not exist.

Many quantitative tools grapple with the rareness illusion. We mentioned some decision theories earlier. But because the rareness illusion derives from our necessary ignorance of the vast unknown, one must always beware.

Looking for an exciting vacation? The Endless Unknown is the place to go. 




The End of Science?


Science is the search for and study of patterns and laws in the natural and physical worlds. Could that search become exhausted, like an over-worked coal vein, leaving nothing more to be found? Could science end? After briefly touching on several fairly obvious possible end-games for science, we explore how the vast Unknown could undermine - rather than underlie - the scientific enterprise. The possibility that science could end is linked to the reason that science is possible at all. The path we must climb in this essay is steep, but the (in)sight is worth it.

Science is the process of discovering unknowns, one of which is the extent of Nature's secrets. It is possible that the inventory of Nature's unknowns is finite or conceivably even nearly empty. However, a look at open problems in science, from astronomy to zoology, suggests that Nature's storehouse of surprises is still chock full. So, from this perspective, the answer to the question 'Could science end?' is conceivably 'Yes', but most probably 'No'.

Another possible 'Yes' answer is that science will end by reaching the limit of human cognitive capability. Nature's storehouse of surprises may never empty out, but the rate of our discoveries may gradually fall, reaching zero when scientists have figured out everything that humans are able to understand. Possible, but judging from the last 400 years, it seems that we've only begun to tap our mind's expansive capability.

Or perhaps science - a product of human civilization - will end due to historical or social forces. The simplest such scenario is that we blow ourselves to smithereens. Smithereens can't do science. Another more complicated scenario is Oswald Spengler's theory of cyclical history, whereby an advanced society - such as Western civilization - decays and disappears, science disappearing with it. So again a tentative 'Yes'. But this might only be an interruption of science if later civilizations resume the search.

We now explore the main mechanism by which science could become impossible. This will lead to deeper understanding of the delicate relation between knowledge and the Unknown and to why science is possible at all.

One axiom of science is that there exist stable and discoverable laws of nature. As the philosopher A.N. Whitehead wrote in 1925: "Apart from recurrence, knowledge would be impossible; for nothing could be referred to our past experience. Also, apart from some regularity of recurrence, measurement would be impossible." (Science and the Modern World, p.36). The stability of phenomena is what allows a scientist to repeat, study and build upon the work of other scientists. Without regular recurrence there would be no such thing as a discoverable law of nature.

However, as David Hume explained long ago in An Enquiry Concerning Human Understanding, one can never empirically prove that regular recurrence will hold in the future. By the time one tests the regularity of the future, that future has become the past. The future can never be tested, just as one can never step on the rolled up part of an endless rug unfurling always in front of you.

Suppose the axiom of Natural Law turns out to be wrong, or suppose Nature comes unstuck and its laws start "sliding around", changing. Science would end. If regularity, patterns, and laws no longer exist, then scientific pursuit of them becomes fruitless.

Or maybe not. Couldn't scientists search for the laws by which Nature "slides around"? Quantum mechanics seems to do just that. For instance, when a polarized photon impinges on a polarizing crystal, the photon will either be entirely absorbed or entirely transmitted, as Dirac explained. The photon's fate is not determined by any law of Nature (if you believe quantum mechanics). Nature is indeterminate in this situation. Nonetheless, quantum theory very accurately predicts the probability that the photon will be transmitted, and the probability that it will be absorbed. In other words, quantum mechanics establishes a deterministic law describing Nature's indeterminism.
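This deterministic law over indeterminate outcomes can be stated in one line. For a photon polarized at angle theta to the crystal's axis, quantum mechanics cannot say which fate awaits that particular photon, but it predicts the transmission probability exactly (Malus's law):

```python
import math

def transmission_probability(theta):
    """Probability that a photon polarized at angle theta (radians)
    to the polarizing crystal's axis is transmitted rather than
    absorbed (Malus's law)."""
    return math.cos(theta) ** 2
```

At theta = 0 transmission is certain, at 90 degrees absorption is certain, and at 45 degrees the photon is transmitted half the time - yet each individual photon's fate remains undetermined.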

Suppose Nature's indeterminism itself becomes lawless. Is that conceivable? Could Nature become so disorderly, so confused and uncertain, so "out of joint: O, cursed spite", that no law can "set it right"? The answer is conceivably 'Yes', and if this happens then scientists are all out of a job. To understand how this is conceivable, one must appreciate the Unknown at its most rambunctious.

Let's take stock. We can identify attributes of Nature that are necessary for science to be possible. The axiom of Natural Law is one necessary attribute. The successful history of science suggests that the axiom of Natural Law has held firmly in the past. But that does not determine what Nature will be in the future.

In order to understand how Natural Law could come unstuck, we need to understand how Natural Law works (today). When a projectile, say a baseball, is thrown from here to there, its progress at each point along its trajectory is described, scientifically, in terms of its current position, direction of motion, and attributes such as its shape, mass and surrounding medium. The Laws of Nature enable the calculation of the ball's progress by solving a mathematical equation whose starting point is the current state of the ball.

We can roughly describe most Laws of Nature as formulations of problems - e.g. mathematical equations - whose input is the current and past states of the system in question, and whose solution predicts an outcome: the next state of the system. What is law-like about this is that these problems - whose solution describes a progression, like the flight of a baseball - are constant over time. The scientist calculates the baseball's trajectory by solving the same problem over and over again (or all at once with a differential equation). Sometimes the problem is hard to solve, so scientists are good mathematicians, or they have big computers (or both). But solvable they are.

Let's remember that Nature is not a scientist, and Nature does not solve a problem when things happen (like baseballs speeding to home plate). Nature just does it. The scientist's Law is a description of Nature, not Nature itself.

There are other Laws of Nature for which we must modify the previous description. In these cases, the Law of Nature is, as before, the formulation of a problem. Now, however, the solution of the problem not only predicts the next state of the system, but it also re-formulates the problem that must be solved at the next step. There is a sort of feedback: the next state of the system alters the rule by which subsequent progress is made. For instance, when an object falls towards earth from outer space, the law of nature that determines the motion of the object depends on the gravitational attraction. The gravitational attraction, in turn, increases as the object gets closer. Thus the problem to be solved changes as the object moves. Problems like these tend to be more difficult to solve, but that's the scientist's problem (or pleasure).
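A sketch of this feedback type of law, using the falling-object example (simple Euler time-stepping; the step size and starting radius are purely illustrative): at each step the acceleration - the problem to be solved - is re-derived from the object's current position.

```python
G_M = 3.986e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # Earth's radius, m

def fall_time(r0, dt=1.0):
    """Time for an object released from rest at radius r0 (m) to reach
    the surface. The 'law' (the acceleration) is re-formulated from the
    current state at every step, because gravity strengthens as the
    object falls."""
    r, v, t = r0, 0.0, 0.0
    while r > R_EARTH:
        a = -G_M / r**2   # the problem changes as the object moves
        v += a * dt
        r += v * dt
        t += dt
    return t
```

Each pass through the loop solves a slightly different problem than the last. So long as every such problem is solvable, the scientist can still predict; the worry raised next is the case in which none of them are.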

Now we can appreciate how Nature might become lawlessly unstuck. Let's consider the second type of Natural Law, where the problem - the Law itself - gets modified by the evolving event. Let's furthermore suppose that the problem is not simply difficult to solve, but that no solution can be obtained in a finite amount of time (mathematicians have lots of examples of problems like this). As before, Nature itself does not solve a problem; Nature just does it. But the scientist is now in the position that no prediction can be made, no trajectory can be calculated, no model or description of the phenomenon can be obtained. No explicit problem statement embodying a Natural Law exists. This is because the problem to be solved evolves continuously from previous solutions, and none of the sequence of problems can be solved. The scientist's profession will become frustrating, futile and fruitless.

Nature becomes lawlessly unstuck, and science ends, if all Laws of Nature become of the modified second type. The world itself will continue because Nature solves no problems, it just does its thing. But the way it does this is now so raw and unruly that no study of nature can get to first base.

Sound like science fiction (or nightmare)? Maybe. But as far as we know, the only thing between us and this new state of affairs is the axiom of Natural Law. Scientists assume that Laws exist and are stable because past experience, together with our psychological makeup (which itself is evolutionary past experience), very strongly suggests that regular recurrence can be relied upon. But if you think that the scientists can empirically prove that the future will continue to be lawful, like the past, recall that all experience is past experience. Recall the unfurling-rug metaphor (by the time we test the future it becomes the past), and make an appointment to see Mr Hume.

Is science likely to become fruitless or boring? No. Science thrives on an Unknown that is full of surprises. Science - the search for Natural Laws - thrives even though the existence of Natural Law can never be proven. Science thrives precisely because we can never know for sure that science will not someday end. 




The Language of Science and the Tower of Babel


And God said: Behold one people with one language for them all ... and now nothing that they venture will be kept from them. ... [And] there God mixed up the language of all the land. (Genesis, 11:6-9)

"Philosophy is written in this grand book the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and to read the alphabet in which it is composed. It is written in the language of mathematics." Galileo Galilei

Language is power over the unknown. 

Mathematics is the language of science, and computation is the modern voice in which this language is spoken. Scientists and engineers explore the book of nature with computer simulations of swirling galaxies and colliding atoms, crashing cars and wind-swept buildings. The wonders of nature and the powers of technological innovation are displayed on computer screens, "continually open to our gaze." The language of science empowers us to dispel confusion and uncertainty, but only with great effort do we change the babble of sounds and symbols into useful, meaningful and reliable communication. How we do that depends on the type of uncertainty against which the language struggles.

Mathematical equations encode our understanding of nature, and Galileo exhorts us to learn this code. One challenge here is that a single equation represents an infinity of situations. For instance, the equation describing a flowing liquid captures water gushing from a pipe, blood coursing in our veins, and a droplet splashing from a puddle. Gazing at the equation is not at all like gazing at the droplet. Understanding grows by exposure to pictures and examples. Computations provide numerical examples of equations that can be realized as pictures. Computations can simulate nature, allowing us to explore at our leisure.

Two questions face the user of computations: Are we calculating the correct equations? Are we calculating the equations correctly? The first question expresses the scientist's ignorance - or at least uncertainty - about how the world works. The second question reflects the programmer's ignorance or uncertainty about the faithfulness of the computer program to the equations. Both questions deal with the fidelity between two entities. However, the entities involved are very different and the uncertainties are very different as well.

The scientist's uncertainty is reduced by the ingenuity of the experimenter. Equations make predictions that can be tested by experiment. For instance, Galileo predicted that small and large balls will fall at the same rate, as he is reported to have tested from the tower of Pisa. Equations are rejected or modified when their predictions don't match the experimenter's observation. The scientist's uncertainty and ignorance are whittled away by testing equations against observation of the real world. Experiments may be extraordinarily subtle or difficult or costly because nature's unknown is so endlessly rich in possibilities. Nonetheless, observation of nature remorselessly cuts false equations from the body of scientific doctrine. God speaks through nature, as it were, and "the Eternal of Israel does not deceive or console." (1 Samuel, 15:29). When this observational cutting and chopping is (temporarily) halted, the remaining equations are said to be "validated" (but they remain on the chopping block for further testing).

The programmer's life is, in one sense, more difficult than the experimenter's. Imagine a huge computer program containing millions of lines of code, the accumulated fruit of thousands of hours of effort by many people. How do we verify that this computation faithfully reflects the equations that have ostensibly been programmed? Of course the code has been checked again and again for typos, logical faults, and syntactic errors. Very clever methods are available for code verification. Nonetheless, programmers are only human, and some infidelity may slip through. What remorseless knife does the programmer have with which to verify that the equations are correctly calculated? Testing computation against observation does not allow us to distinguish between errors in the equations, errors in the program, and compensatory errors in both.

The experimenter compares an equation's prediction against an observation of nature. Like the experimenter, the programmer compares the computation against something. However, for the programmer, the sharp knife of nature is not available. In special cases the programmer can compare against a known answer. More frequently the programmer must compare against other computations which have already been verified (by some earlier comparison). The verification of a computation - as distinct from the validation of an equation - can only use other high-level human-made results. The programmer's comparisons can only be traced back to other comparisons. It is true that the experimenter's tests are intermediated by human artifacts like calipers or cyclotrons. Nonetheless, bedrock for the experimenter is the "reality out there". The experimenter's tests can be traced back to observations of elementary real events. The programmer does not have that recourse. One might say that God speaks to the experimenter through nature, but the programmer has no such Voice upon which to rely.
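The special case of comparing against a known answer can be sketched in a few lines. The example below is purely illustrative (a toy decay equation with a known analytic solution, not any particular production code): a simulation is "verified" by checking that its error against the exact answer shrinks as expected when the step size shrinks.

```python
import math

def simulate_decay(y0, k, dt, steps):
    """Forward-Euler simulation of dy/dt = -k*y.

    This is the 'computation' whose fidelity to the equation
    we want to verify.
    """
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)
    return y

# Verification against the known analytic answer y(t) = y0 * exp(-k*t).
y0, k, t = 1.0, 0.5, 2.0
exact = y0 * math.exp(-k * t)

# Halving the step size should roughly halve the error for a
# first-order method; that convergence trend is the evidence that
# we are "calculating the equations correctly".
for steps in (100, 200, 400):
    approx = simulate_decay(y0, k, t / steps, steps)
    print(steps, abs(approx - exact))
```

Note that this comparison says nothing about whether the decay equation itself describes the world; that is the experimenter's question, not the programmer's.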

The tower built of old would have reached the heavens because of the power of language. That tower was never completed because God turned talk into babble and dispersed the people across the land. Scholars have argued whether the story prescribes a moral norm, or simply describes the way things are, but the power of language has never been disputed.

The tower was never completed, just as science, it seems, has a long way to go. Genius, said Edison, is 1 percent inspiration and 99 percent perspiration. A good part of the sweat comes from getting the language right, whether mathematical equations or computer programs.

Part of the challenge is finding order in nature's bubbling variety. Each equation captures a glimpse of that order, adding one block to the structure of science. Furthermore, equations must be validated, which is only a stop-gap. All blocks crumble eventually, and all equations are fallible and likely to be falsified.

Another challenge in science and engineering is grasping the myriad implications that are distilled into an equation. An equation compresses and summarizes, while computer simulations go the other way, restoring detail and specificity. The fidelity of a simulation to the equation is usually verified by comparing against other simulations. This is like the dictionary paradox: using words to define words.

It is by inventing and exploiting symbols that humans have constructed an orderly world out of the confusing tumult of experience. With symbols, like with blocks in the tower, the sky is the limit.





Picking a Theory is Like Building a Boat at Sea


"We are like sailors who on the open sea must reconstruct their ship
 but are never able to start afresh from the bottom." 
Otto Neurath's analogy in the words of Willard V. Quine

Engineers, economists, social planners, security strategists, and others base their plans and decisions on theories. They often argue long and hard over which theory to use. Is it ever right to use a theory that we know is empirically wrong, especially if a true (or truer) theory is available? Why is it so difficult to pick a theory?

Let's consider two introductory examples.

You are an engineer designing a robot. You must calculate the forces needed to achieve specified motions of the robotic arms. You can base these calculations on either of two theories. One theory assumes that an object comes to rest unless a force acts upon it. Let's call this axiom A. The other theory assumes that an object moves at constant speed unless a force acts upon it. Let's call this axiom G. Axiom A agrees with observation: Nothing moves continuously without the exertion of force; an object will come to rest unless you keep pushing it. Axiom G contradicts all observation; no experiment illustrates the perpetual motion postulated by the axiom. If all else is the same, which theory should you choose?

Axiom A is Aristotle's law of inertia, which contributed little to the development of mechanical dynamics. Axiom G is Galileo's law of inertia: one of the most fruitful scientific ideas of all time. Why is an undemonstrable assertion - axiom G - a good starting point for a theory?

Consider another example.

You are an economist designing a market-based policy to induce firms to reduce pollution. You will use an economic theory to choose between policies. One theory assumes that firms face pure competition, meaning that no single firm can influence market prices. Another theory provides agent-based game-theoretic characterization of how firms interact (without colluding) by observing and responding to price behavior of other firms and of consumers.

Pure competition is a stylized idealization (like axiom G). Game theory is much more realistic (like axiom A), but may obscure essential patterns in its massive detail. Which theory should you use?

We will not address the question of how to choose a theory upon which to base a decision. We will focus on the question: why is theory selection so difficult? We will discuss four trade-offs.

"Thanks to the negation sign, there are as many truths as falsehoods;
we just can't always be sure which are which." Willard V. Quine

The tension between right and right. The number of possible theories is infinite, and sometimes it's hard to separate the wheat from the chaff, as suggested by the quote from Quine. As an example, I have a book called A Modern Guide to Macroeconomics: An Introduction to Competing Schools of Thought by Snowdon, Vane and Wynarczyk. It's a wonderful overview of about a dozen theories developed by leading economic scholars, many of them Nobel Prize Laureates. The theories are all fundamentally different. They use different axioms and concepts and they compete for adoption by economists. These theories have been studied and tested upside down and backwards. However, economic processes are very complex and variable, and the various theories succeed in different ways or in different situations, so the jury is still out. The choice of a theory is no simple matter because many different theories can all seem right in one way or another.

"The fox knows many things, but the hedgehog knows one big thing." Archilochus

The fox-hedgehog tension. This aphorism by Archilochus metaphorically describes two types of theories (and two types of people). Fox-like theories are comprehensive and include all relevant aspects of the problem. Hedgehog-like theories, in contrast, skip the details and focus on essentials. Axiom A is fox-like because the complications of friction are acknowledged from the start. Axiom G is hedgehog-like because inertial resistance to change is acknowledged but the complications of friction are left for later. It is difficult to choose between these types of theories because it is difficult to balance comprehensiveness against essentialism. On the one hand, all relevant aspects of the problem should be considered. On the other hand, don't get bogged down in endless details. This fox-hedgehog tension can be managed by weighing the context, goals and implications of the decision. We won't expand on this idea since we're not considering how to choose a theory; we're only examining why it's a difficult choice. However, the idea of resolving this tension by goal-directed choice motivates the third tension.

"Beyond this island of meanings which in their own nature are true or false
lies the ocean of meanings to which truth and falsity are irrelevant." John Dewey

The truth-meaning tension. Theories are collections of statements like axioms A and G in our first example. Statements carry meaning, and statements can be either true or false. Truth and meaning are different. For instance, "Archilochus was a Japanese belly dancer" has meaning, but is not true. The quote from Dewey expresses the idea that "meaning" is a broader description of statements than "truth". All true statements mean something, but not all meaningful statements are true. That does not imply, however, that all untrue meaningful statements are false, as we will see.

We know the meanings of words and sentences from experience with language and life. A child learns the meanings of words - chair, mom, love, good, bad - by experience. Meanings are learned by pointing - this is a chair - and also by experiencing what it means to love or to be good or bad.

Truth is a different concept. John Dewey wrote that

"truths are but one class of meanings, namely, those in which a claim to verifiability by their consequences is an intrinsic part of their meaning. Beyond this island of meanings which in their own nature are true or false lies the ocean of meanings to which truth and falsity are irrelevant. We do not inquire whether Greek civilization was true or false, but we are immensely concerned to penetrate its meaning."

A true statement, in Dewey's sense, is one that can be confirmed by experience. Many statements are meaningful, even important and useful, but neither true nor false in this experimental sense. Axiom G is an example.

Our quest is to understand why the selection of a theory is difficult. Part of the challenge derives from the tension between meaning and truth. We select a theory for use in formulating and evaluating a plan or decision. The decision has implications: what would it mean to do this rather than that? Hence it is important that the meaning of the theory fit the context of the decision. Indeed, hedgehogs would say that getting the meaning and implication right is the essence of good decision making.

But what if a relevantly meaningful theory is unprovable or even false? Should we use a theory that is meaningful but not verifiable by experience? Should we use a meaningful theory that is even wrong? This quandary is related to the fox-hedgehog tension because the fox's theory is so full of true statements that its meaning may be obscured, while the hedgehog's bare-bones theory has clear relevance to the decision to be made, but may be either false or too idealized to be tested.

Galileo's axiom of inertia is an idealization that is unsupported by experience because friction can never be avoided. Axiom G assumes conditions that cannot be realized so the axiom can never be tested. Likewise, pure competition is an idealization that is rarely if ever encountered in practice. But these theories capture the essence of many situations. In practical terms, what it means to get the robotic arm from here to there is to apply net forces that overcome Galilean inertia. But actually designing a robot requires considering details of dissipative forces like friction. What it means to be a small business is that the market price of your product is beyond your control. But actually running a business requires following and reacting to prices in the store next door.

It is difficult to choose between a relevantly meaningful but unverifiable theory, and a true theory that is perhaps not quite what we mean.

The knowledge-ignorance tension. Recall that we are discussing theories in the service of decision-making by engineers, social scientists and others. A theory should facilitate the use of our knowledge and understanding. However, in some situations our ignorance is vast and our knowledge will grow. Hence a theory should also account for ignorance and be able to accommodate new knowledge.

Let's take an example from theories of decision. The independence axiom is fundamental in various decision theories, for instance in von Neumann-Morgenstern expected utility theory. It says that one's choices should be independent of irrelevant alternatives. Suppose you are offered the dinner choice between chicken and fish, and you choose chicken. The server returns a few minutes later saying that beef is also available. If you switch your choice from chicken to fish you are violating the independence axiom. You prefer beef less than both chicken and fish, so the beef option shouldn't alter the fish-chicken preference.

But let's suppose that when the server returned and mentioned beef, your physician advised you to reduce your cholesterol intake (so your preference for beef is lowest) which prompted your wife to say that you should eat fish at least twice a week because of vitamins in the oil. So you switch from chicken to fish. Beef is not chosen, but new information that resulted from introducing the irrelevant alternative has altered the chicken-fish preference.
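The dinner story can be sketched as a toy preference model (the menu items and utility numbers here are purely illustrative, chosen only to mirror the narrative):

```python
def choose(menu, utility):
    """Pick the option with the highest utility score."""
    return max(menu, key=utility)

# Before the server returns: chicken is preferred to fish.
u_before = {"chicken": 2, "fish": 1}.get
assert choose(["chicken", "fish"], u_before) == "chicken"

# The independence axiom says adding a never-chosen option (beef,
# lowest utility) must not flip the chicken-fish ranking:
u_with_beef = {"chicken": 2, "fish": 1, "beef": 0}.get
assert choose(["chicken", "fish", "beef"], u_with_beef) == "chicken"

# But in the story, mentioning beef triggered new information
# (cholesterol, fish oil) that changed the utilities themselves:
u_after = {"chicken": 2, "fish": 3, "beef": 0}.get
print(choose(["chicken", "fish", "beef"], u_after))  # fish
```

The axiom is not violated by the arithmetic of choice; it is undermined by the fact that introducing the "irrelevant" option changed the utility function itself.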

One could argue for the independence axiom by saying that it applies only when all relevant information (like considerations of cholesterol and fish oil) are taken into account. On the other hand, one can argue against the independence axiom by saying that new relevant information quite often surfaces unexpectedly. The difficulty is to judge the extent to which ignorance and the emergence of new knowledge should be central in a decision theory.

Wrapping up. Theories express our knowledge and understanding about the unknown and confusing world. Knowledge begets knowledge. We use knowledge and understanding - that is, theory - in choosing a theory. The process is difficult because it's like building a boat on the open sea, as Otto Neurath once said.





Jabberwocky. Or: Grand Unified Theory of Uncertainty???


Jabberwocky, Lewis Carroll's whimsical nonsense poem, uses made-up words to create an atmosphere and to tell a story. "Brillig", "frumious", "vorpal" and "uffish" have no lexical meaning, but they could have. The poem demonstrates that the realm of imagination exceeds the bounds of reality just as the set of possible words and meanings exceeds its real lexical counterpart.

Uncertainty thrives in the realm of imagination, incongruity, and contradiction. Uncertainty falls in the realm of science fiction as much as in the realm of science. People have struggled with uncertainty for ages and many theories of uncertainty have appeared over time. How many uncertainty theories do we need? Lots, and forever. Would we say that of physics? No, at least not forever.

Can you think inconsistent, incoherent, or erroneous thoughts? I can. (I do it quite often, usually without noticing.) For those unaccustomed to thinking incongruous thoughts, and who need a bit of help to get started, I can recommend thinking of "two meanings packed into one word like a portmanteau," like 'fuming' and 'furious' to get 'frumious' or 'snake' and 'shark' to get 'snark'.

Portmanteau words are a start. Our task now is portmanteau thoughts. Take for instance the idea of a 'thingk':

When I think a thing I've thought,
I have often felt I ought
To call this thing I think a "Thingk",
Which ought to save a lot of ink.

The participle is written "thingking",
(Which is where we save on inking,)
Because "thingking" says in just one word:
"Thinking of a thought thing." Absurd!

All this shows high-power abstraction.
(That highly touted human contraption.)
Using symbols with subtle feint,
To stand for something which they ain't.

Now that wasn't difficult: two thoughts at once. Now let those thoughts be contradictory. To use a prosaic example: thinking the unthinkable, which I suppose is 'unthingkable'. There! You did it. You are on your way to a rich and full life of thinking incongruities, fallacies and contradictions. We can hold in our minds thoughts of 4-sided triangles, parallel lines that intersect, and endless other seeming impossibilities from super-girls like Pippi Longstocking to life on Mars (some of which may actually be true, or at least possible).

Scientists, logicians, and saints are in the business of dispelling all such incongruities, errors and contradictions. Banishing inconsistency is possible in science because (or if) there is only one coherent world. Belief in one coherent world and one grand unified theory is the modern secular version of the ancient monotheistic intuition of one universal God (in which saints tend to believe). Uncertainty thrives in the realm in which scientists and saints have not yet completed their tasks (perhaps because they are incompletable). For instance, we must entertain a wide range of conflicting conceptions when we do not yet know how (or whether) quantum mechanics can be reconciled with general relativity, or Pippi's strength reconciled with the limitations of physiology. As Henry Adams wrote:

"Images are not arguments, rarely even lead to proof, but the mind craves them, and, of late more than ever, the keenest experimenters find twenty images better than one, especially if contradictory; since the human mind has already learned to deal in contradictions."

The very idea of a rigorously logical theory of uncertainty is startling and implausible because the realm of the uncertain is inherently incoherent and contradictory. Indeed, the first uncertainty theory - probability - emerged many centuries after the invention of the axiomatic method in mathematics. Today we have many theories of uncertainty: probability, imprecise probability, information theory, generalized information theory, fuzzy logic, Dempster-Shafer theory, info-gap theory, and more (the list is a bit uncertain). Why such a long and diverse list? It seems that in constructing a logically consistent theory of the logically inconsistent domain of uncertainty, one cannot capture the whole beast all at once (though I'm uncertain about this).

A theory, in order to be scientific, must exclude something. A scientific theory makes statements such as "This happens; that doesn't happen." Karl Popper explained that a scientific theory must contain statements that are at risk of being wrong, statements that could be falsified. Deborah Mayo demonstrated how science grows by discovering and recovering from error.

The realm of uncertainty contains contradictions (ostensible or real) such as the pair of statements: "Nine-year-old girls can lift horses" and "Muscle fiber generates tension through the action of actin and myosin cross-bridge cycling". A logically consistent theory of uncertainty can handle improbabilities, as can scientific theories like quantum mechanics. But a logical theory cannot encompass outright contradictions. Science investigates a domain: the natural and physical worlds. Those worlds, by virtue of their existence, are perhaps coherent in a way that can be reflected in a unified logical theory. Theories of uncertainty are directed at a larger domain: the natural and physical worlds and all imaginable (and unimaginable) other worlds. That larger domain is definitely not coherent, and a unified logical theory would seem to be unattainable. Hence many theories of uncertainty are needed.

Scientific theories are good to have, and we do well to encourage the scientists. But it is a mistake to think that the scientific paradigm is suitable to all domains, in particular, to the study of uncertainty. Logic is a powerful tool and the axiomatic method assures the logical consistency of a theory. For instance, Leonard Savage argued that personal probability is a "code of consistency" for choosing one's behavior. Jim March compares the rigorous logic of mathematical theories of decision to strict religious morality. Consistency between values and actions is commendable says March, but he notes that one sometimes needs to deviate from perfect morality. While "[s]tandard notions of intelligent choice are theories of strict morality ... saints are a luxury to be encouraged only in small numbers." Logical consistency is a merit of any single theory, including a theory of uncertainty. However, insisting that the same logical consistency apply over the entire domain of uncertainty is like asking reality and saintliness to make peace.





The Age of Imagination


This is not only the Age of Information, this is also the Age of Imagination. Information, at any point in time, is bounded, while imagination is always unbounded. We are overwhelmed more by the potential for new ideas than by the admittedly vast existing knowledge. We are drunk with the excitement of the unknown. Drunks are sometimes not a pretty sight; Isaiah (28:8) is very graphic.

It is true that topical specialization occurs, in part, due to what we proudly call the explosion of knowledge. There is so much to know that one must ignore huge tracts of knowledge. But that is only half the story. The other half is that we have begun to discover the unknown, and its lure is irresistible. Like the scientific and global explorers of the early modern period - The Discoverers as Boorstin calls them - we are intoxicated by the potential "out there", beyond the horizon, beyond the known. That intoxication can distort our vision and judgment.

Consider Reuven's comment, from long experience, that "Engineers use formulas and various equations without being aware of the theories behind them." A pithier version was said to me by an acquisitions editor at Oxford University Press: "Engineers don't read books." She should know.

Engineers are imaginative and curious. They are seekers, and they find wonderful things. But they are too engrossed in inventing and building The New to be much engaged with The Old. "Scholarship", wrote Thorstein Veblen, is "an intimate and systematic familiarity with past cultural achievements." Engineers - even research engineers and professors of engineering - spend very little time with past masters. How many computer scientists scour the works of Charles Babbage? How often do thermal engineers study the writings of Lord Kelvin? A distinguished professor of engineering, himself a member of the US National Academy of Engineering, once told me that there is little use for journal articles more than a few years old.

Fragmentation of knowledge results from the endless potential for new knowledge. Seekers - engineers and the scientists of nature, society and humanity - move inexorably apart from one another. But nonetheless it's all connected; consilient. Technology alters how we live. Science alters what we think. How can we keep track of it all? How can we have some at least vague and preliminary sense of where we are heading and whether we value the prospect?

The first prescription is to be aware of the problem, and I greatly fear that many movers and shakers of the modern age are unaware. The second prescription is to identify who should take the lead in nurturing this awareness. That's easy: teachers, scholars, novelists, intellectuals of all sorts.

Isaiah struggled with this long ago. "Priest and prophet erred with liquor, were swallowed by wine."(Isaiah, 28:7) We are drunk with the excitement of the unknown. Who can show the way?





We're Just Getting Started: A Glimpse at the History of Uncertainty


We've had our cerebral cortex for several tens of thousands of years. We've lived in more or less sedentary settlements and produced excess food for 7 or 8 thousand years. We've written down our thoughts for roughly 5 thousand years. And science? The ancient Greeks had some, but science and its systematic application are overwhelmingly a European invention of the past 500 years. We can be proud of our accomplishments (quantum theory, polio vaccine, powered machines), and we should worry about our destructive capabilities (atomic, biological and chemical weapons). But it is quite plausible, as Koestler suggests, that we've only just begun to discover our cerebral capabilities. It is more than just plausible that the mysteries of the universe are still largely hidden from us. As evidence, consider the fact that the main theories of physics - general relativity, quantum mechanics, statistical mechanics, thermodynamics - are still not unified. And it goes without saying that the consilient unity of science is still far from us.

What holds for science in general, holds also for the study of uncertainty. The ancient Greeks invented the axiomatic method and used it in the study of mathematics. Some medieval thinkers explored the mathematics of uncertainty, but it wasn't until around 1600 that serious thought was directed to the systematic study of uncertainty, and statistics as a separate and mature discipline emerged only in the 19th century. The 20th century saw a florescence of uncertainty models. Łukasiewicz discovered 3-valued logic in 1917, and in 1965 Zadeh introduced his work on fuzzy logic. In between, Wald formulated a modern version of min-max in 1945. A plethora of other theories, including P-boxes, lower previsions, Dempster-Shafer theory, generalized information theory and info-gap theory, all suggest that the study of uncertainty will continue to grow and diversify.
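As one small taste of these formalisms, Łukasiewicz's three-valued logic is commonly presented by encoding true, unknown, and false as the numbers 1, 1/2, and 0. The sketch below follows that textbook encoding (negation as 1 - x, conjunction as min, implication as min(1, 1 - x + y)); it is an illustration, not tied to any particular library:

```python
from fractions import Fraction

# Truth values: 1 = true, 1/2 = unknown, 0 = false.
T, U, F = Fraction(1), Fraction(1, 2), Fraction(0)

def neg(x):        # negation: NOT x = 1 - x
    return 1 - x

def conj(x, y):    # conjunction: x AND y = min(x, y)
    return min(x, y)

def disj(x, y):    # disjunction: x OR y = max(x, y)
    return max(x, y)

def implies(x, y):  # Lukasiewicz implication: min(1, 1 - x + y)
    return min(Fraction(1), 1 - x + y)

# Classical tautologies can take the middle value here: the law of
# the excluded middle, x OR NOT x, is merely "unknown" when x is unknown.
print(disj(U, neg(U)))  # 1/2, not 1
```

That a classical law of thought quietly loses its force once a third truth value is admitted gives some feeling for why no single formalism has captured all of uncertainty.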

In short, we have learned many facts and begun to understand our world and its uncertainties, but the disputes and open questions are still rampant and the yet-unformulated questions are endless. This means that innovations, discoveries, inventions, surprises, errors, and misunderstandings are to be expected in the study or management of uncertainty. We are just getting started. 





Habit: A Response to the Unknown


David Hume explained that we believe by habit that logs will burn, stones will fall, and endless other past patterns will recur. No experiment can prove the future recurrence of past events. An experiment belongs to the future only until it is implemented; once completed, it becomes part of the past. In order for past experiments to prove something about the future, we must assume that the past will recur in the future. That's as circular as it gets.

But without the habit of believing that past patterns will recur, we would be incapacitated and ineffectual (and probably reduced to moping and sobbing). Who would dare climb stairs or fly planes or eat bread and drink wine, without the belief that, like in the past, the stairs will bear our weight, the wings will carry us aloft, and the bread and wine will nourish our body and soul. Without such habits we would become a jittering jelly of indecision in the face of the unknown.

But you can't just pull a habit out of a hat. We spend great effort instilling good habits in our children: to brush their teeth, tell the truth, and not pick on their little sister even if she deserves it.

As we get older, and I mean really older, we begin to worry that our habits become frozen, stodgy, closed-minded and constraining. Younger folks smile at our rigid ways, and try to loosen us up to the new wonders of the world: technological, culinary or musical. Changing your habits, or staying young when you aren't, isn't always easy. Without habits we're lost in an unknowable world.

And yet, openness to new ideas, tastes, sounds and other experiences of many sorts can itself be a habit, and perhaps a good one. It is the habit of testing the unknown, of acknowledging the great gap between what we do know and what we can know. That gap is an invitation to growth and awe, as well as to fear and danger.

The habit of openness to change is not a contradiction. It is simply a recognition that habits are a response to the unknown. Not everything changes all the time (or so we're in the habit of thinking), and some things are new under the sun (as newspapers and Nobel prize committees periodically remind us).

Habits, including the habit of open-mindedness, are a good thing precisely because we can never know for sure how good or bad they really are.





MOOCs and the Unknown


MOOCs - Massive Open Online Courses - have fed hundreds of thousands of knowledge-hungry people around the globe. Stanford University's MOOC program has taught open online courses to tens of thousands of students per course, and has 2.5 million enrollees from nearly every country in the world. The students hear a lecturer, and also interact with each other in digital social networks that facilitate their mastery of the material and their integration into global communities of the knowledgeable. The internet, and its MOOC realizations, extend the democratization of knowledge to a scale unimagined by early pioneers of workers' study groups or public universities. MOOCs open the market of ideas and knowledge to everyone, from the preacher of esoteric spirituality to the teacher of esoteric computer languages. It's all there; all you need is a browser.

The internet is a facilitating technology, like the invention of writing or the printing press, and its impacts may be as revolutionary. MOOCs are here to stay, like the sun to govern by day and the moon by night, and we can see that this is good. But they also have limitations, and these we must begin to understand.

Education depends on the creation and transfer of knowledge. Insight, invention, and discovery underlie the creation of knowledge, and they must precede its transfer. MOOCs enable learners to sit at the feet of the world's greatest creators of knowledge.

But the distinction between creation and transfer of knowledge is necessarily blurred in the process of education itself. Deep and meaningful education is the creation of knowledge in the mind of the learner. Education is not the transfer of digital bits between electronic storage devices. Education is the creation or discovery by the learner of thoughts that previously did not exist in his mind. One can transfer facts per se, but if this is done without creative insight by the learner it is no more than Huck Finn's learning "the multiplication table up to six times seven is thirty-five".

Invention, discovery and creation occur in the realm of the unknown; we cannot know what will be created until it appears. Two central unknowns dominate the process of education, one in the teacher's mind and one in the student's.

The teacher cannot know what questions the student will ask. Past experience is a guide, but the universe of possible questions is unbounded, and the better the student, the more unpredictable the questions. The teacher should respond to these questions because they are the fruitful meristem of the student's growing understanding. The student's questions are the teacher's guide into the student's mind. Without them the teacher can only guess how to reach the learner. The most effective teacher will personalize his interaction with the learner by responding to the student's questions.

The student cannot know the substance of what the teacher will teach; that's precisely why the student has come to the teacher. In extreme cases - of really deep and mind-altering learning - the student will not even understand the teacher's words until they are repeated again and again in new and different ways. The meanings of words come from context. A word means one thing and not another because we use that word in this way and not that. The student gropes to find out how the teacher uses words, concepts and tools of thought. The most effective learning occurs when the student can connect the new meanings to his existing mental contexts. The student cannot always know what contexts will be evoked by his learning.

As an interim summary, learning can take place only if there is a gap of knowledge between teacher and student. This knowledge gap induces uncertainties on both sides. Effective teaching and learning occur by personalized interaction to dispel these uncertainties, to fill the gap, and to complete the transfer of knowledge.

We can now appreciate the most serious pedagogic limitation of MOOCs as a tool for education. Mass education is democratic, and MOOCs are far more democratic than any previous mode. This democracy creates a basic tension. The more democratic a mode of communication, the less personalized it is because of its massiveness. The less personalized a communication, the less effective it is pedagogically. The gap of the unknown that separates teacher and learner is greatest in massively democratic education.

Socrates inveighed against the writing of books. They are too impersonal and immutable. They offer too little room for Socratic midwifery of wisdom, in which knowledge comes from dialog. Socrates wanted to touch his students' souls, and because each soul is unique, no book can bridge the gap. Books can at best jog the memory of learners who have already been enlightened. Socrates would probably not have liked MOOCs either, and for similar reasons.

Nonetheless, Socrates might have preferred MOOCs over books because the mode of communication is different. Books approach the learner through writing, and induce him to write in response. In contrast, MOOCs approach the learner through speech, and induce him to speak in response. Speech, for Socrates, is personal and interactive; speech is the road to the soul. Spoken bilateral interaction cannot occur between a teacher and 20,000 online learners spread over time and space. That format is the ultimate insult to Socratic learning. On the other hand, the networking that can accompany a MOOC may facilitate the internalization of the teacher's message even more effectively than a one-on-one tutorial. Fast, multi-personal online chats and other networking can help the learners rapidly find their own mental contexts for assimilating and modifying the teacher's message.

Many people have complained that the internet undermines the permanence of the written word. No document is final if it's on the web. Socrates might have approved, and this might be the greatest strength of the MOOC: no course ever ends and no lecture is really final. If MOOCs really are democratic then they cannot be controlled. The discovery of knowledge, like the stars in their orbits, is forever on-going, with occasional supernovas that brighten the heavens. The creation of knowledge will never end because the unknown is limitless. If MOOCs facilitate this creation, then they are good. 





Mathematical Metaphors


Theories in all areas of science tell us something about the world. They are images, or models, or representations of reality. Theories tell stories about the world and are often associated with stories about their discovery: the story (probably apocryphal) that Newton conceived the theory of gravity after an apple fell on his head, or the story (probably true) that Kekulé discovered the cyclical structure of benzene after daydreaming of a snake seizing its tail. Theories are metaphors that explain reality.

A theory is scientific if it is precise, quantitative, and amenable to being tested. A scientific theory is mathematical. Scientific theories are mathematical metaphors.

A metaphor uses a word or phrase to define or extend or focus the meaning of another word or phrase. For example, "The river of time" is a metaphor. We all know that rivers flow inevitably from high to low ground. The metaphor focuses the concept of time on its inevitable uni-directionality. Metaphors make sense because we understand what they mean. We all know that rivers are wet, but we understand that the metaphor does not mean to imply that time drips, because we understand the words and their context. But on the other hand, a metaphor - in the hands of a creative and imaginative person - might mean something unexpected, and we need to think carefully about what the metaphor does, or might, mean. Mathematical metaphors - scientific models - also focus attention in one direction rather than another, which gives them explanatory and predictive power. Mathematical metaphors can also be interpreted in different and surprising ways.

Some mathematical models are very accurate metaphors. For instance, when Galileo dropped a heavy object from the leaning tower of Pisa, the distance it fell increased in proportion to the square of the elapsed time. Mathematical equations sometimes represent reality quite accurately, but we understand the representation only when the meanings of the mathematical terms are given in words. The meaning of the equation tells us what aspect of reality the model focuses on. Many things happened when Galileo released the object - it rotated, air swirled, friction developed - while the equation focuses on one particular aspect: distance versus time. Likewise, the quadratic equation that relates distance to time can also be used to relate energy to the speed of light, or to relate population growth rate to population size. In Galileo's case the metaphor relates to freely falling objects.
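Galileo's relation can be sketched in a few lines of code. The constant and the frictionless setting are assumptions of the sketch, chosen to show exactly what the metaphor keeps in focus (distance versus time) and what it leaves out (rotation, air, friction):

```python
# Free fall from rest: distance grows with the square of elapsed time.
# Air resistance and rotation are ignored - precisely the aspects the
# metaphor chooses to leave out of focus.
G = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def fall_distance(t_seconds: float) -> float:
    """Metres fallen after t_seconds, starting from rest."""
    return 0.5 * G * t_seconds ** 2

# Doubling the elapsed time quadruples the distance:
print(fall_distance(1.0))  # 4.905
print(fall_distance(2.0))  # 19.62
```

The quadrupling under doubled time is the signature of the quadratic form, whatever the symbols happen to denote.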

Other models are only approximations. For example, a particular theory describes the buildup of mechanical stress around a crack, which causes damage in the material. While cracks often have rough or ragged shapes, this important and useful theory assumes the crack is smooth and elliptical. This mathematical metaphor is useful because it focuses the analysis on the radius of curvature at the crack tip, which is critical in determining the concentration of stress.
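A classical instance of this elliptical-crack metaphor is the standard textbook result for an elliptical hole in a plate under remote tension; the formula below is that well-known result, stated here as an illustration rather than a derivation:

```latex
% Stress at the tip of an elliptical hole with semi-axes a (long) and b (short),
% in a plate under remote tension \sigma_\infty applied normal to the long axis:
\sigma_{\mathrm{max}} = \sigma_\infty \left( 1 + \frac{2a}{b} \right)
                      = \sigma_\infty \left( 1 + 2\sqrt{\frac{a}{\rho}} \right)
% where \rho = b^2 / a is the radius of curvature at the tip. As the tip
% sharpens (\rho \to 0), the stress concentration grows without bound.
```

The appearance of the tip radius ρ in the formula is exactly the focusing power of the metaphor: of everything happening around a ragged real crack, the model singles out the curvature at the tip.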

Not all scientific models are approximations. Some models measure something. For example, in statistical mechanics the temperature of a material is proportional to the average kinetic energy of its molecules. The temperature, on an absolute scale, is a global measure of random molecular motion. In economics, the gross domestic product is a measure of the degree of economic activity in the country.
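The kinetic-theory reading of temperature can be stated precisely; the relation below is the standard result for a monatomic ideal gas, given here as an illustration of a model that measures:

```latex
% Temperature as a measure of random molecular motion (monatomic ideal gas):
% the mean kinetic energy per molecule is proportional to the absolute
% temperature T, with Boltzmann's constant k_B as the conversion factor.
\left\langle \tfrac{1}{2} m v^2 \right\rangle = \tfrac{3}{2} \, k_B \, T
```

The equation does not approximate the motion of any one molecule; it defines a single number that summarizes the motion of all of them.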

Other models are not approximations or measures of anything, but rather graphical portrayals of a relationship. Consider, for example, the competition among three restaurants: Joe's Easy Diner, McDonald's, and Maxim's de Paris. All three restaurants compete with each other: if you're hungry, you've got to choose. Joe's and McDonald's are close competitors because they both specialize in hamburgers but also have other dishes. They both compete with Maxim's, a really swank and expensive boutique restaurant, but the competition is more remote. To model the competition we might draw a line representing "competition", with each restaurant as a dot on the line. Joe's and McDonald's are close together and far from Maxim's. This line is a mathematical metaphor, representing the proximity (and hence strength) of competition between the three restaurants. The distances between the dots are precise, but what the metaphor means, in terms of the real-world competition between Joe, McDonald, and Maxim, is not so clear. Why a line rather than a plane, whose two axes might refine the dimensions of competition (price and location, for instance)? Or maybe a hill, to reflect difficulty of access (Joe's is at one location in South Africa, Maxim's has restaurants in Paris, Peking, Tokyo and Shanghai, and McDonald's is just about everywhere)? A metaphor emphasizes some aspects while ignoring others. Different mathematical metaphors of the same phenomenon can support very different interpretations or insights.
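The competition line can be sketched directly. The coordinates below are invented for illustration, not measured data; the only content of the metaphor is that smaller distance stands for closer competition:

```python
# The "competition line" metaphor: each restaurant is a point on a line,
# and the distance between two points portrays how remotely they compete.
# The coordinates are illustrative assumptions.
positions = {
    "Joe's Easy Diner": 0.0,
    "McDonald's": 1.0,
    "Maxim's de Paris": 9.0,
}

def competition_distance(a: str, b: str) -> float:
    """Smaller distance = closer (stronger) competition."""
    return abs(positions[a] - positions[b])

# Joe's and McDonald's compete closely; both compete remotely with Maxim's.
print(competition_distance("Joe's Easy Diner", "McDonald's"))   # 1.0
print(competition_distance("McDonald's", "Maxim's de Paris"))   # 8.0
```

A planar model would replace each scalar with a coordinate pair, changing which comparisons the metaphor can express; that choice of representation, not the arithmetic, is where the modeling happens.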

The scientist who constructs a mathematical metaphor - a model or theory - chooses to focus on some aspects of the phenomenon rather than others, and chooses to represent those aspects with one image rather than another. Scientific theories are fascinating and extraordinarily useful, but they are, after all, only metaphors.








New History of Psychiatry: Melancholy, Madness, Chinese Psychiatry, Psychedelic Therapy, and More

The June 2020 issue of History of Psychiatry is now online. Full details follow below: “Wild melancholy. On the historical plausibility of a black bile theory of blood madness, or hæmatomania,” Jan Verplaetse. Abstract: Nineteenth-century art historian John Addington Symonds coined the term hæmatomania (blood madness) for the extremely bloodthirsty behaviour of a number of … Continue reading New History of Psychiatry: Melancholy, Madness, Chinese Psychiatry, Psychedelic Therapy, and More





New Theory & Psychology: Early Critical Theory and Beck’s Cognitive Theory

Two articles in the most recent issue of Theory & Psychology may interest AHP readers. Full details below. “How lost and accomplished revolutions shaped psychology: Early Critical Theory (Frankfurt School), Wilhelm Reich, and Vygotsky,” by Gordana Jovanović. Abstract: On the occasion of recent centenaries of revolutions in Europe (1917, 1918–19), this article examines, within a … Continue reading New Theory & Psychology: Early Critical Theory and Beck’s Cognitive Theory





CfP: Shaping the ‘Socialist Self’? The Role of Psy-Sciences in Communist States of the Eastern Bloc (1948–1989)

CALL FOR PAPERS: INTERNATIONAL WORKSHOP. Shaping the ‘Socialist Self’? The Role of Psy-Sciences in Communist States of the Eastern Bloc (1948–1989) Date: 6 November 2020 Venue: Prague, Czech Republic Deadline for applications: 30 June 2020 Organizing institutions: CEFRES (French Research Center in Humanities and Social Sciences in Prague) Institute of Contemporary History of the Czech Academy of Sciences Collegium Carolinum … Continue reading CfP: Shaping the ‘Socialist Self’? The Role of Psy-Sciences in Communist States of the Eastern Bloc (1948–1989)





May HoP, including a Special Section: Who Was Little Albert? The Historical Controversy

Photographs of John Watson (left) and Rosalie Rayner (right) via Ben Harris. The May 2020 issue of History of Psychology is now online. The issue includes a special section on “Who Was Little Albert? The Historical Controversy.” Full details follow below. Special Section: Who Was Little Albert? The Historical Controversy“Journals, referees, and gatekeepers in the … Continue reading May HoP, including a Special Section: Who Was Little Albert? The Historical Controversy





Forthcoming in HHS: Homosexual Aversion Therapy, Comte on Organism-Environment Relationships

Two forthcoming pieces in History of the Human Sciences may be of interest to AHP readers. Full details below. “Cold War Pavlov: Homosexual aversion therapy in the 1960s,” by Kate Davison. Abstract: Homosexual aversion therapy enjoyed two brief but intense periods of clinical experimentation: between 1950 and 1962 in Czechoslovakia, and between 1962 and 1975 … Continue reading Forthcoming in HHS: Homosexual Aversion Therapy, Comte on Organism-Environment Relationships





The Breakfast That Boosts Weight Loss By 65%

The food lowers cravings for high-sugar and high-fat foods and suppresses appetite during the day.

Support PsyBlog for just $5 per month. Enables access to articles marked (M) and removes ads.

→ Explore PsyBlog's ebooks, all written by Dr Jeremy Dean:





The Music That Boosts Learning By 18% (M)

Three classical pieces that boost memory retention.






The Popular Foods That Lower Your IQ

Two-thirds of children report eating this food weekly.






Stress Has Risen In This Age Group More Than Any Other (M)

Even before the pandemic, this age group was reporting record levels of stress.






The Best Material For A Homemade COVID-19 Mask

The best type of fabric for a breathable but effective COVID-19 mask.






Cuddling: The Amazing Effect On Your Brain

For the study, 10 couples spent 45 minutes inside a brain scanner together in close physical contact.






How Technology Is Improving Safety On the Roads and Reducing Driving Anxiety

Technology has changed a number of aspects of our everyday lives and has led to increased efficiency. But when it comes to driving, has it helped or hindered the process? In this article, we will be looking into some of the ways that technology has improved safety on our roads in the last 10 years. […]





4 Ways Therapists Assist People with Mental Health Issues

One of the primary reasons people seek therapy is to get help with mental health issues. Some of the more common mental disorders affecting individuals today include depression, anxiety, post-traumatic stress disorder (PTSD), phobias, addiction, and attention-deficit hyperactivity disorder (ADHD). Depending on the type and intensity of your issue, your therapist may adjust his treatment […]





Online Therapy: A Powerful Tool in the Fight Against Covid 19

The coronavirus pandemic is affecting billions of people around the world today. Coronavirus, now called covid 19, is a type of virus that is usually found in animals and is rarely transmitted to humans. According to reports from the World Health Organization, covid 19 likely originated from a seafood and meat market in Wuhan, China, […]





Examining the Pros and Cons of Phone Therapy

Telephone therapy has taken on greater significance in the mental health industry in wake of the covid-19 pandemic. While some individuals may have avoided telephone therapy in the past, the temporary closure of mental health offices and the necessity of social distancing have resulted in an increasing number of people asking for more information on […]





Does Insurance Cover Therapy Costs in the United States?

Although mental health is just as important as physical health in promoting overall well-being, many insurance companies in the past did not agree with that viewpoint. This is shown by the fact that, for many years, a large percentage of insurers provided better insurance coverage for physical issues than mental health issues. However, in 2008, […]





How Phone Counseling May Help Save Lives During the Covid-19 Lockdown

With the covid-19 pandemic now affecting virtually every country on earth, it is understandable that much of the world’s focus has been on protecting people’s physical health. Hand washing and social distancing are important in the fight against the coronavirus. However, it is important to remember that mental health issues may lead to loss of […]





Charles Barkley believes in the hot hand fallacy – when it comes to poker, anyway

NBA legend and recreational gambler Charles Barkley is presented with the following hypothetical on ESPN radio: You are winning big at the poker table when a beautiful woman sits down next to you. “Do you stay with the hands or do you leave?” Barkley: “Bro, gambling is so fickle, I love to gamble, when you [...]





The landfill nudge shows up at a Whole Foods in Lake Forest, Illinois

Hat tip: Brad Bennett