
Mitigating the Effects of Trauma among Young Children of Immigrants and Refugees: The Role of Early Childhood Programs

The first years of a child’s life are a time of immense growth, and exposure to trauma—if left unaddressed—can have significant, lifelong effects. This issue brief examines how young children of refugees and other immigrants may be affected by trauma, and what early childhood education and care programs, health-care providers, and others can do to mitigate its adverse effects.





Addressing Trauma in Young Children in Immigrant and Refugee Families through Early Childhood Programs

During this webinar, speakers provide an overview of an MPI policy brief that seeks to raise awareness of the intersection of trauma and early childhood development, and how U.S. early childhood programs could more effectively address this trauma in young children in refugee and immigrant households. The participants discuss efforts to integrate trauma-informed approaches into early childhood systems and how home visiting services can effectively address trauma and mental health through a two-generation approach.





Immigrant Families and Child Welfare Systems: Emerging Needs and Promising Policies

With the children of immigrants a growing share of all U.S. children, and federal immigration enforcement and other policies undergoing significant change, some state and local child welfare agencies are developing new ways to improve how they work with immigrant families. This report examines key cultural, linguistic, and legal challenges, and how agencies are adjusting staffing, training, placement, and other policies to tackle them.





Millions Will Feel Chilling Effects of U.S. Public-Charge Rule That Is Also Likely to Reshape Legal Immigration

The public-charge rule issued by the Trump administration in August 2019 will have profound effects on future immigration and on use of public benefits by millions of legal noncitizens and their U.S.-citizen family members. Complex standards for determining when an immigrant is likely to become a public charge could cause a significant share of the nearly 23 million noncitizens and U.S. citizens in benefits-using immigrant families to disenroll, as this commentary explains.





Health Insurance Test for Green-Card Applicants Could Sharply Cut Future U.S. Legal Immigration

A new Trump administration action requiring intending immigrants to prove they can purchase eligible health insurance within 30 days of arrival has the potential to block fully 65 percent of those who apply for a green card from abroad, MPI estimates.





Health Insurance Coverage of Immigrants and Latinos in the Kansas City Metro Area

Latinos and immigrants are at least twice as likely to lack health insurance coverage as the overall population in the Kansas City metropolitan area. This gap has significant implications for the region, as Latinos and immigrants will form an ever-growing share of the area’s labor force and tax base amid anticipated declines in the native-born, non-Latino population.





As U.S. Health-Care System Buckles under Pandemic, Immigrant & Refugee Professionals Could Represent a Critical Resource

In a time of critical shortages of U.S. health-care workers during the COVID-19 pandemic, retired doctors are being called back to work and medical students are graduating on a fast track. There is another important pool that could be tapped: Immigrants and refugees who have college degrees in health fields but are working in low-skilled jobs or out of work. MPI estimates 263,000 immigrants are experiencing skill underutilization and could be a valuable resource.





The Digital Transformation Playbook: Rethink Your Business for the Digital Age

Every business begun before the Internet now faces the same challenge: How to transform to compete in a digital economy? This is the leadership challenge examined by BRITE founder and Columbia Business School faculty member David Rogers in his newest book, The Digital Transformation Playbook (April 5, 2016; Columbia Business School Publishing). In the book, […]





Can a Company be pro-regulation and pro-commerce? Gregg Renfrew from Beautycounter thinks so

It’s the middle of an election year and, according to the Pew Research Center, the country hasn’t been this polarized since the Civil War. In such a climate, it would seem to be an oxymoron for a company to push for both financial growth and tighter regulations. Gregg Renfrew, CEO & Founder of Beautycounter, wouldn’t […]





Reflections on Business, Leadership, and Branding: Shelly Lazarus ’70

Much has changed in the world of advertising from the picture painted by Mad Men. Shelly Lazarus ’70, Chairman Emeritus, Ogilvy & Mather, was one of the women helping pioneer these changes. Making the journey from ‘the only woman in the room’ to CEO and Chairman of Ogilvy gives Lazarus a lot to reflect on […]





Thinking with AND: Insights from KIND’s story

“I’m a confused Mexican Jew.” So says Daniel Lubetzky, Founder and CEO of KIND Snacks, in his very personal interview with Columbia faculty member David Rogers at BRITE ’16. Their discussion touched on the many ideas behind KIND Snacks, from the beginnings of the company, to the strategic thinking that forces Lubetzky to stay away […]





Algorhythms for Marketing Transformation

We all understand that digital media, data, and analytics are driving transformations in society and business. Most marketers are now armed with case studies of what can be done differently, but many are still challenged with how to truly develop new ideas and execute new strategies to grow their business. Mitch Joel, President of Mirum […]





Evaluating the Effect of U-500 Insulin Therapy on Glycemic Control in Veterans With Type 2 Diabetes

Joseph A. Granata
Jan 1, 2015; 33:14-19
Feature Articles





The Diabetes Attitudes, Wishes and Needs Second Study

Martha M. Funnell
Jan 1, 2015; 33:32-36
Translating Research to Practice





Case Study: Potential Pitfalls of Using Hemoglobin A1c as the Sole Measure of Glycemic Control

Huy A. Tran
Jul 1, 2004; 22:141-143
Case Studies





Effects of Glycemic Control on Diabetes Complications and on the Prevention of Diabetes

Jay S. Skyler
Oct 1, 2004; 22:162-166
Feature Articles





Medical Nutrition Therapy: A Key to Diabetes Management and Prevention

Sara F. Morris
Dec 1, 2010; 28:12-18
Feature Articles





Diabetes Self-management Education and Support in Type 2 Diabetes: A Joint Position Statement of the American Diabetes Association, the American Association of Diabetes Educators, and the Academy of Nutrition and Dietetics

Margaret A. Powers
Apr 1, 2016; 34:70-80
Position Statements





Integration of Clinical Psychology in the Comprehensive Diabetes Care Team

Steven B. Leichter
Jul 1, 2004; 22:129-131
The Business of Diabetes





Clarifying the Role of Insulin in Type 2 Diabetes Management

John R. White
Jan 1, 2003; 21:
Feature Articles





SGLT-2 Inhibitors: A New Mechanism for Glycemic Control

Edward C. Chao
Jan 1, 2014; 32:4-11
Feature Articles





PROactive: A Sad Tale of Inappropriate Analysis and Unjustified Interpretation

Jay S. Skyler
Apr 1, 2006; 24:63-65
Commentary





Persistence of Continuous Glucose Monitoring Use in a Community Setting 1 Year After Purchase

James Chamberlain
Jul 1, 2013; 31:106-109
Feature Articles





Interdisciplinary Team Care for Diabetic Patients by Primary Care Physicians, Advanced Practice Nurses, and Clinical Pharmacists

David Willens
Apr 1, 2011; 29:60-68
Feature Articles





Insulin Strategies for Primary Care Providers

Karen L. Herbst
Jan 1, 2002; 20:
Feature Articles





Diabetes Management Issues for Patients With Chronic Kidney Disease

Kerri L. Cavanaugh
Jul 1, 2007; 25:90-97
Feature Articles





Health Care Transition in Adolescents and Young Adults With Diabetes

Michael E. Bowen
Jun 1, 2010; 28:99-106
Feature Articles





Stigma in People With Type 1 or Type 2 Diabetes

Nancy F. Liu
Jan 1, 2017; 35:27-34
Feature Articles





Management of Diabetic Peripheral Neuropathy

Andrew J.M. Boulton
Jan 1, 2005; 23:9-15
Feature Articles





Application of Adult-Learning Principles to Patient Instructions: A Usability Study for an Exenatide Once-Weekly Injection Device

Gayle Lorenzi
Sep 1, 2010; 28:157-162
Bridges to Excellence





Helping Patients Make and Sustain Healthy Changes: A Brief Introduction to Motivational Interviewing in Clinical Diabetes Care

Michele Heisler
Oct 1, 2008; 26:161-165
Practical Pointers





Hospital Management of Hyperglycemia

Kristen B. Campbell
Apr 1, 2004; 22:81-88
Practical Pointers





Diabetes Self-Management in a Community Health Center: Improving Health Behaviors and Clinical Outcomes for Underserved Patients

Daren Anderson
Jan 1, 2008; 26:22-27
Bridges to Excellence





Cardiac Manifestations of Congenital Generalized Lipodystrophy

Vani P. Sanon
Oct 1, 2016; 34:181-186
Feature Articles





Hypoglycemia in Type 1 and Type 2 Diabetes: Physiology, Pathophysiology, and Management

Vanessa J. Briscoe
Jul 1, 2006; 24:115-121
Feature Articles





Standards of Medical Care in Diabetes--2019 Abridged for Primary Care Providers

American Diabetes Association
Jan 1, 2019; 37:11-34
Position Statements





Amylin Replacement With Pramlintide in Type 1 and Type 2 Diabetes: A Physiological Approach to Overcome Barriers With Insulin Therapy

John B. Buse
Jul 1, 2002; 20:
Feature Articles





The Disparate Impact of Diabetes on Racial/Ethnic Minority Populations

Edward A. Chow
Jul 1, 2012; 30:130-133
Diabetes Advocacy





Standards of Medical Care in Diabetes--2016 Abridged for Primary Care Providers

American Diabetes Association
Jan 1, 2016; 34:3-21
Position Statements





What's So Tough About Taking Insulin? Addressing the Problem of Psychological Insulin Resistance in Type 2 Diabetes

William H. Polonsky
Jul 1, 2004; 22:147-150
Practical Pointers





A Real-World Approach to Insulin Therapy in Primary Care Practice

Irl B. Hirsch
Apr 1, 2005; 23:78-86
Practical Pointers





Standards of Medical Care in Diabetes--2018 Abridged for Primary Care Providers

American Diabetes Association
Jan 1, 2018; 36:14-37
Position Statements





Standards of Medical Care in Diabetes--2017 Abridged for Primary Care Providers

American Diabetes Association
Jan 1, 2017; 35:5-26
Position Statements





Standards of Medical Care in Diabetes--2015 Abridged for Primary Care Providers

American Diabetes Association
Apr 1, 2015; 33:97-111
Position Statements





The Miniseries ‘Devs’ Delivers a Delicious Dose of Heroism and Villainy

By Scott T. Allison. Devs is the ideal TV mini-series for people to sink their teeth into, for many reasons: (1) It’s both science and science-fiction; (2) it’s a brilliant mix of psychology, philosophy, religion, and technology; (3) it tantalizes us with the mysteries of love, life, death, time, and space; and (4) it features a …



Commentary and Analysis


No-Failure Design and Disaster Recovery: Lessons from Fukushima

One of the striking aspects of the early stages of the nuclear accident at Fukushima-Daiichi in March 2011 was the nearly total absence of disaster recovery capability. For instance, while Japan is a super-power of robotic technology, the nuclear authorities had to import robots from France to probe the damaged nuclear plants. Fukushima can teach us an important lesson about technology.

The failure of critical technologies can be disastrous. The crash of a civilian airliner can cause hundreds of deaths. The meltdown of a nuclear reactor can release highly toxic isotopes. Failure of flood protection systems can result in vast death and damage. Society therefore insists that critical technologies be designed, operated and maintained to extremely high levels of reliability. We benefit from technology, but we also insist that the designers and operators "do their best" to protect us from their dangers.

Industries and government agencies that provide critical technologies almost invariably act in good faith, for a range of reasons: morality dictates responsible behavior, liability legislation establishes sanctions for irresponsible behavior, and economic or political self-interest makes continuous safe operation desirable.

The language of performance optimization (not only doing our best, but also achieving the best) may tend to undermine the successful management of technological danger. A probability of severe failure of one in a million per device per year is exceedingly, and very reassuringly, small. When we honestly believe that we have designed and implemented a technology to have a vanishingly small probability of catastrophe, we can honestly ignore the need for disaster recovery.

Or can we?

Let's contrast this with an ethos that is consistent with a thorough awareness of the potential for adverse surprise. We now acknowledge that our predictions are uncertain, perhaps highly uncertain on some specific points. We attempt to achieve very demanding outcomes (for instance, vanishingly small probabilities of catastrophe), but we recognize that our ability to reliably calculate such small probabilities is compromised by the deficiency of our knowledge and understanding. We robustify ourselves against those deficiencies by choosing a design which would be acceptable over a wide range of deviations from our current best understanding. (This is called "robust-satisficing".) Not only does "vanishingly small probability of failure" still entail the possibility of failure, but our predictions of that probability may err.

Acknowledging the need for disaster recovery capability (DRC) is awkward and uncomfortable for designers and advocates of a technology. We would much rather believe that DRC is not needed, that we have in fact made catastrophe negligible. But let's not conflate good-faith attempts to deal with complex uncertainties, with guaranteed outcomes based on full knowledge. Our best models are in part wrong, so we robustify against the designer's bounded rationality. But robustness cannot guarantee success. The design and implementation of DRC is a necessary part of the design of any critical technology, and is consistent with the strategy of robust satisficing.

One final point: moral hazard and its dilemma. The design of any critical technology entails two distinct and essential elements: failure prevention and disaster recovery. What economists call a 'moral hazard' exists since the failure prevention team might rely on the disaster-recovery team, and vice versa. Each team might, at least implicitly, depend on the capabilities of the other team, and thereby relinquish some of its own responsibility. Institutional provisions are needed to manage this conflict.

The alleviation of this moral hazard entails a dilemma. Considerations of failure prevention and disaster recovery must be combined in the design process. The design teams must be aware of each other, and even collaborate, because a single coherent system must emerge. But we don't want either team to relinquish any responsibility. On the one hand we want the failure prevention team to work as though there is no disaster recovery, and the disaster recovery team should presume that failures will occur. On the other hand, we want these teams to collaborate on the design.

This moral hazard and its dilemma do not obviate the need for both elements of the design. Fukushima has taught us an important lesson by highlighting the special challenge of high-risk critical technologies: design so failure cannot occur, and prepare to respond to the unanticipated.





Baseball and Linguistic Uncertainty

In my youth I played an inordinate amount of baseball, collected baseball cards, and idolized baseball players. I've outgrown all that but when I'm in the States during baseball season I do enjoy watching a few innings on the TV.

So I was watching a baseball game recently and the commentator was talking about the art of pitching. Throwing a baseball, he said, is like shooting a shotgun. You get a spray. As a pitcher, you have to know your spray. You learn to control it, but you know that it is there. The ball won't always go where you want it. And furthermore, where you want the ball depends on the batter's style and strategy, which vary from pitch to pitch for every batter.

That's baseball talk, but it stuck in my mind. Baseball pitchers must manage uncertainty! And it is not enough to reduce it and hope for the best. Suppose you want to throw a strike. It's not a good strategy to aim directly at, say, the lower outside corner of the strike zone, because of the spray of the ball's path and because the batter's stance can shift. Especially if the spray is skewed down and out, you'll want to move up and in a bit.
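The pitcher's adjustment can be made concrete with a small simulation. This is an illustrative sketch in Python (the spray model and all the numbers are my assumptions, not the commentator's): the pitch scatters around the aim point with a skew down and out, so aiming slightly up and in from the low-outside corner lands more of the spray inside the zone.

```python
import random

rng = random.Random(42)

def strike_prob(aim_x, aim_y, trials=100_000):
    """Estimate the strike probability for a given aim point.

    The strike zone is the unit square: 0 <= x <= 1 (in-out),
    0 <= y <= 1 (low-high). The spray is Gaussian, with its mean
    shifted down and out (the skew the commentator described).
    """
    strikes = 0
    for _ in range(trials):
        x = aim_x + rng.gauss(-0.05, 0.15)  # skewed out
        y = aim_y + rng.gauss(-0.05, 0.15)  # skewed down
        strikes += (0 <= x <= 1) and (0 <= y <= 1)
    return strikes / trials

corner = strike_prob(0.0, 0.0)      # aim dead at the low-outside corner
adjusted = strike_prob(0.15, 0.15)  # aim a bit up and in
print(f"corner: {corner:.2f}, adjusted: {adjusted:.2f}")
```

Aiming at the corner wastes roughly half the spray on each axis outside the zone; moving the aim point up and in roughly quadruples the strike probability in this toy. The numbers are arbitrary, but the logic is the pitcher's: know your spray, then aim to manage it.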

This is all very similar to the ambiguity of human speech when we pitch words at each other. Words don't have precise meanings; meanings spread out like the pitcher's spray. If we want to communicate precisely we need to be aware of this uncertainty, and manage it, taking account of the listener's propensities.

Take the word "liberal" as it is used in political discussion.

For many decades, "liberals" have tended to support high taxes to provide generous welfare, public medical insurance, and low-cost housing. They advocate liberal (meaning magnanimous or abundant) government involvement for the citizens' benefit.

A "liberal" might also be someone who is open-minded and tolerant, who is not strict in applying rules to other people, or even to him or herself. Such a person might be called "liberal" (meaning advocating individual rights) for opposing extensive government involvement in private decisions. For instance, liberals (in this second sense) might oppose high taxes since they reduce individuals' ability to make independent choices. As another example, John Stuart Mill opposed laws which restricted the rights of women to work (at night, for instance), even though these laws were intended to promote the welfare of women. Women, insisted Mill, are intelligent adults and can judge for themselves what is good for them.

Returning to the first meaning of "liberal" mentioned above, people of that strain may support restrictions of trade to countries which ignore the health and safety of workers. The other type of "liberal" might tend to support unrestricted trade.

Sending out words and pitching baseballs are both like shooting a shotgun: meanings (and baseballs) spray out. You must know what meaning you wish to convey, and what other meanings the word can have. The choice of the word, and the crafting of its context, must manage the uncertainty of where the word will land in the listener's mind.


Let's go back to baseball again.

If there were no uncertainty in the pitcher's pitch and the batter's swing, then baseball would be a dreadfully boring game. If the batter knows exactly where and when the ball will arrive, and can completely control the bat, then every swing will be a homer. Or conversely, if the pitcher always knows exactly how the batter will swing, and if each throw is perfectly controlled, then every batter will strike out. But which is it? Whose certainty dominates? The batter's or the pitcher's? It can't be both. There is some deep philosophical problem here. Clearly there cannot be complete certainty in a world which has some element of free will, or surprise, or discovery. This is not just a tautology, a necessary result of what we mean by "uncertainty" and "surprise". It is an implication of limited human knowledge. Uncertainty - which makes baseball and life interesting - is inevitable in the human world.

How does this carry over to human speech?

It is said of the Wright brothers that they thought so synergistically that one brother could finish an idea or sentence begun by the other. If there is no uncertainty in what I am going to say, then you will be bored with my conversation, or at least, you won't learn anything from me. It is because you don't know what I mean by, for instance, "robustness", that my speech on this topic is enlightening (and maybe interesting). And it is because you disagree with me about what robustness means (and you tell me so), that I can perhaps extend my own understanding.

So, uncertainty is inevitable in a world that is rich enough to have surprise or free will. Furthermore, this uncertainty leads to a process - through speech - of discovery and new understanding. Uncertainty, and the use of language, leads to discovery.

Isn't baseball an interesting game?





Squirrels and Stock Brokers, Or: Innovation Dilemmas, Robustness and Probability

Decisions are made in order to achieve desirable outcomes. An innovation dilemma arises when a seemingly more attractive option is also more uncertain than other options. In this essay we explore the relation between the innovation dilemma and the robustness of a decision, and the relation between robustness and probability. A decision is robust to uncertainty if it achieves required outcomes despite adverse surprises. A robust decision may differ from the seemingly best option. Furthermore, robust decisions are not based on knowledge of probabilities, but can still be the most likely to succeed.

Squirrels, Stock-Brokers and Their Dilemmas




Decision problems.
Imagine a squirrel nibbling acorns under an oak tree. They're pretty good acorns, though a bit dry. The good ones have already been taken. Over in the distance is a large stand of fine oaks. The acorns there are probably better. But then, other squirrels can also see those trees, and predators can too. The squirrel doesn't need to get fat, but a critical caloric intake is necessary before moving on to other activities. How long should the squirrel forage at this patch before moving to the more promising patch, if at all?

Imagine a hedge fund manager investing in South African diamonds, Australian uranium, Norwegian kroner, and Singapore semiconductors. The returns have been steady and good, but not very exciting. A new hi-tech start-up venture has just turned up. It looks promising, has solid backing, and could be very interesting. The manager doesn't need to earn boundless returns, but it is necessary to earn at least a tad more than the competition (who are also prowling around). How long should the manager hold the current portfolio before changing at least some of its components?

These are decision problems, and like many other examples, they share three traits: critical needs must be met; the current situation may or may not be adequate; other alternatives look much better but are much more uncertain. To change, or not to change? What strategy to use in making a decision? What choice is the best bet? Betting is a surprising concept, as we have seen before; can we bet without knowing probabilities?

Solution strategies.
The decision is easy in either of two extreme situations, and their analysis will reveal general conclusions.

One extreme is that the status quo is clearly insufficient. For the squirrel this means that these crinkled rotten acorns won't fill anybody's belly even if one nibbled here all day long. Survival requires trying the other patch regardless of the fact that there may be many other squirrels already there and predators just waiting to swoop down. Similarly, for the hedge fund manager, if other funds are making fantastic profits, then something has to change or the competition will attract all the business.

The other extreme is that the status quo is just fine, thank you. For the squirrel, just a little more nibbling and these acorns will get us through the night, so why run over to unfamiliar oak trees? For the hedge fund manager, profits are better than those of any credible competitor, so uncertain change is not called for.

From these two extremes we draw an important general conclusion: the right answer depends on what you need. To change, or not to change, depends on what is critical for survival. There is no universal answer, like, "Always try to improve" or "If it's working, don't fix it". This is a very general property of decisions under uncertainty, and we will call it preference reversal. The agent's preference between alternatives depends on what the agent needs in order to "survive".

The decision strategy that we have described is attuned to the needs of the agent. The strategy attempts to satisfy the agent's critical requirements. If the status quo would reliably do that, then stay put; if not, then move. Following the work of Nobel Laureate Herbert Simon, we will call this a satisficing decision strategy: one which satisfies a critical requirement.

"Prediction is always difficult, especially of the future." - Robert Storm Petersen

Now let's consider a different decision strategy that squirrels and hedge fund managers might be tempted to use. The agent has obtained information about the two alternatives by signals from the environment. (The squirrel sees grand verdant oaks in the distance, the fund manager hears of a new start up.) Given this information, a prediction can be made (though the squirrel may make this prediction based on instincts and without being aware of making it). Given the best available information, the agent predicts which alternative would yield the better outcome. Using this prediction, the decision strategy is to choose the alternative whose predicted outcome is best. We will call this decision strategy best-model optimization. Note that this decision strategy yields a single universal answer to the question facing the agent. This strategy uses the best information to find the choice that - if that information is correct - will yield the best outcome. Best-model optimization (usually) gives a single "best" decision, unlike the satisficing strategy that returns different answers depending on the agent's needs.

There is an attractive logic - and even perhaps a moral imperative - to use the best information to make the best choice. One should always try to do one's best. But the catch in the argument for best-model optimization is that the best information may actually be grievously wrong. Those fine oak trees might be swarming with insects who've devoured the acorns. Best-model optimization ignores the agent's central dilemma: stay with the relatively well known but modest alternative, or go for the more promising but more uncertain alternative.

"Tsk, tsk, tsk" says our hedge fund manager. "My information already accounts for the uncertainty. I have used a probabilistic asset pricing model to predict the likelihood that my profits will beat the competition for each of the two alternatives."

Probabilistic asset pricing models are good to have. And the squirrel similarly has evolved instincts that reflect likelihoods. But a best-probabilistic-model optimization is simply one type of best-model optimization, and is subject to the same vulnerability to error. The world is full of surprises. The probability functions that are used are quite likely wrong, especially in predicting the rare events that the manager is most concerned to avoid.

Robustness and Probability

Now we come to the truly amazing part of the story. The satisficing strategy does not use any probabilistic information. Nonetheless, in many situations, the satisficing strategy is actually a better bet (or at least not a worse bet), probabilistically speaking, than any other strategy, including best-probabilistic-model optimization. We have no probabilistic information in these situations, but we can still maximize the probability of success (though we won't know the value of this maximum).

When the satisficing decision strategy is the best bet, this is, in part, because it is more robust to uncertainty than any other strategy. A decision is robust to uncertainty if it achieves required outcomes even if adverse surprises occur. In many important situations (though not invariably), more robustness to uncertainty is equivalent to being more likely to succeed or survive. When this is true we say that robustness is a proxy for probability.

A thorough analysis of the proxy property is rather technical. However, we can understand the gist of the idea by considering a simple special case.

Let's continue with the squirrel and hedge fund examples. Suppose we are completely confident about the future value (in calories or dollars) of not making any change (staying put). In contrast, the future value of moving is apparently better though uncertain. If staying put would satisfy our critical requirement, then we are absolutely certain of survival if we do not change. Staying put is completely robust to surprises so the probability of success equals 1 if we stay put, regardless of what happens with the other option. Likewise, if staying put would not satisfy our critical requirement, then we are absolutely certain of failure if we do not change; the probability of success equals 0 if we stay, and moving cannot be worse. Regardless of what probability distribution describes future outcomes if we move, we can always choose the option whose likelihood of success is greater (or at least not worse). This is because staying put is either sure to succeed or sure to fail, and we know which.

This argument can be extended to the more realistic case where the outcome of staying put is uncertain and the outcome of moving, while seemingly better than staying, is much more uncertain. The agent can know which option is more robust to uncertainty, without having to know probability distributions. This implies, in many situations, that the agent can choose the option that is a better bet for survival.
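The special case can be sketched numerically. Here is a minimal Python toy (the function names and every number are my assumptions, not the essay's): the current patch's yield is known, the distant patch's yield is uncertain, the satisficing rule stays put whenever the known yield meets the critical requirement, and best-estimate optimization chases the higher predicted yield.

```python
import random

REQUIREMENT = 10    # critical calories needed to survive
STAY_VALUE = 12     # known yield of the current patch
MOVE_ESTIMATE = 20  # predicted yield of the distant patch (uncertain)

def satisficing_choice():
    # Stay if the known option reliably meets the critical requirement.
    return "stay" if STAY_VALUE >= REQUIREMENT else "move"

def best_model_choice():
    # Pick the option whose predicted outcome is best.
    return "move" if MOVE_ESTIMATE > STAY_VALUE else "stay"

def simulate(choice, trials=100_000, seed=0):
    """Estimate survival probability; the distant patch's true yield
    spreads widely around the estimate (the insects may have gotten
    there first), while staying put has no uncertainty at all."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        move_value = rng.uniform(0, 2 * MOVE_ESTIMATE)
        value = STAY_VALUE if choice == "stay" else move_value
        successes += value >= REQUIREMENT
    return successes / trials

print(satisficing_choice(), simulate(satisficing_choice()))
print(best_model_choice(), simulate(best_model_choice()))
```

In this toy the satisficing squirrel stays and survives with probability 1, while the best-estimate squirrel moves and starves about a quarter of the time. The point is not the particular numbers: the satisficing choice was made, and shown to be the better bet, without ever knowing the probability distribution of the uncertain option.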

Wrapping Up

The skillful decision maker not only knows a lot, but is also able to deal with conflicting information. We have discussed the innovation dilemma: When choosing between two alternatives, the seemingly better one is also more uncertain.

Animals, people, organizations and societies have developed mechanisms for dealing with the innovation dilemma. The response hinges on tuning the decision to the agent's needs, and robustifying the choice against uncertainty. This choice may or may not coincide with the putative best choice. But what seems best depends on the available - though uncertain - information.

The commendable tendency to do one's best - and to demand the same of others - can lead to putatively optimal decisions that may be more vulnerable to surprise than other decisions that would have been satisfactory. In contrast, the strategy of robustly satisfying critical needs can be a better bet for survival. Consider the design of critical infrastructure: flood protection, nuclear power, communication networks, and so on. The design of such systems is based on vast knowledge and understanding, but also confronts bewildering uncertainties and endless surprises. We must continue to improve our knowledge and understanding, while also improving our ability to manage the uncertainties resulting from the expanding horizon of our efforts. We must identify the critical goals and seek responses that are immune to surprise. 





Can We Replay History?


After the kids' party games and the birthday cake came the action-packed Steve McQueen movie. My friend's parents had rented a movie projector. They hooked up the reel and let it roll. But the high point came later when they ran the movie backwards. Bullets streamed back into guns, blows were retracted and fallen protagonists recoiled into action. The mechanism that pulls the celluloid film forward for normal showing, can pull the film in the reverse direction, rolling it back onto the feeder reel and showing the movie in reverse.

If you chuck a round pebble off a cliff it will fall in a graceful parabolic arch, gradually increasing its speed until it hits the ground. The same pebble, if shot from the point of impact, at the terminating angle and speed, will gracefully and obligingly retrace its path. (I'm ignoring wind and air friction that make things a bit more complicated.)

Deterministic mechanisms, like the movie reel mechanism or the law of gravity, are reversible.
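The pebble's reversibility can be checked with a small sketch (the cliff height, toss speed, and variable names are hypothetical, chosen only for illustration): integrate the drag-free equations of motion forward, then relaunch from the impact point with the final velocity reversed, and the pebble lands back at its starting point.

```python
import math

# A minimal sketch of a reversible deterministic mechanism: a drag-free
# pebble tossed horizontally from a 30 m cliff, then shot back from the
# impact point with its final velocity reversed. Numbers are hypothetical.
G = 9.81  # gravitational acceleration, m/s^2

def state_after(t, x, y, vx, vy):
    """Closed-form projectile state a time t later (no air resistance)."""
    return (x + vx * t,
            y + vy * t - 0.5 * G * t * t,
            vx,
            vy - G * t)

T = math.sqrt(2 * 30.0 / G)  # time for the pebble to fall 30 m

# Forward: toss the pebble horizontally from the cliff edge at (0, 30).
xf, yf, vxf, vyf = state_after(T, 0.0, 30.0, 5.0, 0.0)

# Reverse: shoot it from the impact point with the velocity negated.
xb, yb, _, _ = state_after(T, xf, yf, -vxf, -vyf)

print(round(xb, 6), round(yb, 6))  # recovers the launch point: 0.0 30.0
```

The backward run is just the forward law applied to the time-reversed state, which is exactly what running the film in reverse does to the movie.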

History is different. People's behavior is influenced by what they know. You pack an umbrella on a trip to the UK. Google develops search algorithms not search parties because their knowledge base is information technology not mountain trekking. Knowledge is powerful because it enables rational behavior: matching actions to goals. Knowledge transforms futile fumbling into intelligent behavior.

Knowledge underlies intelligent behavior, but knowledge is continually expanding. We discover new facts and relationships. We discover that things have changed. Therefore tomorrow's knowledge-based behavior will, to some extent, be unpredictable today because tomorrow's discoveries cannot be known today. Human behavior has an inherent element of indeterminism. Intelligent learning behavior cannot be completely predicted.

Personal and collective history does not unfold like a pre-woven rug. Human history is fundamentally different from the trajectory of a pebble tossed from a cliff. History is the process of uncovering the unknown and responding to this new knowledge. The existence of the unknown creates the possibility of free will. The discovery of new knowledge introduces indeterminism and irreversibility into history, as explained by the economist G.L.S. Shackle and the philosopher Karl Popper.

Nonetheless history is not erratic because each increment of new knowledge adds to the store of what was learned before. Memory is not perfect, either of individuals or groups, but it is powerful. History happens in historical context. For instance, one cannot understand the recent revolutions and upheavals in the Arab world from the perspective of 18th-century European revolutions; the historical backgrounds are too different, and the outcomes in the Middle East will be different as well. Innovation, even revolution, is spurred by new knowledge laid over the old. A female municipal official slapped a Tunisian street vendor, Mohamed Bouazizi. That slap crystallized Mr Bouazizi's knowledge of his helpless social impotence and lit the match with which he immolated himself and initiated conflagrations around the Mideast. New knowledge acts like thruster engines on the inertial body of memory. What is emerging in the Mideast is Middle Eastern, not European. What is emerging is the result of new knowledge: of the power of networking, of the mortality of dictators, of the limits of coercion, of the power of new knowledge itself and the possibilities embedded in tomorrow's unknowns.

Mistakes are made, even with the best intentions and the best possible knowledge. Even if analysts knew and understood all the actions of all actors on the stage of history, they still could not know what those people will learn tomorrow and how that new knowledge will alter their behavior. Mistakes are made because history does not unwind like a celluloid reel.

That's not to say that analysts are never ignorant, negligent, stupid or malicious. It's to say that all actions are, in a sense, mistakes. Or, the biggest mistake of all is to think that we can know the full import of our actions. We cannot, because actions are tossed, like pebbles, into the dark pit of unknown possible futures. One cannot know all possible echoes, or whether some echo might be glass-shatteringly cataclysmic.

Mistakes can sometimes be corrected, but never undone. History cannot be run backwards, and you never get a second chance. Conversely, every instant is a new opportunity because the future is always uncertain. Uncertainty is the freedom to err, and the opportunity to create and discover. 





Picking a Theory is Like Building a Boat at Sea


"We are like sailors who on the open sea must reconstruct their ship
 but are never able to start afresh from the bottom." 
Otto Neurath's analogy in the words of Willard V. Quine

Engineers, economists, social planners, security strategists, and others base their plans and decisions on theories. They often argue long and hard over which theory to use. Is it ever right to use a theory that we know is empirically wrong, especially if a true (or truer) theory is available? Why is it so difficult to pick a theory?

Let's consider two introductory examples.

You are an engineer designing a robot. You must calculate the forces needed to achieve specified motions of the robotic arms. You can base these calculations on either of two theories. One theory assumes that an object comes to rest unless a force acts upon it. Let's call this axiom A. The other theory assumes that an object moves at constant speed unless a force acts upon it. Let's call this axiom G. Axiom A agrees with observation: Nothing moves continuously without the exertion of force; an object will come to rest unless you keep pushing it. Axiom G contradicts all observation; no experiment illustrates the perpetual motion postulated by the axiom. If all else is the same, which theory should you choose?

Axiom A is Aristotle's law of inertia, which contributed little to the development of mechanical dynamics. Axiom G is Galileo's law of inertia: one of the most fruitful scientific ideas of all time. Why is an undemonstrable assertion - axiom G - a good starting point for a theory?

Consider another example.

You are an economist designing a market-based policy to induce firms to reduce pollution. You will use an economic theory to choose between policies. One theory assumes that firms face pure competition, meaning that no single firm can influence market prices. Another theory provides an agent-based game-theoretic characterization of how firms interact (without colluding) by observing and responding to the price behavior of other firms and of consumers.

Pure competition is a stylized idealization (like axiom G). Game theory is much more realistic (like axiom A), but may obscure essential patterns in its massive detail. Which theory should you use?

We will not address the question of how to choose a theory upon which to base a decision. We will focus on the question: why is theory selection so difficult? We will discuss four trade-offs.

"Thanks to the negation sign, there are as many truths as falsehoods;
we just can't always be sure which are which." Willard V. Quine

The tension between right and right. The number of possible theories is infinite, and sometimes it's hard to separate the wheat from the chaff, as suggested by the quote from Quine. As an example, I have a book called A Modern Guide to Macroeconomics: An Introduction to Competing Schools of Thought by Snowdon, Vane and Wynarczyk. It's a wonderful overview of about a dozen theories developed by leading economic scholars, many of them Nobel Prize Laureates. The theories are all fundamentally different. They use different axioms and concepts and they compete for adoption by economists. These theories have been studied and tested upside down and backwards. However, economic processes are very complex and variable, and the various theories succeed in different ways or in different situations, so the jury is still out. The choice of a theory is no simple matter because many different theories can all seem right in one way or another.

"The fox knows many things, but the hedgehog knows one big thing." Archilochus

The fox-hedgehog tension. This aphorism by Archilochus metaphorically describes two types of theories (and two types of people). Fox-like theories are comprehensive and include all relevant aspects of the problem. Hedgehog-like theories, in contrast, skip the details and focus on essentials. Axiom A is fox-like because the complications of friction are acknowledged from the start. Axiom G is hedgehog-like because inertial resistance to change is acknowledged but the complications of friction are left for later. It is difficult to choose between these types of theories because it is difficult to balance comprehensiveness against essentialism. On the one hand, all relevant aspects of the problem should be considered. On the other hand, one shouldn't get bogged down in endless details. This fox-hedgehog tension can be managed by weighing the context, goals and implications of the decision. We won't expand on this idea since we're not considering how to choose a theory; we're only examining why it's a difficult choice. However, the idea of resolving this tension by goal-directed choice motivates the third tension.

"Beyond this island of meanings which in their own nature are true or false
lies the ocean of meanings to which truth and falsity are irrelevant." John Dewey

The truth-meaning tension. Theories are collections of statements like axioms A and G in our first example. Statements carry meaning, and statements can be either true or false. Truth and meaning are different. For instance, "Archilochus was a Japanese belly dancer" has meaning, but is not true. The quote from Dewey expresses the idea that "meaning" is a broader description of statements than "truth". All true statements mean something, but not all meaningful statements are true. That does not imply, however, that all untrue meaningful statements are false, as we will see.

We know the meanings of words and sentences from experience with language and life. A child learns the meanings of words - chair, mom, love, good, bad - by experience. Meanings are learned by pointing - this is a chair - and also by experiencing what it means to love or to be good or bad.

Truth is a different concept. John Dewey wrote that

"truths are but one class of meanings, namely, those in which a claim to verifiability by their consequences is an intrinsic part of their meaning. Beyond this island of meanings which in their own nature are true or false lies the ocean of meanings to which truth and falsity are irrelevant. We do not inquire whether Greek civilization was true or false, but we are immensely concerned to penetrate its meaning."

A true statement, in Dewey's sense, is one that can be confirmed by experience. Many statements are meaningful, even important and useful, but neither true nor false in this experimental sense. Axiom G is an example.

Our quest is to understand why the selection of a theory is difficult. Part of the challenge derives from the tension between meaning and truth. We select a theory for use in formulating and evaluating a plan or decision. The decision has implications: what would it mean to do this rather than that? Hence it is important that the meaning of the theory fit the context of the decision. Indeed, hedgehogs would say that getting the meaning and implication right is the essence of good decision making.

But what if a relevantly meaningful theory is unprovable or even false? Should we use a theory that is meaningful but not verifiable by experience? Should we use a meaningful theory that is even wrong? This quandary is related to the fox-hedgehog tension because the fox's theory is so full of true statements that its meaning may be obscured, while the hedgehog's bare-bones theory has clear relevance to the decision to be made, but may be either false or too idealized to be tested.

Galileo's axiom of inertia is an idealization that is unsupported by experience because friction can never be avoided. Axiom G assumes conditions that cannot be realized so the axiom can never be tested. Likewise, pure competition is an idealization that is rarely if ever encountered in practice. But these theories capture the essence of many situations. In practical terms, what it means to get the robotic arm from here to there is to apply net forces that overcome Galilean inertia. But actually designing a robot requires considering details of dissipative forces like friction. What it means to be a small business is that the market price of your product is beyond your control. But actually running a business requires following and reacting to prices in the store next door.

It is difficult to choose between a relevantly meaningful but unverifiable theory, and a true theory that is perhaps not quite what we mean.

The knowledge-ignorance tension. Recall that we are discussing theories in the service of decision-making by engineers, social scientists and others. A theory should facilitate the use of our knowledge and understanding. However, in some situations our ignorance is vast and our knowledge will grow. Hence a theory should also account for ignorance and be able to accommodate new knowledge.

Let's take an example from theories of decision. The independence axiom is fundamental in various decision theories, for instance in von Neumann-Morgenstern expected utility theory. It says that one's choices should be independent of irrelevant alternatives. Suppose you are offered the dinner choice between chicken and fish, and you choose chicken. The server returns a few minutes later saying that beef is also available. If you switch your choice from chicken to fish you are violating the independence axiom. Beef is preferred less than both chicken and fish, so adding the beef option shouldn't alter the chicken-fish preference.

But let's suppose that when the server returned and mentioned beef, your physician advised you to reduce your cholesterol intake (so your preference for beef is lowest) which prompted your wife to say that you should eat fish at least twice a week because of vitamins in the oil. So you switch from chicken to fish. Beef is not chosen, but new information that resulted from introducing the irrelevant alternative has altered the chicken-fish preference.
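The dinner story can be made concrete with a small sketch (the utility numbers and names here are hypothetical, chosen only to illustrate the point): a maximizer with a fixed utility table never switches between two options when a less-preferred third option is added; the switch occurs only because the mention of beef carried new information that re-scored the original options.

```python
# Hypothetical sketch: choice as utility maximization over a menu.
def choose(menu, utility):
    """Return the option in the menu with the highest utility score."""
    return max(menu, key=lambda option: utility[option])

# Before the server returns: chicken is preferred to fish (made-up scores).
before = {"chicken": 2.0, "fish": 1.0, "beef": 0.5}
first_choice = choose(["chicken", "fish"], before)

# Mentioning beef triggers new information (cholesterol, fish oil) that
# re-ranks the ORIGINAL options; beef itself still ranks last.
after = {"chicken": 1.0, "fish": 2.0, "beef": 0.5}
second_choice = choose(["chicken", "fish", "beef"], after)

print(first_choice, second_choice)  # chicken fish
```

Under either fixed table the axiom holds; the apparent violation arises because the menu change itself altered the table, which is exactly the role of unexpected new knowledge in the argument.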

One could argue for the independence axiom by saying that it applies only when all relevant information (like considerations of cholesterol and fish oil) is taken into account. On the other hand, one can argue against the independence axiom by saying that new relevant information quite often surfaces unexpectedly. The difficulty is to judge the extent to which ignorance and the emergence of new knowledge should be central in a decision theory.

Wrapping up. Theories express our knowledge and understanding about the unknown and confusing world. Knowledge begets knowledge. We use knowledge and understanding - that is, theory - in choosing a theory. The process is difficult because it's like rebuilding a boat on the open sea, as Otto Neurath's analogy suggests.