From clinical.diabetesjournals.org:

Effects of Glycemic Control on Diabetes Complications and on the Prevention of Diabetes. Jay S. Skyler. Oct 1, 2004; 22:162-166. Feature Articles.
Diabetes and Periodontal Infection: Making the Connection. Janet H. Southerland. Oct 1, 2005; 23:171-178. Feature Articles.
Diabetes and Back Pain: Markers of Diabetes Disease Progression Are Associated With Chronic Back Pain. Lorenzo Rinaldo. Jul 1, 2017; 35:126-131. Feature Articles.
Diabetes Self-management Education and Support in Type 2 Diabetes: A Joint Position Statement of the American Diabetes Association, the American Association of Diabetes Educators, and the Academy of Nutrition and Dietetics. Margaret A. Powers. Apr 1, 2016; 34:70-80. Position Statements.
Integration of Clinical Psychology in the Comprehensive Diabetes Care Team. Steven B. Leichter. Jul 1, 2004; 22:129-131. The Business of Diabetes.
The Death of the "1800-Calorie ADA Diet". Irl B. Hirsch. Apr 1, 2002; 20. Editorials.
The Potential of Group Visits in Diabetes Care. Andrew M. Davis. Apr 1, 2008; 26:58-62. Feature Articles.
Clarifying the Role of Insulin in Type 2 Diabetes Management. John R. White. Jan 1, 2003; 21. Feature Articles.
Therapeutic Inertia is a Problem for All of Us. Stephen Brunton. Apr 1, 2019; 37:105-106. Editorials.
Diapression: An Integrated Model for Understanding the Experience of Individuals With Co-Occurring Diabetes and Depression. Paul Ciechanowski. Apr 1, 2011; 29:43-49. Feature Articles.
SGLT-2 Inhibitors: A New Mechanism for Glycemic Control. Edward C. Chao. Jan 1, 2014; 32:4-11. Feature Articles.
Self-Monitoring of Blood Glucose: The Basics. Evan M. Benjamin. Jan 1, 2002; 20. Practical Pointers.
PROactive: A Sad Tale of Inappropriate Analysis and Unjustified Interpretation. Jay S. Skyler. Apr 1, 2006; 24:63-65. Commentary.
Persistence of Continuous Glucose Monitoring Use in a Community Setting 1 Year After Purchase. James Chamberlain. Jul 1, 2013; 31:106-109. Feature Articles.
Interdisciplinary Team Care for Diabetic Patients by Primary Care Physicians, Advanced Practice Nurses, and Clinical Pharmacists. David Willens. Apr 1, 2011; 29:60-68. Feature Articles.
Insulin Strategies for Primary Care Providers. Karen L. Herbst. Jan 1, 2002; 20. Feature Articles.
Opportunities and Challenges for Biosimilars: What's on the Horizon in the Global Insulin Market? Lisa S. Rotenstein. Oct 1, 2012; 30:138-150. Features.
Diabetes Management Issues for Patients With Chronic Kidney Disease. Kerri L. Cavanaugh. Jul 1, 2007; 25:90-97. Feature Articles.
Management of Diabetic Peripheral Neuropathy. Andrew J.M. Boulton. Jan 1, 2005; 23:9-15. Feature Articles.
Building Therapeutic Relationships: Choosing Words That Put People First. Jane K. Dickinson. Jan 1, 2017; 35:51-54. Commentary.
Application of Adult-Learning Principles to Patient Instructions: A Usability Study for an Exenatide Once-Weekly Injection Device. Gayle Lorenzi. Sep 1, 2010; 28:157-162. Bridges to Excellence.
Engaging Patients in Education for Self-Management in an Accountable Care Environment. Christine A. Beebe. Jul 1, 2011; 29:123-126. Practical Pointers.
Helping Patients Make and Sustain Healthy Changes: A Brief Introduction to Motivational Interviewing in Clinical Diabetes Care. Michele Heisler. Oct 1, 2008; 26:161-165. Practical Pointers.
Hospital Management of Hyperglycemia. Kristen B. Campbell. Apr 1, 2004; 22:81-88. Practical Pointers.
Diabetes Self-Management in a Community Health Center: Improving Health Behaviors and Clinical Outcomes for Underserved Patients. Daren Anderson. Jan 1, 2008; 26:22-27. Bridges to Excellence.
Cardiac Manifestations of Congenital Generalized Lipodystrophy. Vani P. Sanon. Oct 1, 2016; 34:181-186. Feature Articles.
Standards of Medical Care in Diabetes--2019 Abridged for Primary Care Providers. American Diabetes Association. Jan 1, 2019; 37:11-34. Position Statements.
Perspectives in Gestational Diabetes Mellitus: A Review of Screening, Diagnosis, and Treatment. Jennifer M. Perkins. Apr 1, 2007; 25:57-62. Feature Articles.
The Disparate Impact of Diabetes on Racial/Ethnic Minority Populations. Edward A. Chow. Jul 1, 2012; 30:130-133. Diabetes Advocacy.
Standards of Medical Care in Diabetes--2016 Abridged for Primary Care Providers. American Diabetes Association. Jan 1, 2016; 34:3-21. Position Statements.
What's So Tough About Taking Insulin? Addressing the Problem of Psychological Insulin Resistance in Type 2 Diabetes. William H. Polonsky. Jul 1, 2004; 22:147-150. Practical Pointers.
Standards of Medical Care in Diabetes--2018 Abridged for Primary Care Providers. American Diabetes Association. Jan 1, 2018; 36:14-37. Position Statements.
Standards of Medical Care in Diabetes--2017 Abridged for Primary Care Providers. American Diabetes Association. Jan 1, 2017; 35:5-26. Position Statements.
Standards of Medical Care in Diabetes--2015 Abridged for Primary Care Providers. American Diabetes Association. Apr 1, 2015; 33:97-111. Position Statements.
Empowerment and Self-Management of Diabetes. Martha M. Funnell. Jul 1, 2004; 22:123-127. Feature Articles.
Microvascular and Macrovascular Complications of Diabetes. Michael J. Fowler. Apr 1, 2008; 26:77-82. Diabetes Foundation.
From blog.richmond.edu:

10 Examples of Heroism Arising From the COVID-19 Pandemic. Scott T. Allison. Apr 4, 2020. (Commentary and Analysis) "In any tragedy or crisis, you will see many people standing out and stepping up to save lives and make the world a better place. These heroic individuals can range from leaders of nations to ordinary citizens who rise to the occasion to help others in need. During this COVID-19 pandemic, ..."

The Miniseries 'Devs' Delivers a Delicious Dose of Heroism and Villainy. Scott T. Allison. Apr 20, 2020. (Commentary and Analysis) "Devs is the ideal TV mini-series for people to sink their teeth into, for many reasons: (1) it's both science and science-fiction; (2) it's a brilliant mix of psychology, philosophy, religion, and technology; (3) it tantalizes us with the mysteries of love, life, death, time, and space; and (4) it features a ..."

Heroism Science: Call for Papers, Special Issue: The Heroism of Whistleblowers. Apr 21, 2020. (Activist Heroes) Edited by Ari Kohen, Brian Riches, and Matt Langdon. Whistleblowers speak up with "concerns or information about wrongdoing inside organizations and institutions." As such, whistleblowing "can be one of the most important and difficult forms of heroism in modern society" (Brown, 2016, p. 1). ...
No-Failure Design and Disaster Recovery: Lessons from Fukushima. decisions-and-info-gaps.blogspot.com, Aug 9, 2011.

One of the striking aspects of the early stages of the nuclear accident at Fukushima-Daiichi last March was the nearly total absence of disaster recovery capability. For instance, while Japan is a super-power of robotic technology, the nuclear authorities had to import robots from France for probing the damaged nuclear plants. Fukushima can teach us an important lesson about technology.

The failure of critical technologies can be disastrous. The crash of a civilian airliner can cause hundreds of deaths. The meltdown of a nuclear reactor can release highly toxic isotopes. Failure of flood protection systems can result in vast death and damage. Society therefore insists that critical technologies be designed, operated and maintained to extremely high levels of reliability. We benefit from technology, but we also insist that the designers and operators "do their best" to protect us from its dangers.

Industries and government agencies that provide critical technologies almost invariably act in good faith, for a range of reasons: morality dictates responsible behavior, liability legislation establishes sanctions for irresponsible behavior, and economic or political self-interest makes continuous safe operation desirable.

The language of performance-optimization (not only doing our best, but achieving the best) may tend to undermine the successful management of technological danger. A probability of severe failure of one in a million per device per year is exceedingly, and very reassuringly, small. When we honestly believe that we have designed and implemented a technology to have a vanishingly small probability of catastrophe, we can honestly ignore the need for disaster recovery.

Or can we?

Let's contrast this with an ethos that is consistent with a thorough awareness of the potential for adverse surprise. We now acknowledge that our predictions are uncertain, perhaps highly uncertain on some specific points. We attempt to achieve very demanding outcomes (for instance, vanishingly small probabilities of catastrophe), but we recognize that our ability to reliably calculate such small probabilities is compromised by the deficiency of our knowledge and understanding. We robustify ourselves against those deficiencies by choosing a design that would be acceptable over a wide range of deviations from our current best understanding. (This is called "robust-satisficing".) Not only does a "vanishingly small probability of failure" still entail the possibility of failure, but our predictions of that probability may err.

Acknowledging the need for disaster recovery capability (DRC) is awkward and uncomfortable for designers and advocates of a technology. We would much rather believe that DRC is not needed, that we have in fact made catastrophe negligible. But let's not conflate good-faith attempts to deal with complex uncertainties with guaranteed outcomes based on full knowledge. Our best models are in part wrong, so we robustify against the designer's bounded rationality. But robustness cannot guarantee success. The design and implementation of DRC is a necessary part of the design of any critical technology, and is consistent with the strategy of robust satisficing.

One final point: moral hazard and its dilemma. The design of any critical technology entails two distinct and essential elements: failure prevention and disaster recovery. What economists call a "moral hazard" exists, since the failure prevention team might rely on the disaster-recovery team, and vice versa. Each team might, at least implicitly, depend on the capabilities of the other team, and thereby relinquish some of its own responsibility. Institutional provisions are needed to manage this conflict.

The alleviation of this moral hazard entails a dilemma. Considerations of failure prevention and disaster recovery must be combined in the design process. The design teams must be aware of each other, and even collaborate, because a single coherent system must emerge. But we don't want either team to relinquish any responsibility. On the one hand, we want the failure prevention team to work as though there is no disaster recovery, and the disaster recovery team to presume that failures will occur. On the other hand, we want these teams to collaborate on the design.

This moral hazard and its dilemma do not obviate the need for both elements of the design. Fukushima has taught us an important lesson by highlighting the special challenge of high-risk critical technologies: design so that failure cannot occur, and prepare to respond to the unanticipated.
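The robust-satisficing idea above can be sketched numerically. Everything in the snippet is an illustrative assumption, not anything from the post: each candidate design has a best-estimate failure probability and an uncertainty scale (larger scale means the estimate rests on shakier models), and its robustness is the largest fractional estimation error it can absorb while still meeting the reliability requirement.

```python
P_REQ = 1e-4  # required failure probability per year (assumed for illustration)

# (nominal failure probability, uncertainty scale) for two hypothetical designs
designs = {
    "aggressive":   (2e-6, 20.0),  # lowest nominal estimate, novel technology
    "conservative": (1e-5, 2.0),   # higher estimate, well-understood design
}

def robustness(p_nominal, scale, p_req=P_REQ):
    """Largest error horizon h such that p_nominal * (1 + h*scale) <= p_req."""
    if p_nominal > p_req:
        return 0.0  # requirement fails even with a perfect estimate
    return (p_req / p_nominal - 1.0) / scale

for name, (p, s) in designs.items():
    print(f"{name}: nominal p = {p:.0e}, robustness = {robustness(p, s):.2f}")
```

With these made-up numbers the nominally better "aggressive" design tolerates a smaller error horizon than the "conservative" one, so the robust-satisficing ranking reverses the nominal ranking: the nominal optimum and the robust choice need not coincide, which is the point of the post.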
(Even) God is a Satisficer. decisions-and-info-gaps.blogspot.com, Aug 12, 2011.

To 'satisfice' means "to decide on and pursue a course of action that will satisfy the minimum requirements necessary to achieve a particular goal" (Oxford English Dictionary). Herbert Simon (1978 Nobel Prize in Economics) was the first to use the term in this technical sense, which is an old alteration of the ordinary English word "satisfy". Simon wrote (Psychological Review, 63(2), 129-138 (1956)): "Evidently, organisms adapt well enough to 'satisfice'; they do not, in general, 'optimize'." Agents satisfice, according to Simon, due to limitations of their information, understanding, and cognitive or computational ability. These limitations, which Simon called "bounded rationality", force agents to look for solutions that are good enough, though not necessarily optimal. The optimum may exist, but it cannot be known by the resource- and information-limited agent.

There is a deep psychological motivation for satisficing, as Barry Schwartz discusses in The Paradox of Choice: Why More Is Less. "When people have no choice, life is almost unbearable." But as the number and variety of choices grows, the challenge of deciding "no longer liberates, but debilitates. It might even be said to tyrannize" (p. 2). "It is maximizers who suffer most in a culture that provides too many choices" (p. 225), because their expectations cannot be met, they regret missed opportunities, worry about social comparison, and so on. Maximizers may acquire or achieve more than satisficers, but satisficers will tend to be happier.

Psychology is not the only realm in which satisficing finds its roots. Satisficing, as a decision strategy, has systemic or structural advantages that suggest its prevalence even in situations where the complexity of the human psyche is irrelevant. We will discuss an example from the behavior of animals.

Several years ago an ecological colleague of mine at the Technion, Prof. Yohay Carmel, posed the following question: Why do foraging animals move from one feeding site to another later than would seem to be suggested by strategies aimed at maximizing caloric intake? Of course, animals have many goals in addition to foraging. They must keep warm (or cool), evade predators, rest, reproduce, and so on. Many mathematical models of foraging attempt to predict "patch residence times" (PRTs): how long the animal stays at one feeding patch before moving to the next one. A common conclusion is that patch residence times are under-predicted when the model assumes that the animal tries to maximize caloric intake. Models do exist which "patch up" the PRT paradox, but the quandary still exists.

Yohay and I wrote a paper in which we explored a satisficing, rather than maximizing, model for patch residence time. Here's the idea. The animal needs a critical amount of energy to survive until the next foraging session. More food might be nice, but it's not necessary for survival. The animal's foraging strategy must maximize the confidence in achieving the critical caloric intake. So maximization is taking place, but not maximization of the substantive "good" (calories); rather, maximization of the confidence (or reliability, or likelihood, but these are more technical terms) of meeting the survival requirement. We developed a very simple foraging model based on info-gap theory. The model predicts that PRTs for a large number of species, including invertebrates, birds and mammals, tend to be longer (and thus more realistic) than predicted by energy-maximizing models.

This conclusion, that satisficing predicts observed foraging times better than maximizing, is tentative and preliminary (like most scientific conclusions). Nonetheless, it seems to hold a grain of truth, and it suggests an interesting idea. Consider the following syllogism:

1. Evolution selects those traits that enhance the chance of survival.
2. Animals seem to have evolved foraging strategies which satisfice (rather than maximize) energy intake.
3. Hence satisficing seems to be competitively advantageous. Satisficing seems to be a better bet than maximizing.

Unlike my psychologist colleague Barry Schwartz, we are not talking about happiness or emotional satisfaction. We're talking about survival of dung flies or blue jays. It seems that aiming to do good enough, but not necessarily the best possible, is the way the world is made.

And this brings me to the suggestion that (even) God is a satisficer. The word "good" appears quite early in the Bible: in the 4th verse of the 1st chapter of Genesis, the very first book: "And God saw the light [that had just been created], that it was good...". At this point, when the world is just emerging out of tohu v'vohu (chaos), we should probably understand the word "good" as a binary category, as distinct from "bad" or "chaos". The meaning of "good" is subsequently refined through examples in the coming verses. God creates dry land and oceans and sees that it is good (1:10). Grass and fruit trees are seen to be good (1:12). The sun and moon are good (1:16-18). Swarming sea creatures, birds, and beasts are good (1:20-21, 25).

And now comes a real innovation. God reviews the entire creation and sees that it is very good (1:31). It turns out that goodness comes in degrees; it's not simply binary, good or bad. "Good" requires judgment; ethics is born. But what particularly interests me here is that God's handiwork isn't excellent. Shouldn't we expect the very best? I'll leave this question to the theologians, but it seems to me that God is a satisficer.
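A toy numerical version of the patch-residence question may make the contrast concrete. The functional form and all numbers below are illustrative assumptions, and the "robust satisficer" rule (meet the critical intake even if the true gain rate is only half the estimate) is a crude stand-in for the info-gap analysis in the actual paper, not a reproduction of it.

```python
import math

# Energy gained after t minutes in a patch: g(t) = G*(1 - exp(-rate*t)),
# with diminishing returns as the patch is depleted.
G, r, tau = 10.0, 1.0, 1.0   # patch yield, nominal gain rate, travel time
E_CRIT = 6.0                 # intake needed to survive to the next bout

def gain(t, rate=r):
    return G * (1.0 - math.exp(-rate * t))

# Rate maximizer (marginal-value style): leave at the t that maximizes
# long-run intake g(t)/(t + tau); found here by a simple grid search.
t_max = max((i * 0.001 for i in range(1, 10000)),
            key=lambda t: gain(t) / (t + tau))

# Robust satisficer: stay until the critical intake is met even if the
# true rate is only half the estimate.
t_sat = next(i * 0.001 for i in range(1, 10000)
             if gain(i * 0.001, rate=r / 2) >= E_CRIT)

print(f"maximizer leaves at t={t_max:.2f}, satisficer at t={t_sat:.2f}")
```

For these assumed numbers the satisficer's patch residence time comes out longer than the rate-maximizer's, mirroring the direction of the result described in the post.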
The Pains of Progress. decisions-and-info-gaps.blogspot.com, Sep 30, 2011.

"To measure time by how little we change is to find how little we've lived, but to measure time by how much we've lost is to wish we hadn't changed at all." (Andre Aciman)

The last frontier is not the Antarctic, or the oceans, or outer space. The last frontier is The Unknown. We mentioned in an earlier essay that uncertainty, which makes baseball and life interesting, is inevitable in the human world. Life will continue to be interesting as long as the world is rich in unknowns waiting to be discovered. Progress is possible if propitious discoveries can be made. Progress, however, comes with costs.

The emblem of my university entwines a billowing smokestack and a cogwheel in the first letter of the institution's name. When this emblem was adopted (probably in 1951), these were optimistic symbols of progress. Cogwheels are no longer 'hi-tech' (though we still need them), and smoke has been banished from polite company. But our emblem is characteristic of industrial society, which has seared Progress on our hearts and minds.

Progress is accompanied by painful tensions. On the one hand, progress is nurtured by stability, cooperation, and leisure. On the other hand, progress grows out of change, conflict, and stress. A society's progressiveness reflects its balance of each of these three pairs of attributes. In the most general terms, progressiveness reflects social and individual attitudes to uncertainty. Let's consider the three pairs of attributes one at a time.

Change and stability. Not all change is progress, but all progress is change. Change is necessary for progress, by definition, and progress can be very disruptive. The disruptiveness sometimes arises from unexpected consequences. J.B.S. Haldane wrote in 1923 that "the late war is only an example of the disruptive result that we may constantly expect from the progress of science." On the other hand, progressives employ and build on existing capabilities. The entrepreneur depends on stable property rights before risking venture capital. The existing legal system is used to remove social injustice. Watt's steam engine extended Newcomen's more primitive model. The new building going up on campus next to my office is very disruptive, but the construction project depends on the continuity of the university despite the drilling and dust. Even revolutionaries exploit and react against the status quo, which must exist for a revolutionary to be able to revolt. (One can't revolt if nothing is revolting.) Progress grows from a patch of opportunity in a broad bed of certainty, and spreads out in unanticipated directions.

Conflict and cooperation. Conflict between vested interests and innovators is common. Watt protected his inventions with extensive patents, which may actually have retarded the further development and commercialization of steam power. Conflict is also a mechanism for selecting successful ideas. Darwinian evolution and its social analogies proceed by more successful adaptations replacing less successful ones. On the other hand, cooperation enables the specialization and expertise needed for innovation. The tool-maker cooperates with the farmer so that better tools can be made more quickly, enhancing the farmer's productivity and the artisan's welfare. Conflicts arise over what constitutes progress. Stem cell research, genetic engineering, nuclear power technology: progress or plague? Cooperative collective decision making enables the constructive resolution of these value-based conflicts.

Stress and leisure. Challenge, necessity and stress all motivate innovation. If you have no problems, you are unlikely to be looking for solutions. On the other hand, the leisure to think and tinker is a great source of innovation. Subsistence societies have no resources for invention. In assessing the implications of industrial efficiency, Bertrand Russell praised idleness in 1932, writing: "In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving ...." Stress is magnified by the unknown consequences of the stressor, while leisure is possible only in the absence of fear.

New replaces old. Yin and yang are complementary opposites that dynamically interact. In Hegel's dialectic, tension between contradictions is resolved by synthesis. Human history is written by the victors, who sometimes hardly mention those swept into Trotsky's "dustbin of history". "In the evening resides weeping; in the morning: joy" (Psalm 30:6). Change and stability; conflict and cooperation; stress and leisure.

No progress without innovation; no innovation without discovery; no discovery without the unknown; no unknown without fear. There is no progress without pain.
f The End of Science? By decisions-and-info-gaps.blogspot.com Published On :: Mon, 24 Oct 2011 07:21:00 +0000 Science is the search for and study of patterns and laws in the natural and physical worlds. Could that search become exhausted, like an over-worked coal vein, leaving nothing more to be found? Could science end? After briefly touching on several fairly obvious possible end-games for science, we explore how the vast Unknown could undermine - rather than underlie - the scientific enterprize. The possibility that science could end is linked to the reason that science is possible at all. The path we must climb in this essay is steep, but the (in)sight is worth it.Science is the process of discovering unknowns, one of which is the extent of Nature's secrets. It is possible that the inventory of Nature's unknowns is finite or conceivably even nearly empty. However, a look at open problems in science, from astronomy to zoology, suggests that Nature's storehouse of surprises is still chock full. So, from this perspective, the answer to the question 'Could science end?' is conceivably 'Yes', but most probably 'No'.Another possible 'Yes' answer is that science will end by reaching the limit of human cognitive capability. Nature's storehouse of surprises may never empty out, but the rate of our discoveries may gradually fall, reaching zero when scientists have figured out everything that humans are able to understand. Possible, but judging from the last 400 years, it seems that we've only begun to tap our mind's expansive capability.Or perhaps science - a product of human civilization - will end due to historical or social forces. The simplest such scenario is that we blow ourselves to smithereens. Smithereens can't do science. Another more complicated scenario is Oswald Spengler's theory of cyclical history, whereby an advanced society - such as Western civilization - decays and disappears, science disappearing with it. So again a tentative 'Yes'. 
But this might only be an interruption of science if later civilizations resume the search.We now explore the main mechanism by which science could become impossible. This will lead to deeper understanding of the delicate relation between knowledge and the Unknown and to why science is possible at all.One axiom of science is that there exist stable and discoverable laws of nature. As the philosopher A.N. Whitehead wrote in 1925: "Apart from recurrence, knowledge would be impossible; for nothing could be referred to our past experience. Also, apart from some regularity of recurrence, measurement would be impossible." (Science and the Modern World, p.36). The stability of phenomena is what allows a scientist to repeat, study and build upon the work of other scientists. Without regular recurrence there would be no such thing as a discoverable law of nature.However, as David Hume explained long ago in An Enquiry Concerning Human Understanding, one can never empirically prove that regular recurrence will hold in the future. By the time one tests the regularity of the future, that future has become the past. The future can never be tested, just as one can never step on the rolled up part of an endless rug unfurling always in front of you.Suppose the axiom of Natural Law turns out to be wrong, or suppose Nature comes unstuck and its laws start "sliding around", changing. Science would end. If regularity, patterns, and laws no longer exist, then scientific pursuit of them becomes fruitless.Or maybe not. Couldn't scientists search for the laws by which Nature "slides around"? Quantum mechanics seems to do just that. For instance, when a polarized photon impinges on a polarizing crystal, the photon will either be entirely absorbed or entirely transmitted, as Dirac explained. The photon's fate is not determined by any law of Nature (if you believe quantum mechanics). Nature is indeterminate in this situation. 
Nonetheless, quantum theory very accurately predicts the probability that the photon will be transmitted, and the probability that it will be absorbed. In other words, quantum mechanics establishes a deterministic law describing Nature's indeterminism.Suppose Nature's indeterminism itself becomes lawless. Is that conceivable? Could Nature become so disorderly, so confused and uncertain, so "out of joint: O, cursed spite", that no law can "set it right"? The answer is conceivably 'Yes', and if this happens then scientists are all out of a job. To understand how this is conceivable, one must appreciate the Unknown at its most rambunctious.Let's take stock. We can identify attributes of Nature that are necessary for science to be possible. The axiom of Natural Law is one necessary attribute. The successful history of science suggests that the axiom of Natural Law has held firmly in the past. But that does not determine what Nature will be in the future.In order to understand how Natural Law could come unstuck, we need to understand how Natural Law works (today). When a projectile, say a baseball, is thrown from here to there, its progress at each point along its trajectory is described, scientifically, in terms of its current position, direction of motion, and attributes such as its shape, mass and surrounding medium. The Laws of Nature enable the calculation of the ball's progress by solving a mathematical equation whose starting point is the current state of the ball.We can roughly describe most Laws of Nature as formulations of problems - e.g. mathematical equations - whose input is the current and past states of the system in question, and whose solution predicts an outcome: the next state of the system. What is law-like about this is that these problems - whose solution describes a progression, like the flight of a baseball - are constant over time. 
The scientist calculates the baseball's trajectory by solving the same problem over and over again (or all at once with a differential equation). Sometimes the problem is hard to solve, so scientists are good mathematicians, or they have big computers, (or both). But solvable they are.Let's remember that Nature is not a scientist, and Nature does not solve a problem when things happen (like baseballs speeding to home plate). Nature just does it. The scientist's Law is a description of Nature, not Nature itself.There are other Laws of Nature for which we must modify the previous description. In these cases, the Law of Nature is, as before, the formulation of a problem. Now, however, the solution of the problem not only predicts the next state of the system, but it also re-formulates the problem that must be solved at the next step. There is sort of a feedback: the next state of the system alters the rule by which subsequent progress is made. For instance, when an object falls towards earth from outer space, the law of nature that determines the motion of the object depends on the gravitational attraction. The gravitational attraction, in turn, increases as the object gets closer. Thus the problem to be solved changes as the object moves. Problems like these tend to be more difficult to solve, but that's the scientist's problem (or pleasure).Now we can appreciate how Nature might become lawlessly unstuck. Let's consider the second type of Natural Law, where the problem - the Law itself - gets modified by the evolving event. Let's furthermore suppose that the problem is not simply difficult to solve, but that no solution can be obtained in a finite amount of time (mathematicians have lots of examples of problems like this). As before, Nature itself does not solve a problem; Nature just does it. But the scientist is now in the position that no prediction can be made, no trajectory can be calculated, no model or description of the phenomenon can be obtained. 
No explicit problem statement embodying a Natural Law exists. This is because the problem to be solved evolves continuously from previous solutions, and none of the sequence of problems can be solved. The scientist's profession will become frustrating, futile and fruitless.

Nature becomes lawlessly unstuck, and science ends, if all Laws of Nature become of the modified second type. The world itself will continue because Nature solves no problems, it just does its thing. But the way it does this is now so raw and unruly that no study of nature can get to first base.

Sound like science fiction (or nightmare)? Maybe. But as far as we know, the only thing between us and this new state of affairs is the axiom of Natural Law. Scientists assume that Laws exist and are stable because past experience, together with our psychological makeup (which itself is evolutionary past experience), very strongly suggests that regular recurrence can be relied upon. But if you think that the scientists can empirically prove that the future will continue to be lawful, like the past, recall that all experience is past experience. Recall the unfurling-rug metaphor (by the time we test the future it becomes the past), and make an appointment to see Mr Hume.

Is science likely to become fruitless or boring? No. Science thrives on an Unknown that is full of surprises. Science - the search for Natural Laws - thrives even though the existence of Natural Law can never be proven. Science thrives precisely because we can never know for sure that science will not someday end.
The Language of Science and the Tower of Babel
By decisions-and-info-gaps.blogspot.com Published On :: Mon, 31 Oct 2011 06:23:00 +0000

And God said: Behold one people with one language for them all ... and now nothing that they venture will be kept from them. ... [And] there God mixed up the language of all the land. (Genesis, 11:6-9)

"Philosophy is written in this grand book the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and to read the alphabet in which it is composed. It is written in the language of mathematics." (Galileo Galilei)

Language is power over the unknown. Mathematics is the language of science, and computation is the modern voice in which this language is spoken. Scientists and engineers explore the book of nature with computer simulations of swirling galaxies and colliding atoms, crashing cars and wind-swept buildings. The wonders of nature and the powers of technological innovation are displayed on computer screens, "continually open to our gaze." The language of science empowers us to dispel confusion and uncertainty, but only with great effort do we change the babble of sounds and symbols into useful, meaningful and reliable communication. How we do that depends on the type of uncertainty against which the language struggles.

Mathematical equations encode our understanding of nature, and Galileo exhorts us to learn this code. One challenge here is that a single equation represents an infinity of situations. For instance, the equation describing a flowing liquid captures water gushing from a pipe, blood coursing in our veins, and a droplet splashing from a puddle. Gazing at the equation is not at all like gazing at the droplet. Understanding grows by exposure to pictures and examples. Computations provide numerical examples of equations that can be realized as pictures.
Computations can simulate nature, allowing us to explore at our leisure.

Two questions face the user of computations: Are we calculating the correct equations? Are we calculating the equations correctly? The first question expresses the scientist's ignorance - or at least uncertainty - about how the world works. The second question reflects the programmer's ignorance or uncertainty about the faithfulness of the computer program to the equations. Both questions deal with the fidelity between two entities. However, the entities involved are very different and the uncertainties are very different as well.

The scientist's uncertainty is reduced by the ingenuity of the experimenter. Equations make predictions that can be tested by experiment. For instance, Galileo predicted that small and large balls will fall at the same rate, as he is reported to have tested from the tower of Pisa. Equations are rejected or modified when their predictions don't match the experimenter's observation. The scientist's uncertainty and ignorance are whittled away by testing equations against observation of the real world. Experiments may be extraordinarily subtle or difficult or costly because nature's unknown is so endlessly rich in possibilities. Nonetheless, observation of nature remorselessly cuts false equations from the body of scientific doctrine. God speaks through nature, as it were, and "the Eternal of Israel does not deceive or console." (1 Samuel, 15:29). When this observational cutting and chopping is (temporarily) halted, the remaining equations are said to be "validated" (but they remain on the chopping block for further testing).

The programmer's life is, in one sense, more difficult than the experimenter's. Imagine a huge computer program containing millions of lines of code, the accumulated fruit of thousands of hours of effort by many people. How do we verify that this computation faithfully reflects the equations that have ostensibly been programmed?
Of course they've been checked again and again for typos or logical faults or syntactic errors. Very clever methods are available for code verification. Nonetheless, programmers are only human, and some infidelity may slip through. What remorseless knife does the programmer have with which to verify that the equations are correctly calculated? Testing computation against observation does not allow us to distinguish between errors in the equations, errors in the program, and compensatory errors in both.

The experimenter compares an equation's prediction against an observation of nature. Like the experimenter, the programmer compares the computation against something. However, for the programmer, the sharp knife of nature is not available. In special cases the programmer can compare against a known answer. More frequently the programmer must compare against other computations which have already been verified (by some earlier comparison). The verification of a computation - as distinct from the validation of an equation - can only use other high-level human-made results. The programmer's comparisons can only be traced back to other comparisons. It is true that the experimenter's tests are intermediated by human artifacts like calipers or cyclotrons. Nonetheless, bedrock for the experimenter is the "reality out there". The experimenter's tests can be traced back to observations of elementary real events. The programmer does not have that recourse. One might say that God speaks to the experimenter through nature, but the programmer has no such Voice upon which to rely.

The tower built of old would have reached the heavens because of the power of language. That tower was never completed because God turned talk into babble and dispersed the people across the land.
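The "known answer" case, at least, can be made concrete. A minimal sketch (the integrator and the tolerance are illustrative choices, not from the post): a trapezoid-rule program is verified against an integral we can do by hand, the integral of x squared from 0 to 1, which is exactly 1/3.

```python
# Sketch: verifying a computation against a known answer.
# The trapezoid rule approximates an integral; we check the program
# against a case whose exact answer is known from calculus.
def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

exact = 1.0 / 3.0                                   # integral of x^2 on [0, 1]
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
error = abs(approx - exact)
```

When no closed-form answer exists, the comparison target must be another, previously verified computation - exactly the regress of comparisons traced back only to other comparisons.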
Scholars have argued whether the story prescribes a moral norm, or simply describes the way things are, but the power of language has never been disputed.

The tower was never completed, just as science, it seems, has a long way to go. Genius, said Edison, is 1 percent inspiration and 99 percent perspiration. A good part of the sweat comes from getting the language right, whether mathematical equations or computer programs.

Part of the challenge is finding order in nature's bubbling variety. Each equation captures a glimpse of that order, adding one block to the structure of science. Furthermore, equations must be validated, which is only a stop-gap. All blocks crumble eventually, and all equations are fallible and likely to be falsified.

Another challenge in science and engineering is grasping the myriad implications that are distilled into an equation. An equation compresses and summarizes, while computer simulations go the other way, restoring detail and specificity. The fidelity of a simulation to the equation is usually verified by comparing against other simulations. This is like the dictionary paradox: using words to define words.

It is by inventing and exploiting symbols that humans have constructed an orderly world out of the confusing tumult of experience. With symbols, like with blocks in the tower, the sky is the limit.
Fog of War
By decisions-and-info-gaps.blogspot.com Published On :: Tue, 29 Nov 2011 07:01:00 +0000

"War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty." (Carl von Clausewitz, On War)

What makes a great general?

Hannibal changed Carthaginian strategy from naval to land warfare, and beat the Romans in nearly every encounter. Julius Caesar commanded the undying loyalty of his officers and soldiers. Napoleon Bonaparte invented the modern concept of total war with a citizen army. Was their genius in strategy, or tactics, or logistics, or charisma? Or was it crude luck? Or was it the exploitation of uncertainty?

War is profoundly influenced by technology, social organization, human psychology and political goals. Success in war requires understanding and control of these factors. War consumes vast human and material resources and demands "genius, improvisation, and energy of mind" as Winston Churchill said. And yet, Clausewitz writes: "No other human activity is so continuously or universally bound up with chance."

Why? What does this imply about the successful military commander? What does it mean for human endeavor and history in general?

Clausewitz uses the terms "chance" and "uncertainty", sometimes interchangeably, to refer to two different concepts. An event occurs by chance if it is unexpected, or its origin is unknown, or its impact is surprising. Adverse chance events provoke "uncertainty, the psychological state of discomfort from confusion or lack of information" (Katherine Herbig, reference below).

Chance and uncertainty are dangerous because they subvert plans and diminish capabilities. Soldiers have been aware of both the dangers and the advantages of surprise since they first battered each other with sticks.
Conventional military theorists aimed to avoid or ameliorate chance events by careful planning, military intelligence, training and discipline, communication, command and control. Clausewitz also recognized that steadfast faithfulness to mission and determination against adversity are essential in overcoming chance events and the debilitating effect of uncertainty. But "Clausewitz dismisses as worse than useless efforts to systematize warfare with rules and formulas. Such systems are falsely comforting, he says, because they reduce the imponderables of war to a few meagre certainties about minor matters" (Herbig). Clausewitz' most original contribution was in building a systematic theory of war in which the unavoidability of chance, and its opportunities, are central.

Why is uncertainty (in the sense of lack of knowledge) unavoidable and fundamental in war? Clausewitz' answer is expressed in his metaphor of friction. As Herbig explains:

"Friction is the decremental loss of effort and intention caused by human fallibility, compounded by danger and exhaustion. Like the mechanical phenomenon of friction that reduces the efficiency of machinery with moving parts, Clausewitz' friction reduces the efficiency of the war machine. It sums up all the little things that always go wrong to keep things from being done as easily and quickly as intended. ..."

"What makes friction more than a minor annoyance in war is its confounding with chance, which multiplies friction in random, unpredictable ways."

War, like history, runs on the cumulative effect of myriad micro-events. Small failures are compounded because war is a coordinated effort of countless local but inter-dependent occurrences. Generals, like symphony conductors, choose the score and set the pace, but the orchestra plays the notes. A mis-tuned violin, or a drummer who mis-counts his entry, can ruin the show.
Moses led the children of Israel out of Egypt, but he'd have looked pretty funny if they had scattered to the four winds. Moses' genius as a leader wasn't plied against Pharaoh (Moses had help there), but rather against endless bickering and revolt once they reached the desert.

Uncertainty originates at the tactical rather than the strategic level. The general can't know countless local occurrences: a lost supply plane, failed equipment here, over-reaction there, or complacency someplace else. As an example, the New York Times reported on 27 November 2011:

"The NATO air attack that killed at least two dozen Pakistani soldiers over the weekend reflected a fundamental truth about American-Pakistani relations when it comes to securing the unruly border with Afghanistan: the tactics of war can easily undercut the broader strategy that leaders of both countries say they share."

"The murky details complicated matters even more, with Pakistani officials saying the attack on two Pakistani border posts was unprovoked and Afghan officials asserting that Afghan and American commandos called in airstrikes after coming under fire from Pakistani territory."

Central control is critical, but also profoundly limited by the micro-event texture of history.

Conversely, uncertainty can be exploited at the tactical level by flexible and creative response to random opportunities. The field commander has local knowledge that enables decisive initiative: the sleeping sentinel, the bridge not destroyed, the deserted town. The general's brilliance is in forging a war machine whose components both exploit uncertainty and are resilient to surprise.

Uncertainty is central in history at large, like in war, because they both emerge from the churning of individual events. In democratic societies, legislatures pass laws and executive branches formulate and implement policies. But only active participation of the citizenry brings life and reality to laws and policies.
Conversely, citizen resistance or even apathy dooms the best policies to failure. This explains the failure of democratic institutions that are imported precipitously to countries with incompatible social and political traditions. Governments formulate policy, but implementation occurs in the context of social attitudes and historical memory. You can elect legislatures and presidents but you can't elect the public. Non-centralized beliefs and actions also dominate the behavior of industrial economies. The actions of countless households, firms and investors can vitiate the best laid plans of monetary and fiscal authorities. All this adds up to Clausewitz' concept of friction: global uncertainty accumulating from countless local deviations.

In peace, like in war, the successful response to uncertainty is to face it, grapple with it, exploit it, restrain it, but never hope to abolish it. Uncertainty is inevitable, and sometimes even propitious. The propensity for war is the ugliest attribute of our species. Nonetheless, what we learn about uncertainty from the study of war applies to all our endeavors: in business, in politics and beyond. Waging peace demands the same staunchness, determination and inventive flexibility in the face of the unknown, as the successful pursuit of war.

Main source: Katherine L. Herbig, 1989, Chance and Uncertainty in On War, in Michael Handel, ed., Clausewitz and Modern Strategy, Frank Cass, London, pp. 95-116.

See also: Peter Paret, 1976, Clausewitz and the State: The Man, His Theories, and His Times, re-issued 2007, Princeton University Press.
Jabberwocky. Or: Grand Unified Theory of Uncertainty???
By decisions-and-info-gaps.blogspot.com Published On :: Mon, 19 Dec 2011 07:30:00 +0000

Jabberwocky, Lewis Carroll's whimsical nonsense poem, uses made-up words to create an atmosphere and to tell a story. "Brillig", "frumious", "vorpal" and "uffish" have no lexical meaning, but they could have. The poem demonstrates that the realm of imagination exceeds the bounds of reality just as the set of possible words and meanings exceeds its real lexical counterpart.

Uncertainty thrives in the realm of imagination, incongruity, and contradiction. Uncertainty falls in the realm of science fiction as much as in the realm of science. People have struggled with uncertainty for ages and many theories of uncertainty have appeared over time. How many uncertainty theories do we need? Lots, and forever. Would we say that of physics? No, at least not forever.

Can you think inconsistent, incoherent, or erroneous thoughts? I can. (I do it quite often, usually without noticing.) For those unaccustomed to thinking incongruous thoughts, and who need a bit of help to get started, I can recommend thinking of "two meanings packed into one word like a portmanteau," like 'fuming' and 'furious' to get 'frumious' or 'snake' and 'shark' to get 'snark'.

Portmanteau words are a start. Our task now is portmanteau thoughts. Take for instance the idea of a 'thingk':

When I think a thing I've thought,
I have often felt I ought
To call this thing I think a "Thingk",
Which ought to save a lot of ink.

The participle is written "thingking",
(Which is where we save on inking,)
Because "thingking" says in just one word:
"Thinking of a thought thing." Absurd!

All this shows high-power abstraction.
(That highly touted human contraption.)
Using symbols with subtle feint,
To stand for something which they ain't.

Now that wasn't difficult: two thoughts at once. Now let those thoughts be contradictory.
To use a prosaic example: thinking the unthinkable, which I suppose is 'unthingkable'. There! You did it. You are on your way to a rich and full life of thinking incongruities, fallacies and contradictions. We can hold in our minds thoughts of 4-sided triangles, parallel lines that intersect, and endless other seeming impossibilities from super-girls like Pippi Longstocking to life on Mars (some of which may actually be true, or at least possible).

Scientists, logicians, and saints are in the business of dispelling all such incongruities, errors and contradictions. Banishing inconsistency is possible in science because (or if) there is only one coherent world. Belief in one coherent world and one grand unified theory is the modern secular version of the ancient monotheistic intuition of one universal God (in which saints tend to believe). Uncertainty thrives in the realm in which scientists and saints have not yet completed their tasks (perhaps because they are incompletable). For instance, we must entertain a wide range of conflicting conceptions when we do not yet know how (or whether) quantum mechanics can be reconciled with general relativity, or Pippi's strength reconciled with the limitations of physiology. As Henry Adams wrote:

"Images are not arguments, rarely even lead to proof, but the mind craves them, and, of late more than ever, the keenest experimenters find twenty images better than one, especially if contradictory; since the human mind has already learned to deal in contradictions."

The very idea of a rigorously logical theory of uncertainty is startling and implausible because the realm of the uncertain is inherently incoherent and contradictory. Indeed, the first uncertainty theory - probability - emerged many centuries after the invention of the axiomatic method in mathematics.
Today we have many theories of uncertainty: probability, imprecise probability, information theory, generalized information theory, fuzzy logic, Dempster-Shafer theory, info-gap theory, and more (the list is a bit uncertain). Why such a long and diverse list? It seems that in constructing a logically consistent theory of the logically inconsistent domain of uncertainty, one cannot capture the whole beast all at once (though I'm uncertain about this).

A theory, in order to be scientific, must exclude something. A scientific theory makes statements such as "This happens; that doesn't happen." Karl Popper explained that a scientific theory must contain statements that are at risk of being wrong, statements that could be falsified. Deborah Mayo demonstrated how science grows by discovering and recovering from error.

The realm of uncertainty contains contradictions (ostensible or real) such as the pair of statements: "Nine year old girls can lift horses" and "Muscle fiber generates tension through the action of actin and myosin cross-bridge cycling". A logically consistent theory of uncertainty can handle improbabilities, as can scientific theories like quantum mechanics. But a logical theory cannot encompass outright contradictions. Science investigates a domain: the natural and physical worlds. Those worlds, by virtue of their existence, are perhaps coherent in a way that can be reflected in a unified logical theory. Theories of uncertainty are directed at a larger domain: the natural and physical worlds and all imaginable (and unimaginable) other worlds. That larger domain is definitely not coherent, and a unified logical theory would seem to be unattainable. Hence many theories of uncertainty are needed.

Scientific theories are good to have, and we do well to encourage the scientists. But it is a mistake to think that the scientific paradigm is suitable to all domains, in particular, to the study of uncertainty.
Logic is a powerful tool and the axiomatic method assures the logical consistency of a theory. For instance, Leonard Savage argued that personal probability is a "code of consistency" for choosing one's behavior. Jim March compares the rigorous logic of mathematical theories of decision to strict religious morality. Consistency between values and actions is commendable, says March, but he notes that one sometimes needs to deviate from perfect morality. While "[s]tandard notions of intelligent choice are theories of strict morality ... saints are a luxury to be encouraged only in small numbers."

Logical consistency is a merit of any single theory, including a theory of uncertainty. However, insisting that the same logical consistency apply over the entire domain of uncertainty is like asking reality and saintliness to make peace.
The Age of Imagination
By decisions-and-info-gaps.blogspot.com Published On :: Mon, 09 Jan 2012 09:35:00 +0000

This is not only the Age of Information, this is also the Age of Imagination. Information, at any point in time, is bounded, while imagination is always unbounded. We are overwhelmed more by the potential for new ideas than by the admittedly vast existing knowledge. We are drunk with the excitement of the unknown. Drunks are sometimes not a pretty sight; Isaiah (28:8) is very graphic.

It is true that topical specialization occurs, in part, due to what we proudly call the explosion of knowledge. There is so much to know that one must ignore huge tracts of knowledge. But that is only half the story. The other half is that we have begun to discover the unknown, and its lure is irresistible. Like the scientific and global explorers of the early modern period - The Discoverers, as Boorstin calls them - we are intoxicated by the potential "out there", beyond the horizon, beyond the known. That intoxication can distort our vision and judgment.

Consider Reuven's comment, from long experience, that "Engineers use formulas and various equations without being aware of the theories behind them." A pithier version was said to me by an acquisitions editor at Oxford University Press: "Engineers don't read books." She should know.

Engineers are imaginative and curious. They are seekers, and they find wonderful things. But they are too engrossed in inventing and building The New to be much engaged with The Old. "Scholarship", wrote Thorstein Veblen, is "an intimate and systematic familiarity with past cultural achievements." Engineers - even research engineers and professors of engineering - spend very little time with past masters. How many computer scientists scour the works of Charles Babbage? How often do thermal engineers study the writings of Lord Kelvin?
A distinguished professor of engineering, himself a member of the US National Academy of Engineering, once told me that there is little use for journal articles more than a few years old.

Fragmentation of knowledge results from the endless potential for new knowledge. Seekers - engineers and the scientists of nature, society and humanity - move inexorably apart from one another. But nonetheless it's all connected; consilient. Technology alters how we live. Science alters what we think. How can we keep track of it all? How can we have some at least vague and preliminary sense of where we are heading and whether we value the prospect?

The first prescription is to be aware of the problem, and I greatly fear that many movers and shakers of the modern age are unaware. The second prescription is to identify who should take the lead in nurturing this awareness. That's easy: teachers, scholars, novelists, intellectuals of all sorts.

Isaiah struggled with this long ago. "Priest and prophet erred with liquor, were swallowed by wine." (Isaiah, 28:7) We are drunk with the excitement of the unknown. Who can show the way?
Genesis for Engineers
By decisions-and-info-gaps.blogspot.com Published On :: Sat, 28 Jan 2012 15:01:00 +0000

Technology has come a long way since Australopithecus first bruised their fingers chipping flint to make knives and scrapers. We are blessed to fruitfully multiply, to fill the world and to master it (Genesis 1:28). And indeed the trend of technological history is towards increasing mastery over our world. Inventors deliberately invent, but many inventions are useless or even harmful. Why is there progress and how certain is the process? Part of the answer is that good ideas catch on and bad ones get weeded out. Reality, however, is more complicated: what is 'good' or 'bad' is not always clear; unintended consequences cannot be predicted; and some ideas get lost while others get entrenched. Mastering the darkness and chaos of creation is a huge engineering challenge. But more than that, progress is painful and uncertain, and the challenge is not only technological.

An example of the weeding-out process, by which our mastery improves, comes to us in Hammurabi's code of law from 38 centuries ago:

"If a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death. If it kill the son of the owner the son of that builder shall be put to death." (Articles 229-230)

Builders who use inferior techniques, or who act irresponsibly, will be ruthlessly removed. Hammurabi's law doesn't say what techniques to use; it is a mechanism for selecting among techniques. As the level of competence rises and the rate of building collapse decreases, the law remains the same, implicitly demanding better performance after each improvement.

Hammurabi's law establishes negative incentives that weed out faulty technologies. In contrast, positive incentives can induce beneficial invention.
John Harrison (1693-1776) worked for years developing a clock for accurate navigation at sea, motivated by the British Parliament's 20,000 pound longitude prize.

Organizations, mores, laws and other institutions explain a major part of how good ideas catch on and how bad ones are abandoned. But good ideas can get lost as well. Jared Diamond relates that bow and arrow technologies emerged and then disappeared from pre-historic Australian cultures. Aboriginal mastery of the environment went up and then down. The mechanisms or institutions for selecting better tools do not always exist or operate.

Valuable technologies can be "side-lined" as well, despite apparent advantages. The CANDU nuclear reactor technology, for instance, uses natural Uranium. No isotope enrichment is needed, so its fuel cycle is disconnected from Uranium enrichment for military applications (atom bombs use highly enriched Uranium or Plutonium). CANDU's two main technological competitors - pressurized and boiling water reactors - use isotope-enriched fuel. Nuclear experts argue long (and loud) about the merits of various technologies, but no "major" or "serious" accidents (INES levels 6 or 7) have occurred with CANDU reactors, while they have with PWRs and BWRs. Nonetheless, the CANDU is a minor contributor to world nuclear power.

The long-run improvement of technology depends on incentives created by attitudes, organizations and institutions, like the Royal Society and the law. Technology modifies those attitudes and institutions, creating an interactive process whereby society influences technological development, and technology alters society. The main uncertainty in technological progress arises from unintended impacts of technology on mores, values and society as a whole. An example will make the point.

Early mechanical clocks summoned the faithful to prayer in medieval monasteries. But technological innovations may be used for generations without anyone realizing their full implications, and so it was with the clock.
The long-range influence of the mechanical clock on western civilization was the idea of "time discipline as opposed to time obedience. One can ... use public clocks to summon people for one purpose or another; but that is not punctuality. Punctuality comes from within, not from without. It is the mechanical clock that made possible, for better or for worse, a civilization attentive to the passage of time, hence to productivity and performance." (Landes, p. 7)

Unintended consequences of technology - what economists call "externalities" - can be beneficial or harmful. The unintended internalization of punctuality is beneficial (maybe). The clock example illustrates how our values gradually and unexpectedly change as a result of technological innovation. Environmental pollution and adverse climate change are harmful, even when they result from manufacturing beneficial consumer goods. Attitudes towards technological progress are beginning to change in response to perceptions of technologically-induced climate change. Pollution and climate change may someday seriously disrupt the technology-using societies that produced them. This disruption may occur either by altering social values, or by adverse material impacts, or both.

Progress occurs in historical and institutional context. Hammurabi's Code created incentives for technological change; monastic life created needs for technological solutions. Progress is uncertain because we cannot know what will be invented, and whether it will be beneficial or harmful. Moreover, inventions will change our attitudes and institutions, and thus change the process of invention itself, in ways that we cannot anticipate. The scientific engineer must dispel the "darkness over the deep" (Genesis 1:2) because mastery comes from enlightenment. But in doing so we change both the world and ourselves. The unknown is not only over "the waters" but also in ourselves.
We're Just Getting Started: A Glimpse at the History of Uncertainty
By decisions-and-info-gaps.blogspot.com Published On :: Thu, 22 Mar 2012 19:12:00 +0000

We've had our cerebral cortex for several tens of thousands of years. We've lived in more or less sedentary settlements and produced excess food for 7 or 8 thousand years. We've written down our thoughts for roughly 5 thousand years. And Science? The ancient Greeks had some, but science and its systematic application are overwhelmingly a European invention of the past 500 years. We can be proud of our accomplishments (quantum theory, polio vaccine, powered machines), and we should worry about our destructive capabilities (atomic, biological and chemical weapons). But it is quite plausible, as Koestler suggests, that we've only just begun to discover our cerebral capabilities. It is more than just plausible that the mysteries of the universe are still largely hidden from us. As evidence, consider the fact that the main theories of physics - general relativity, quantum mechanics, statistical mechanics, thermodynamics - are still not unified. And it goes without saying that the consilient unity of science is still far from us.

What holds for science in general holds also for the study of uncertainty. The ancient Greeks invented the axiomatic method and used it in the study of mathematics. Some medieval thinkers explored the mathematics of uncertainty, but it wasn't until around 1600 that serious thought was directed to the systematic study of uncertainty, and statistics as a separate and mature discipline emerged only in the 19th century. The 20th century saw a florescence of uncertainty models. Łukasiewicz discovered 3-valued logic in 1917, and in 1965 Zadeh introduced his work on fuzzy logic. In between, Wald formulated a modern version of min-max in 1945.
A plethora of other theories, including P-boxes, lower previsions, Dempster-Shafer theory, generalized information theory and info-gap theory, all suggest that the study of uncertainty will continue to grow and diversify.

In short, we have learned many facts and begun to understand our world and its uncertainties, but the disputes and open questions are still rampant and the yet-unformulated questions are endless. This means that innovations, discoveries, inventions, surprises, errors, and misunderstandings are to be expected in the study or management of uncertainty. We are just getting started.
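Wald's min-max is one of the few items in this history that fits in a few lines of code. A toy sketch (the actions, states of nature, and losses are invented for illustration): choose the action whose worst-case loss, over the whole uncertainty set, is smallest.

```python
# Sketch of Wald's min-max (minimax) decision rule.
# losses[action][state] is the loss if we take 'action' and nature is in 'state'.
# The rule: pick the action that minimizes the worst-case (maximum) loss.
def minimax(losses):
    return min(losses, key=lambda action: max(losses[action].values()))

# Hypothetical decision problem: three actions, three states of nature.
losses = {
    "cautious":   {"calm": 2, "rough": 3, "storm": 4},
    "moderate":   {"calm": 1, "rough": 2, "storm": 9},
    "aggressive": {"calm": 0, "rough": 5, "storm": 20},
}
best = minimax(losses)   # worst cases are 4, 9 and 20, so "cautious" wins
```

No probabilities over the states appear anywhere: min-max needs only the set of possibilities, which is one reason it sits apart from probability on the list of uncertainty theories.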
Decoy Pricing: Did United Airlines Fire Their Behavioral Economist?
By feeds.feedblitz.com Published On :: Tue, 29 Oct 2019 11:07:30 +0000

It appears that United Airlines has stopped using a classic decoy pricing approach for in-flight wifi options.