July 14, 2003
NYT series on Humans and Nature
Kirk Johnson is writing a brilliant series this summer in the New York Times. Using the Long Island Sound as his theme, he presents a series of cases that look at the interactions between humans and that ecosystem. To my mind he has really captured the complexity and some of the paradoxes around our interactions with the natural functions of our planet. I encourage you to take a look at his work.
July 07, 2003
Carbon Management
Like it or not, we are going to manage the carbon systems of our planet. In fact, I would argue, we already do, albeit in a completely haphazard way.
The figure above shows one rendition of the various elements of the carbon system. There are many interacting and intersecting processes. These interactions make it difficult to characterize the time scale of the system, but the IPCC places an upper bound at about 200 years. Thus the impact of carbon that was emitted from the first coal-fired industries at the beginning of the Industrial Revolution has only recently passed. Of course we have pumped plenty of CO2 into the atmosphere in its place.
There are a number of elements in the figure that are anthropogenic. Among them are various modes of transportation, energy production, and livestock production. Not shown as clearly are the effects of changes in land use. Up to 25% of the anthropogenic influence on the carbon cycle is attributed to changes in land use and land cover.
Think about what happens when Amazonian farmers move into a new region of the forest. Leaving out the details, much of the carbon that was stored in the biomass of the plants is released into the atmosphere. In addition, as the ecosystem that the soils support changes, so too does the amount and character of the carbon in the soils. (I don't know which way the sign goes in the rainforest case. Part of the problem there is that the soils are pretty poor.)
I suppose one could take issue with my assertion that we are already "managing" the carbon system. "Just because we are changing it dramatically does not necessarily mean we are managing it." My response would be something like "Fine, then we are simply mucking with it, with no idea of what we are up to. If anything this amplifies our need to take more conscious action." Wally Broecker sometimes makes the analogy of poking a beast with a sharp stick.
The Kyoto Protocol, as flawed as it may be, is an attempt to begin to purposefully address the changes that humans are making in how carbon moves through the various natural and anthropogenic systems of our planet. There are many questions around the Kyoto Protocol: whether it will ever come into force and, if it does, how well it will work.
I believe that, with respect to carbon management, a more likely scenario is one that relies heavily on private sector leadership. Most people that I have talked to think that it is quite possible that the private sector will lead in the development of at least carbon trading infrastructures. In fact such infrastructures are already being developed by companies and groups that are looking for first mover advantages. Many put the time frame of the emergence of carbon trading at less than a decade.
There are many details to be worked out, not the least of which is "how much is a ton of carbon equivalent (whatever that is) worth?" At the moment, my best estimate spans about an order of magnitude, from US$3 to US$30 per ton. Some people that I have talked to think the price range is much narrower than that and centered around US$5 per ton.
One of the features of the Kyoto Protocol that will likely emerge, either within that frame or in some other context, is the idea of carbon offsets. In this scheme an industry balances its emissions of CO2 by removing equal or greater amounts through carbon sequestration activities somewhere else. The simplest image is that of reforestation (although there are tricky accounting details having to do with why the forest is gone to begin with...). As trees grow they take carbon from the atmosphere and store it in their wood and leaves. As long as the tree lives it stores that carbon, and that amount can be thought of as offsetting an emission somewhere else. Another detail to be worked out is an accounting mechanism between the emissions and the storage.
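To make the offset bookkeeping concrete, here is a minimal sketch in Python. The emission and sequestration figures are invented for illustration, and the single subtraction glosses over exactly the accounting questions raised above (baselines, permanence, and why the forest was gone to begin with).

```python
# Toy carbon-offset ledger; all quantities are illustrative, not inventory data.

def net_emissions(emitted_tons: float, sequestered_tons: float) -> float:
    """Net position in tons CO2-equivalent (positive means a net emitter)."""
    return emitted_tons - sequestered_tons

# A hypothetical firm emits 10,000 tons and funds reforestation credited with 7,500 tons.
emitted = 10_000.0
offsets = 7_500.0
balance = net_emissions(emitted, offsets)
print(f"Net position: {balance:+,.0f} tons CO2e")

# Cost of covering the remainder at the $3-$30 per ton range discussed above.
for price in (3, 30):
    print(f"At ${price}/ton it would cost ${balance * price:,.0f} to cover the gap")
```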
June 24, 2003
A bit of recap
The theme of this blog is Earth systems management. The idea is to explore my ideas about what we know and what we need to learn about how our planet works. Building on that I want to begin to sketch out thoughts on how we might begin to organize ourselves in order to maximize the chance that human activities will be as benign as possible with respect to the health of our planet and the chance that generations to come will enjoy a good quality of life.
Tonight I am going to recap a bit on what I have written so far. Partly this is because my imagination is a bit dull and partly to help think about where to go next (are the two related?).
I have spent a fair amount of time on epistemological issues like modeling and rationality. The reason I have been working on these issues is that I want to establish a foundation for how we (scientists anyway) think about problems. While this is part of decision-making, it is very different from the kinds of things that decision scientists think about (e.g. I haven't talked at all (with the possible exception of the little bit on bounded rationality) about individual cognitive and psychological issues).
I have skirted around issues of complexity, but still haven't gotten to hierarchy or emergent phenomena. It is because I expect that to a certain extent global decisions will be emergent that I am not so interested in individual cognitive processes. (This reflects some of my ideas with respect to scale; I am willing to accept individual brains / minds as black boxes in the context of Earth systems management. (Note that this does not mean I think minds / brains are uninteresting in other contexts.))
I have talked about weather and climate (although I haven't done anything about it). Climate is the natural Earth system that I know the most about and am most interested in, but it is not the only one. Biodiversity is another major Earth systems topic. Issues of ecosystems link biodiversity and climate and are also interesting in their own right.
I have written some about public policy and perhaps hinted at some political theory.
Some of the things that need to be reviewed in upcoming entries include:
- global institutions
- the Intergovernmental Panel on Climate Change
- the many facets of globalization
- more on political theory
- some entries on important systems and cycles (e.g. carbon cycle, food webs, ecosystem services)
- and much much more...
Stay tuned.
June 23, 2003
Stormy Weather
A nice little blurb in the Times today gives a feel for how meteorologists think about the weather in statistical frames.
June weather in New York City is explainable by the location and shape of the jet stream, but no explanation for the shape of the jet stream is offered. It is noted that wet Junes tend to be followed by dry Augusts (italics added). The tendency is a statistical observation. It does not need any process explanation associated with it; it is simply something that people who follow weather statistics closely have noted.
The absence of a process explanation is a source of debate among scientists. On the one hand there is the more empirical group, who look for patterns in weather data and then develop methods for making statements about how robust those patterns are. On the other hand there is the more theoretical group, who base their science in the physics of the atmosphere (and ocean); this group develops models based in fundamental physics and roots their understanding in the underlying causal mechanisms. Ideally these two groups will get to the same answers, but their methods are fundamentally different and they don't always see eye-to-eye.
Myself, if called to comment, I would probably agree that it has been a bit wet this June, but the truth be told I haven't minded much. And if really pushed I might ask how the average rainfall has changed over the last decade or so. Is it possible that the average hasn't changed much but the variance is getting larger?...
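That last question is easy to pose in code. The sketch below uses made-up June rainfall totals (not actual New York City observations) just to show the comparison I have in mind: split a record into an earlier and a later window and compare means and variances.

```python
# Compare the mean and variance of June rainfall across two periods.
# The numbers are invented for illustration, not NYC data.
from statistics import mean, variance

early = [3.2, 4.1, 2.8, 3.9, 3.5, 3.0, 4.0, 3.6, 2.9, 3.7]   # inches, an earlier decade
recent = [1.5, 5.5, 2.0, 5.8, 1.2, 5.0, 1.8, 5.6, 1.4, 4.9]  # inches, a recent decade

for label, data in (("early", early), ("recent", recent)):
    print(f"{label:>6}: mean = {mean(data):.2f} in, variance = {variance(data):.2f}")

# Here the means are identical (3.47 in) but the recent variance is far larger --
# the kind of change a trend in the average alone would miss.
```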
June 20, 2003
Temperature Changes - Part 2
The last post in this series focused on the roughly 100,000 year time scale of the glacial cycle. In this post I am going to zoom down 2 orders of magnitude to focus on the last roughly 1000 years.
The figure above shows data that reconstruct the temperature history of Earth over the last 1000 years. As with the Vostok record, the temperatures are represented as differences from some reference period. The data in this figure come primarily from tree ring records. In the case of tree rings, growth rates can be correlated with temperature and the rings provide a time marker. Some of the more recent portions of this record include instrumental measurements and other records that were kept in monasteries and similar long-lived institutions.
The record above is sometimes referred to as the hockey stick record. This refers to the fact that the trend of the temperature changes abruptly from cooling to warming sometime around 1900. The cooling indicated by the blue line shows a cooling rate that is comparable to the cooling rates that are typical of the beginning of previous glacial periods. The warming indicated by the red line shows a rate that is about 3 times faster than the warming rates that commonly ended a glacial period.
As in the glacial record, the most recent portion of this record is anomalous. The cooling of the first part of the last millennium is consistent with the beginning of another glacial period, and indeed in the 1970s many scientists thought that we should be entering a new glacial period. That cooling ends very abruptly with a rate that is very fast compared to those at the ends of the previous glacial episodes. (I will return to this rate in a later post in this series.)
A close look at the figure above suggests that this rapid rate may have slowed recently. The following figure shows that while there is some variation, the temperature continues to warm.
The trend lines indicate the variation that can come with the choice of the window over which the average is calculated. The fastest rate starts after the period of roughly constant temperature in the late 19th century and ends at the end of the record in 2000. The slowest rate is that calculated over the entire record. That rate is consistent with the rate shown in the first figure.
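The window sensitivity that those trend lines illustrate is easy to reproduce. The sketch below fits least-squares trends to a synthetic series that only mimics the shape of the record (flat through the late 19th century, warming afterwards); it is not the actual reconstruction.

```python
# How the fitted warming rate depends on the choice of window.
import numpy as np

years = np.arange(1860, 2001)
rng = np.random.default_rng(0)
# Roughly constant until 1900, then steady warming, plus small noise (synthetic data).
temps = np.where(years < 1900, 0.0, (years - 1900) * 0.007) + rng.normal(0, 0.05, years.size)

def trend_per_century(yrs: np.ndarray, vals: np.ndarray) -> float:
    """Least-squares slope expressed in degrees per century."""
    return np.polyfit(yrs, vals, 1)[0] * 100

mask = years >= 1900
print(f"Whole record : {trend_per_century(years, temps):.2f} deg/century")
print(f"1900 onwards : {trend_per_century(years[mask], temps[mask]):.2f} deg/century")
# The shorter, later window yields a noticeably faster rate than the full record.
```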
The point here is that whatever the cause of the recent warming, its rate is very high compared to those warmings that marked the end of the last 4 glacial periods. It also reflects an abrupt change from the cooling of the previous 900 years. We may quibble whether this variation is natural or human induced, but whatever the cause, it is clearly anomalous with respect to glacial cycles, with respect to the last 10,000 years and with respect to the last 900 years.
June 19, 2003
Bush Climate Science
Andrew Revkin and Katherine Seelye had a disturbing piece in the Times today. In that article they report that the editing of an upcoming EPA report on the state of our environment has been heavily influenced by the White House. A major section on the likely impacts of climate change has been essentially removed. Ironically, among the bits that have been chopped out are references to a National Research Council report that the Bush Administration itself commissioned.
Rather than a "summary statement about the potential impact of changes on human health and the environment", the report's section on Global Issues begins with a statement about how complex and tricky the issues are. While the statement is true, it distracts from the issue that we have no choice but to deal as best we can with that complexity. The failure to address the global environment and in particular the climate is explained by Bush appointees as avoiding a rush to judgment. When this is challenged they respond by saying essentially "please be patient, our comprehensive plan for addressing global climate change will be ready soon."
A little more than a year ago, Bush presented his new way of thinking about greenhouse gas emissions by introducing the concept of greenhouse gas intensity. While clever, this rhetoric involves a sleight of hand that many authors have exposed; my own contributions are a short white paper and a video.
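The arithmetic behind that sleight of hand is simple enough to show directly. In the sketch below the growth and intensity-reduction rates are invented, not the administration's figures; the point is only that intensity (emissions per unit of GDP) can fall every year while absolute emissions keep rising, so long as the economy grows faster than the intensity declines.

```python
# Greenhouse gas intensity vs. absolute emissions -- illustrative rates only.
gdp = 100.0              # arbitrary index
intensity = 1.0          # emissions per unit of GDP, arbitrary units
gdp_growth = 0.03        # assumed 3% annual GDP growth
intensity_decline = 0.02 # assumed 2% annual "improvement" in intensity

for year in range(1, 11):
    gdp *= 1 + gdp_growth
    intensity *= 1 - intensity_decline
    print(f"year {year:2d}: intensity = {intensity:.3f}, emissions = {gdp * intensity:.1f}")

# Intensity falls about 18% over the decade, yet absolute emissions end up about 10% higher.
```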
Aside
Yesterday I lectured my environmental policy class on Kingdon's notions of governmental and decision agendas. We talked about whether the environment is on the Bush agenda and I argued that it is rhetorically present, but not really occupying anyone's time. Now I am not so sure. It seems to me that it is occupying time, but in the negative sense; the Bush administration seems to be actively avoiding taking action.
End
There are a host of simple observations that even many of the skeptics can agree on that indicate that our planet's physical and biological systems are not functioning as they once did (e.g. my post of a few days ago on the Vostok core, or any number of observations about the distribution of mountain glaciers or plant life). Yet in a frightening petard hoisting, inevitable scientific uncertainty is being used to avoid addressing changes in the Earth system in a responsible way.
We have become distracted in our quest to determine whether, or how much, a change in a system is due to human activity as opposed to natural variability. In many ways it does not matter. We are vulnerable to changes in the climate independent of any assessment of blame. While it is true that changes in our fossil fuel consuming habits will take many decades to manifest themselves as mitigative forces in the climate system, there are other kinds of activities we could be undertaking that will have shorter term benefits.
What really frightens me about political staff becoming involved in the editing of scientific documents is not the cynicism but the hubris. I hated Greek tragedy when I was forced to read it, but the pattern was always there.
Rather than a "summary statement about the potential impact of changes on human health and the environment", the report's section on Global Issues begins with a statement about how complex and tricky the issues are. While the statement is true, it distracts from the issue that we have no choice but to deal as best we can with that complexity. The failure to address the global environment and in particular the climate is explained by Bush appointees as avoiding a rush to judgment. When this is challenged they respond by saying essentially "please be patient, our comprehensive plan for addressing global climate change will be ready soon."
A little more than a year ago, Bush presented his new way of thinking about greenhouse gas emissions by introducing the concept of greenhouse gas intensity. While clever many authors have shown the slight of hand of this rhetoric, my own contributions are a short white paper and a video.
Aside
Yesterday I lectured my environmental policy class on Kingdon's notions of governmental and decision agendas. We talked about whether the environment is on the Bush agenda and I argued that it is rhetorically present, but not really occupying any one's time. Now I am not so sure. It seems to me that it is occupying time but in the negative sense; the Bush administration seems to be actively avoiding taking action.
End
There are a host of simple observations that even many of the skeptics can agree on that indicate that our planet's physical and biological systems are not functioning as that once did (e.g. my post of a few days ago on the Vostok core, or any host of observations about the distribution of mountain glaciers or plant life). Yet in a frightening petard hoisting, inevitable scientific uncertainty is being used to avoid addressing changes in the Earth system a responsible way.
We have become distracted in our quest to determine whether or not or how much of a change in a system is due to human activity as opposed to natural variability. In many ways it does not matter. We are vulnerable to changes in the climate independent of any assessment of blame. While it is true that changes in our fossil fuel consuming habits will take many decades to manifest themselves as mitigative forces in the climate system, there are other kinds of activities we could be undertaking that will have shorter term benefits.
What really frighten's me about political staff becoming involved in the editing of scientific documents is not the cynicism but the hubris. I hated Greek tragedy when I was forced to read it, but the pattern was always there.
June 17, 2003
Carbon Trading
Most of the people I talk to think that we will trade carbon. It is just a matter of when. I think that I agree with them.
I also get general agreement when I say that I think that it is likely that leadership in the development of carbon trading will come from the private sector. This is actually an interesting development. Most of the thinking about carbon trading is based on an analogy with the trading of sulfur permits in the US. Sulfur trading is part of the framework that was put in place to address issues associated with acid rain. In that framework the US government sets a total amount of sulfur that can be emitted into the atmosphere and sells rights to emit that much material. The price of the right to emit a unit of sulfur is then set by market mechanisms.
There are two ways for a company (say an electric utility) to manage the sulfur that it produces as atmospheric waste. First, it can modify its practices so that there is less atmospheric sulfur waste. Modification might include switching to lower sulfur coal or developing new combustion technologies that trap sulfur before it goes up the stack. Second, it can buy permits to allow the needed sulfur emissions. In a market system, companies will choose whichever option is cheaper. Because the total amount of sulfur that can be emitted is limited and declining, there is an incentive to develop new processes in order to avoid having to buy increasingly scarce permits.
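The choice the utility faces can be written as a one-line comparison. The costs below are hypothetical; a real decision would also weigh future permit scarcity, capital lifetimes, and fuel contracts.

```python
# Abate or buy? A firm covers each ton of excess sulfur whichever way is cheaper.
# All costs are hypothetical.

def cheapest_option(abatement_cost_per_ton: float, permit_price_per_ton: float) -> str:
    """Return the cheaper way to cover one ton of emissions."""
    return "abate" if abatement_cost_per_ton < permit_price_per_ton else "buy permits"

excess_tons = 1_000
abatement_cost = 180.0  # $/ton to scrub or switch fuels (assumed)
permit_price = 210.0    # $/ton market price of a permit (assumed)

choice = cheapest_option(abatement_cost, permit_price)
total = excess_tons * min(abatement_cost, permit_price)
print(f"Cheapest option: {choice}; total cost ${total:,.0f}")
# As the cap tightens, permit_price rises and abatement wins more often --
# which is exactly the incentive the framework is meant to create.
```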
Sulfur trading has been fairly successful. The total amount emitted has steadily decreased and, surprisingly, so has the price of permits. Prices have dropped because the process changes that were encouraged by the structure of the framework have been very successful.
Aside
The story is a little more complicated because at the same time as sulfur permits were being implemented, railroads were being deregulated. Railroad deregulation had the effect of lowering the price of lower sulfur coal from the western states.
End
The sulfur system is called a "Cap and Trade" framework. The total amount of sulfur emissions is "capped" and companies then trade to account for differences in their needs and capacities. The price is initially anchored by the federal government's imposition of the cap. At the implementation point of the system an initial set of property rights is established. Setting the initial level and the initial distribution of property rights is a tricky political problem, but one that was solved in the case of sulfur.
Carbon is different from sulfur. In the case of sulfur and acid rain, the problem could be usefully addressed within the confines of a single nation-state; although there are interesting conflicts with Canada. The time and space scales of the sulfur problem are such that costs and benefits could be assessed and results could be seen in manageable time frames. None of these is true for carbon. Carbon mixes fairly quickly on a global scale and has a time scale in the atmosphere of centuries; thus no single nation can unilaterally address the problem and the benefits may take decades to accrue. Furthermore the details of the carbon cycle are much more complex than those of acid rain.
In some ways the Kyoto Protocol can be thought of as a cap and trade system. It attempts to put limits on the total amount of carbon that can be emitted into the atmosphere and it attempts to allocate initial property rights to those emissions. In this way the Kyoto Protocol would establish a framework in which the value of a ton of carbon could be established.
In a classical sense, some form of capping is necessary to establish a market for carbon, but it doesn't look like people are waiting for that to happen. The price of a ton of carbon is currently somewhere between $3 and $30. Some people I have talked to say the range is much smaller than that. It appears that a growing number of forward-thinking companies expect that carbon will be managed in some way in the future. Those companies believe that first movers with respect to the capacity to trade and manage their carbon will have a competitive advantage when that day comes. There are enough such companies that a group associated with the Chicago Board of Trade has established the Chicago Climate Exchange to handle the expected market in carbon trading. There are also large companies such as BP and DuPont that trade carbon internally.
Thus the capacity to trade is being developed in the absence of a regulatory framework. That capacity reflects the expectation by large, globally distributed, firms and groups of firms that carbon emissions are a liability that needs to be hedged. I expect this to continue to develop and that those who are hedging now will come out ahead of the game.
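One crude way to see why firms are hedging now is to put an expected value on a possible future carbon liability. Everything in this sketch is assumed: the probability that a binding regime arrives, the price range, and the firm's emissions.

```python
# Expected future carbon liability under price uncertainty -- toy numbers only.
annual_emissions = 1_000_000  # tons CO2e, a hypothetical large firm
prob_regulated = 0.5          # assumed chance a binding carbon regime arrives
price_range = (3.0, 30.0)     # $/ton, the range discussed above

for price in price_range:
    expected_liability = prob_regulated * price * annual_emissions
    print(f"At ${price:.0f}/ton: expected annual liability ${expected_liability:,.0f}")

# Even at the low end the expected exposure runs to millions of dollars a year,
# which makes building trading capacity early look like cheap insurance.
```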
June 16, 2003
El Nino and Love Canal
One spring day in 1977, Karen Schroeder saw from her window that the liner in the fiberglass swimming pool in her backyard had risen two feet above the ground as a consequence of the year's heavy precipitation. When the pool was removed that summer, the hole it left filled with water laden chemicals...
from A Hazardous Inquiry: The Rashomon Effect at Love Canal, Allan Mazur, 1998
This was part of the beginning of the events that led eventually to the evacuations of many residents around the infamous Love Canal chemical dump near Niagara Falls, New York.
Between 1941 and 1954, the Hooker Electrochemical Company dumped roughly 25,000 tons of chemical waste in an abandoned canal. These wastes were both solid and liquid; some, but not all, were contained in 55 gallon drums. Many of the waste products were benzene variants. There were also about 120 lbs of dioxin.
Hooker knew that the materials they were dumping were toxic, but there was little understanding of the effects on humans. In reviewing the activities of Hooker in the legal aftermath of the evacuations, it became clear that the chemical company had, at the very least, conducted its waste dumping activities at the level of best practice for that period.
The canal that Hooker used was dug in the 1890s to tap into the power of the flowing water in the Niagara River. It was about half a mile long, 8 to 16 feet deep, and 60 to 80 feet wide. The lower parts of the canal were in clay, which is fairly impermeable; above the clay, the material was fairly permeable. Hooker buried its wastes as it went, but the company-specified 4 feet of cover was not always maintained.
At the time Hooker began dumping, the canal was right at the edge of the westward expansion of development around Buffalo, NY. Hooker was concerned about its liability. When the property was transferred to the Board of Education, the deed explicitly acknowledged that chemicals were buried at the site, and it seemed to be understood that excavation at the site would increase the likelihood of mobilizing the material that was buried there. In the 1960s and 1970s the area around the dump developed into working- and lower-class neighborhoods. Part of this development was the construction of a school and playgrounds on the dump site, the building of houses that abutted the site, and the installation of sewers and other infrastructure; that work did, in fact, involve excavations that compromised the integrity of the dump.
So where do floating swimming pools and El Nino come in?
1976 and 1977 were El Nino years and it rained a lot in upstate New York. That rain filled up the canal and floated Karen Schroeder's pool. As the canal filled up, it brought a lot of the material that had been buried there with it. Now that material had been there all along, and there had been El Ninos since the building of the dump, but the extent of the rains was record setting and the neighborhoods had grown to the point that Karen Schroeder had a pool alongside the canal.
The disaster at Love Canal was going to happen; the heavy rains may have hastened the inevitable. The industrial and health standards of the 40s and 50s reflected the "infinite sinks" notion of waste management and the infancy of the field of toxicology. The planning and early management of the site did not foresee the continued suburban development that would eventually surround the site. The waste management and toxicological naivete can be understood; the failure to project the suburban development, I think, is more difficult to explain.
The question in my mind is "what are the modern day analogs to the best practices that Hooker was using?"
June 13, 2003
Small, cheap, motorized scooters
In my neighborhood there is a bit of a summer fad going on that involves young men (mostly) zooming about on inexpensive (~$400) motorized scooters. I would trace the evolution of these scooters back to the foot-powered ones that kids zoom about on (the progression involved putting motors on the scooter with the rider standing, followed by the addition of bigger wheels, seats, and fenders).
Beyond curiosity and a bit of middle-age grumbling, an interesting set of thoughts was triggered when I wondered where one would get parts and repair for these things. The answer I came up with is that you probably don't. These machines are cheap enough that if they last the better part of the summer in which they were purchased, they become essentially disposable, and rapid dissemination can come in the absence of any infrastructure other than distribution of the machines themselves (e.g. UPS).
The key to this set of ideas seems to be a confluence of inexpensive design and increased reliability. At the point where disposability crosses price in some utility space, the need for a supporting infrastructure beyond distribution and disposal goes away.
The Properties of a Good Constitution
Following The Economist:
A good constitution is short, simple, and clear. It avoids attention to detail, but lays out a set of processes for making decisions and for giving attention to detail. The processes it lays out can be changed, but not easily or without significant deliberation. A good constitution sets out the goals of the organization it guides and the means for achieving those goals.
A constitution should avoid detail because it is likely to get much of it wrong. Constitutions need to have long time scales in order to engender confidence that the governing framework is stable. Thus the details need to be handled at a "lower" level. It is a mistake to allow decision-making scales to become strongly linked. That is, details must be able to be adjusted as learning occurs without changing the constitution.
June 12, 2003
Bounded Rationality
A classical conceptualization of decision making is the following:
- A set of goals is identified
- All possible paths to the goals are articulated
- The complete portfolio of costs & benefits for all paths is calculated
- The path or paths that maximize (optimize) benefit are chosen
This is a rational frame, but it relies on / assumes complete information and large computational capacity. In all but the simplest cases, the completeness of this scheme makes it unimplementable.
In place of this optimizing scheme, James March and Herbert Simon suggest that a better model is the following:
- Rather than optimal goals, satisfactory conditions are set.
- Familiar modes of action are tried first, without global cost / benefit analysis
- To the extent that costs and benefits are considered, they are evaluated relative to local conditions using local knowledge
- When a satisfactory state is achieved, action stops; there is no attempt to maximize utility, only to attain some acceptable level.
In this frame actors do not have complete information, and different actors have different (and not necessarily consistent) information. Achieving a satisfactory state can be accomplished in two ways. First, the activities and decisions that are undertaken can advance the system to a new and satisfactory state. Second, if progress toward a satisfactory condition is too difficult or slow, the standards themselves may be adjusted to accept achievable states as satisfactory.
Innovation can be added to this program by constraining when satisficing conditions can be changed and by including exploratory behaviors in the portfolio of "familiar actions."
Aside
This sounds a lot like genetic algorithms - stand by while I explore that further.
End
Complex behaviors can be achieved without global knowledge at any single point if satisficing levels, action programs, and problems are parsed (broken into chunks) in the right way. Of course there is no prescription for how to find "the right way," and in a satisficing world there is likely to be more than one. The details of the "complex behavior" will of course depend on which "right way" is found and operationalized.
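A small sketch makes the contrast concrete. The candidate actions and payoffs below are invented; the point is only the difference in stopping rules: the optimizer scores every option, while the satisficer tries familiar actions in order, stops at the first acceptable one, and failing that relaxes its standard.

```python
# Optimizing vs. satisficing over a small set of candidate actions (invented payoffs).
options = {"familiar_a": 6, "familiar_b": 8, "novel_c": 9, "novel_d": 10}

# Classical frame: enumerate everything and pick the maximum payoff.
optimal = max(options, key=options.get)

def satisfice(ordered_actions, payoffs, threshold, relax_by=1):
    """Try familiar actions first; stop at the first acceptable payoff.
    If nothing clears the bar, lower the bar and try again."""
    while True:
        for action in ordered_actions:
            if payoffs[action] >= threshold:
                return action, threshold
        threshold -= relax_by  # adjust the standard rather than widening the search

chosen, final_bar = satisfice(
    ["familiar_a", "familiar_b", "novel_c", "novel_d"], options, threshold=8)
print(f"Optimizer picks {optimal}; satisficer picks {chosen} (bar ended at {final_bar})")
# -> Optimizer picks novel_d; satisficer picks familiar_b (bar ended at 8)
```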
June 11, 2003
Temperature Changes - part 1
This is the first installment of a series of posts where I plan to discuss the current state of the climate system. In these posts I will use publicly available data and make simple observations about the temperature history of our planet. In this installment I am going to look at the glacial cycles of the past 400,000 (400K) years.
The figure above shows a record of the temperature of the atmosphere in Antarctica over the last 400K years. These temperatures are calculated from measurements of the concentrations of certain gasses that become trapped in small bubbles in the ice as it is compacted. The measure of temperature is not absolute; it is a difference from some reference period. In this case the reference is the average temperature over a number of recent decades. Note that time is indicated in years before present.
In the figure there are 4 glacial cycles labeled G1 through G4. Each of those cycles is about 100K years long. G1, the most recent glacial cycle, ended about 10,000 years ago. Each of the four complete cycles starts with a fairly rapid decrease in temperature to temperatures that are 2 - 8 degrees Celsius colder than today. The cold periods are glacial periods. One thing that is striking is that while the temperature remains "cold" during a glacial period, there are fairly large variations. Each of the glacial cycles ends very abruptly with a large increase in temperature from fairly cold to levels that are perhaps a couple of degrees warmer than today.
In each of the cycles there is a blue line and a red line. The blue line represents the average rate of temperature decrease for the 10K years following the peak temperature. Each blue line is labeled with a negative number that gives the rate of temperature decrease in degrees per year. The red line in each cycle is the average rate of temperature increase for the abrupt ending of the cycle. The red lines are labeled with positive numbers that give the rate of temperature increase in degrees per year.
Aside
Remember that the "e" indicates scientific notation thus 1e-3 is 0.003.
End
Note that each of the cycles G1-G4 ends with temperature increase rates that are very similar - roughly 1.1e-3 degrees per year. While not quite as similar, the rates of temperature decrease that signal the beginning of each of the cycles are about the same; G1, G3, and G4 begin with rates of temperature decrease of about 3.5e-4 degrees per year. G2 begins with temperatures falling at about twice that rate. The changes in temperature that mark the end of a cycle are roughly three times as rapid as those that start the period. So cycles G1-G4 have very similar temperature histories. They start with rapid cooling (at roughly the same rate), they show oscillations during the glacial period, which lasts about 100K years, and they end with very rapid increases in temperature.
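For a sense of what those rates mean over time, the arithmetic is worth doing once; this sketch uses only the rates quoted above from the figure (the 6 degree recovery is simply a mid-range glacial-to-interglacial swing).

```python
# Back-of-envelope arithmetic using the rates quoted from the Vostok figure.
cooling_rate = 3.5e-4  # deg C per year at the onset of G1, G3, and G4
warming_rate = 1.1e-3  # deg C per year at the abrupt end of each cycle

print(f"Cooling over the first 10,000 years: {cooling_rate * 10_000:.1f} deg C")
print(f"Terminations are ~{warming_rate / cooling_rate:.1f}x faster than the onset cooling")
print(f"A 6 deg C recovery at the termination rate takes ~{6 / warming_rate:,.0f} years")
```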
The short blue line at the very right of the figure represents the average temperature change over the time since the end of the last full glacial cycle. That average shows that temperature has been decreasing, but at a very slow rate compared to the previous cycles (about 10 times slower). Agriculture was invented at the very beginning of this period of relatively stable, and warm, temperatures; and all of the subsequent major developments in human history have occurred in times of historically stable temperatures.
The main point in this figure is that compared to the last 400K years, the recent 10K years are anomalous. If the pattern of the cycles G1 - G4 had repeated again, we would now have temperatures much colder than those we have now.
Aside
Exercise for the reader: What have I glossed over in the preceding?
End
June 09, 2003
Monkey Pox
A number of Midwestern states have reported cases of monkey pox. Apparently these are the first cases of this disease to be reported in the western hemisphere. Fortunately, while monkey pox symptoms are similar to smallpox, it is not as virulent and mortality in generally healthy populations is likely to be low.
I have written some on SARS and how it appears that that virus may have jumped from domestic animals to humans. The same appears to be true of monkey pox, but rather than pigs and chickens, the animals are Gambian rats and prairie dogs. The path appears to be from Gambian rat to prairie dog to human. Monkey pox makes the prairie dogs sick and can be fatal to those animals. As reported by the AP, it may be that most of the cases in the Midwest can be traced back to a single exotic pet distributor in Chicago. That shop has been quarantined and many of its prairie dogs have been killed.
This brings to mind a number of thoughts, the first of which is "What in the world are people thinking with respect to pets!?" Now I may be a bit curmudgeonly on this front, but I don't think that I am out of line thinking that it doesn't make a lot of sense to bring large African and Texan rodents together in close quarters and then to add a large primate to the mix.
Aside
Joel Cohen and his colleagues figured out that huge decreases in Chagas disease could be achieved by having the livestock live outside and the humans live inside. While this seems obvious in the case of the rural Andes, it does not seem to be applied to suburban Chicago.
End
Two other things that come to mind are: 1) the seeming prevalence of disease that jumps from animals to humans; and 2) the global mixing of species. Quick surveys of the web suggest that monkey pox, heretofore, has been confined to central Africa. Now it has jumped directly to the metropolitan Midwest. There seems to be no question that the disease was transported by exotic pet traders from Africa to the US. I don't know whether to be amazed that our livestock controls have worked so well for so long or to be horrified at the thought of the geographically artificial inter-species mixing that is going on.
It is likely that this little outbreak will be controlled (provided people don't start turning their sick prairie dogs out into the wild, where they can / will infect healthy indigenous populations). And it is in many ways a simple and relatively benign case of human foible and eccentricity. It does, nonetheless, illustrate the kind of thing that we need to manage as we go forward. In this case there are many regulations that control the flow of animals across borders and across ecological niches. That regulatory framework may need to be shored up, but it is one way to approach the problem.
But what happens when something slips through the regulatory net, as appears to have happened with monkey pox; or when the regulatory net is non-existent or inadequate as would be the case with SARS in Guang Dong? When that happens we need another set of mechanisms that are highly dynamic and that can react to a rapidly changing state of affairs. With disease it looks like the strategy is to isolate the disease and then eliminate it; this is the strategy behind quarantine. If a disease gets established in wild populations (as may be the case with SARS and is the fear regarding sick prairie dogs going into the wild) then isolation and eradication will not work and the strategy has to be one of controlling and limiting outbreaks. If we are lucky vaccines can be developed and human morbidity minimized.
Earth systems management is going to have to be able to deal with existing and emerging infectious diseases and the fact that these things will be moved around the globe in strange ways as a function of human weirdness.
June 06, 2003
Real Cities
I was in Stockholm for a few days this week and one of the things I thought about was the following: "Why does the US have so few real cities?"
The central city in Stockholm has only about 600,000 people, so it is relatively small. I stayed in Gamla Stan, the "old town," and the hotel I was in traces its roots back to the 17th century, so the city is old. Gamla Stan is also a small island; the city is actually a network of islands.
But what is it that makes a real city? First and foremost, there are a lot of people walking around; a functioning mass transit system is probably a corollary to this characteristic. In addition to pedestrians and transit, Stockholm has a well-developed bicycle route network and culture.
Aside
The Arlanda Express, the train to the airport, puts anything we have in NYC to shame. It leaves from the terminals and runs at high speed to the city center in 20 minutes. The contrast with the Newark monorail and NJ Transit is stark: two badly marked trains, a long wait in a post-industrial landscape, and a slow, dingy ride to Penn Station (which I admit does qualify as the city center).
End
People live in real cities. Food is easy to find. There is retail at the street level and there are apartments above. There are museums and other cultural institutions. There is night life; that is, not everyone leaves at the end of the work day.
In the US, NYC clearly qualifies, and San Francisco and DC definitely do. Boston and probably Chicago do as well. LA, Raleigh, and Miami don't.
So what is my explanation? My guess is that the dominant explanatory variable is age. Real cities developed to a critical mass of people, housing stock, and commerce before regular, long-distance travel by people became commonplace; in particular, before cars.
Aside
I recognize that cars are a relatively recent development, but so too is explosive population growth.
End
A certain degree of geographic constraint probably helps. Manhattan is an island, as is Gamla Stan and much of the rest of Stockholm. London is a counterexample. "Transportation hub" and "seat of government" probably also help.
May 30, 2003
SARS - slight return
Aside
I am getting some great feedback on the modeling stuff, but I am feeling a bit frazzled tonight so I am going to shift gears briefly and let my thoughts on modeling simmer for a while.
End
SARS is back in the news. It seems like the disease has the potential for a bit of a second wind. The figure below summarizes my understanding of the disease. In individuals, there were reports of recurrence if the disease was not fully conquered in the first go-round. In communities, there seems to be the potential for new outbreaks as well. Toronto is seeing a pretty good resurgence. Part of the bump in Toronto seems to have to do with how SARS is defined.
What is the relationship between the progress of the disease in individuals and in populations? The evolution of the disease in individuals is the domain of medicine; the evolution of the disease in populations is the domain of public health. My guess is that the stubbornness of the virus in individuals is related to the potential for new outbreaks. I think this in part because of the importance of restricting exposure in controlling spread - in order to contain the disease, each victim must infect fewer than one other person on average.
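To make that last point concrete, here is a toy branching-process sketch in Python. The numbers are invented and this is nothing like a real epidemiological model; it just shows why chains of infection fizzle when each case produces, on average, fewer than one new case, and grow when it produces more.

```python
import random

def simulate_outbreak(r, generations=10, initial_cases=5, seed=1):
    """Toy branching process: each case infects, on average, r new cases.
    Illustrates why keeping the reproduction number below 1 contains an outbreak."""
    random.seed(seed)
    cases = initial_cases
    history = [cases]
    for _ in range(generations):
        new_cases = 0
        for _ in range(cases):
            # each case infects int(r) people, plus one more with the leftover probability
            new_cases += int(r) + (1 if random.random() < (r - int(r)) else 0)
        cases = new_cases
        history.append(cases)
    return history

print("r = 0.8:", simulate_outbreak(0.8))  # chains of infection tend to die out
print("r = 1.5:", simulate_outbreak(1.5))  # the outbreak keeps growing
```

Quarantine, in these terms, is just a way of pushing r below one.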
When I wrote about SARS previously, it looked like the mortality rate was reasonably low. It turns out that the mortality rate is higher than previously thought; it may be as high as 17-20%. As might be expected, mortality is higher in the youngest and oldest people. For people over 60, mortality is quite high. I am quite interested in the age structure of the mortality - is it typical for infectious diseases or is it unusual?
As I noted earlier, SARS is a coronavirus and it does seem to have moved between animals and humans.
May 29, 2003
Too Many Knobs?
As I have written about models, I have talked about comparing models to observations of the processes that they are meant to represent. I talked briefly about model parameters and how they could be tuned to make an individual model better represent the data at hand. Each parameter is like a knob that can be turned to adjust the details of the given model.
For example, let's consider the linear model illustrated below. In this model, factory output is assumed to be proportional to the input of labor. For a given increase in the hours worked, the factory produces a corresponding increase in output. The constant of proportionality (a, the slope of the line) and the amount of labor necessary simply to maintain the factory at zero output (the labor-axis intercept; in the equation itself, b is the output-axis intercept) may be different for different factories. A factory characterized by the red line will be more efficient and more productive than one characterized by the green line.
The dots in the figure are observations of the relationship between output and input from some factory. While the data are pretty linear, it is easy to imagine a line that would fit those data better than the ones I have drawn. The better fitting line would be described by adjusting the slope (a) downward and shifting the intercept to the right.
In the example above, there are more data points than there are parameters, and we may find that the best fitting line does not actually pass through any of the observations. The extent to which the data are close to the best-fit model is a measure of how well the model fits the data; it can also be a measure of the certainty with which the model can be used to predict behavior in the future. The case where there are more data points than there are parameters is called over-determined; over-determined is good because it gives you a measure of certainty regarding the fit of the model to the data.
Now imagine we had only two data points. In that case there is exactly one line that fits the data exactly. This is OK, but we could fit those data equally well with any two-parameter model, and we would have no measure of certainty. It is generally true that you can exactly fit N data points with a model that has N parameters.
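As a small numerical sketch of the over-determined case (the numbers below are invented), the following Python snippet fits the two-parameter line to six observations by least squares. The leftover residual is the measure of (mis)fit; the same machinery applied to only two points leaves no residual at all, and so tells you nothing about how much to trust the model.

```python
import numpy as np

# invented observations: labor hours in, factory output out
labor = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
output = np.array([4.8, 11.0, 14.9, 21.2, 24.8, 31.1])

# fit output ~ a * labor + b by least squares: six observations, two parameters
A = np.column_stack([labor, np.ones_like(labor)])
(a, b), residual_ss, rank, _ = np.linalg.lstsq(A, output, rcond=None)
print(f"best-fit slope a = {a:.3f}, intercept b = {b:.3f}")
print(f"sum of squared residuals = {residual_ss[0]:.3f}")  # how far the data sit from the line

# with only two data points the line passes through both exactly,
# and there is nothing left over to tell us how much to trust it
a2, b2 = np.linalg.solve(np.column_stack([labor[:2], np.ones(2)]), output[:2])
print(f"two-point fit: a = {a2:.3f}, b = {b2:.3f}  (zero residual by construction)")
```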
Good models have considerably fewer parameters than there are observations to constrain them. As models become more complex they acquire more tunable parameters. In the case of GCMs there are many, many tunable parameters, but there are also many, many observations to constrain the models.
In using models to help us think about how Earth functions, we must trade off the simplicity of a model that significantly abstracts Earth systems, modeling them with a small number of easily understood parameters, against the complexity of models that attempt to include more detail of Earth functioning but contain a larger number of parameters with more technical meanings and interconnections. Choices about this tradeoff will differ depending on our purpose. Climate modelers who use models to test their detailed understanding of Earth functioning will obviously choose to work with more complicated models. Decision makers, who have to include many factors beyond the climate in their work, may be better served by simpler models that capture the well understood behavior of the climate.
May 28, 2003
Predicting the Future
Someone once said, "Predicting is hard, especially the future." That said, much of the rhetoric around relating science to policy making is based on the hope / belief that scientists and their models will be able to predict the future and it is the job of the policy maker to move society out of harm's way or to alter the future in beneficial ways. There are at least two interesting things here: First, can scientists predict the future? and second, can policy makers alter the future?
Can scientists predict the future?
Consider climate: The US Global Change Research Program (USGCRP) has invested more than a billion dollars in the development of General Circulation Models (GCMs).
Aside
GCMs are a class of computer models that attempt to simulate the circulation patterns of the oceans and the atmosphere. These models vary in how they represent the underlying physical processes. These variations reflect choices on the part of the modeling groups and in turn affect the kinds and details of the output produced by the model. Intercomparison of models and efforts to understand why model results differ is a major area of research.
End
GCMs are becoming more detailed with respect to the processes that are included and with respect to the output that is produced. These models are often tested by hindcasting. In hindcasting, models are initialized with known conditions from some time in the past. The model output is then compared with observations from the subsequent climate history. The thinking is that if a model successfully "predicts" the past, it might be trusted to predict the future.
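A schematic of the hindcasting idea, with made-up numbers standing in for both the observed record and the model output: initialize from a known past state, run forward, and score the run against what actually happened.

```python
import numpy as np

# made-up observed temperature anomalies for ten past years
observed = np.array([0.12, 0.18, 0.15, 0.22, 0.25, 0.21, 0.30, 0.28, 0.33, 0.35])

# made-up hindcast: the model was started from known year-0 conditions
# and run forward over the same ten years
hindcast = np.array([0.10, 0.16, 0.17, 0.20, 0.27, 0.24, 0.28, 0.30, 0.31, 0.37])

# one simple skill score: root-mean-square error against the observations
rmse = np.sqrt(np.mean((hindcast - observed) ** 2))
print(f"hindcast RMSE = {rmse:.3f}")
# a small error builds some confidence in forward runs of the same model;
# it is evidence, not a guarantee
```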
I said a while ago that weather prediction is not likely to go further out than it currently does (although I recently read an article suggesting that understanding certain long wavelength waves in the atmosphere may extend certain kinds of weather prediction beyond the current 5-7 days). So assuming we do trust GCMs to predict the future, the future of what? Well, in the case of GCMs it would be climate, with its inherently time- and spatially-averaged characteristics and related uncertainties. In general, the finer the spatial or temporal resolution we ask of a model, the greater the uncertainty associated with the output.
Aside
Much of this line of thought is driven by conversations with Dan Sarewitz. In particular, beneath this description of models are some very fundamental questions having to do with the relationship between models and the systems that they represent.
End
Can policy makers alter the future?
(Yes)
Implicit in our faith in model-as-oracle is one of the following two conditions: either 1) the model adequately represents all of the important processes; or 2) assumptions about external conditions remain true throughout the prediction period.
I would argue that condition 1) is unlikely to ever be true with respect to climate models. This is because human behavior is a fundamental element of the climate system, and human behavior cannot be modeled on the scales of GCMs. Current climate models include human behavior as an input primarily in the form of scenarios of expected GHG emissions and other forcing behavior.
So policy makers can alter the future by putting in place structures that alter the human forcing of climate. This would violate condition 2) in our current modeling infrastructure.
One more detail to be cleaned up
In my initial formulation I not only had policy makers altering the future, but also altering it for the better. This is a pretty serious caveat to have brushed over. To be true it too requires two things (all 3 "tos" in one sentence!): 1) we can successfully predict the outcomes of our policies; and 2) we can agree on what "beneficial" means. (Now you can see why I brushed over them.)
Aside
At any given time there is a closed (?) set of possible futures. Some of these futures are more likely than others. More likely futures are "closer" to our current trajectory than less likely futures. As our society evolves, the probability distribution of possible futures changes; some become impossible (avoided disasters / missed opportunities), others become more likely. I think that one of our guiding ideas should be to keep the space of possible futures as large as possible and the probability map skewed in ways that reflect, as best as possible, our collective vision of a better world.
End
May 24, 2003
Politics and Policy
Christie Todd Whitman resigned last week from her position as Administrator of the Environmental Protection Agency. I was surprised she didn't resign the first time that the rug was pulled out from under her in the first weeks of her tenure. She was in the middle of a lot of scraps, but I always felt that she worked to advance her agency's missions by constructing the best policies possible based on what we know about interactions between humans and the environment. This often means moving away from a problem rather than solving it outright. She did some things I didn't agree with, but given the current administration's complete lack of understanding of the value of a healthy environment, I think she did as good a job as could possibly have been done. I admire the fact that she didn't throw in the towel on many of the occasions when she was blindsided.
This is in the context of a distinction between means and ends. Whitman kept the common good that her agency was charged with in mind as she worked the politics of advancing her agency's mission. I think this sets her apart from much of the maneuvering that goes on in our government today. It is not clear that politics has not become the end rather than the means.
Begin Aside
A lot of the following comes from conversations I have been having with David Gilbert-Keith.
End Aside
I read a more recent article by Lindblom this week. He continues to think that incrementalism is basically a good idea, but he is a bit more concerned about how common good is protected in the policy process. He sketches a scenario of tension among competing groups as a means to seek common ground and as a platform for developing policy. The problem comes when one group gains an overwhelming majority (the tyranny of the majority). I read an article in The New Yorker that follows this theme in the context of a discussion of Karl Rove; it seems that some of The Federalist Papers were also concerned with balancing the influence of "interests."
My question / concern is "How do we design policy processes that avoid tyrannies of interests?" As a member of an elite, it is not hard for me to entertain the value of a technocracy. As a liberal intellectual, I wonder how far my ideas of societal good should be pressed in a highly heterogeneous society. We can no longer solve problems of difference by moving the frontier a little further west. We have reached the edges and are now filling in.
If we are going to be successful at managing Earth systems, then we will have to find ways to make trade-offs among competing interests. In building the necessary processes we will need to be careful that the ends remain a healthy planet and that they don't contract to focus on strengthening the political power of "interests."
May 22, 2003
Initial Conditions
Well, the little go-round I just had with HTML and my browser is as good a place to start as any. Bear with me as I go into a bit of detail about how my browser works (I use Internet Explorer; you might use Netscape or any one of a number of other options, but I think they all do the same basic thing). The browser does a number of things to make pages load faster. One of them is that the first time it downloads a picture, it makes a temporary copy of that picture and stores it deep in your hard drive. Every time you look at a picture on a web page, your browser looks to see if it already has a copy of that picture. If it does, then it uses the copy on your hard drive rather than downloading it again - this makes pages load faster. It keeps track of pictures by their names.
Now my problem is that I often diddle with pictures as I go along but leave their names the same. Many times I have beaten my head against the wall trying to figure out why the changes I have made in the picture don't show up on my web pages. The reason is that I change the original and I change the web page version, but I forget to tell my browser to update its copy so it continues to use the copy of the initial version that it has stored on my hard drive.
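A stripped-down sketch of that behavior in Python (real browsers also check dates and headers; this toy deliberately keys its cache on nothing but the file name, which is exactly the trap I keep falling into):

```python
# A toy cache keyed only by file name. Once a name is cached, later
# requests return the stale copy even if the file on disk has changed.
cache = {}

def fetch_image(name, read_from_disk):
    if name in cache:
        return cache[name]          # stale copy wins
    data = read_from_disk(name)
    cache[name] = data
    return data

disk = {"diagram.gif": "version 1"}
print(fetch_image("diagram.gif", disk.get))   # version 1 (and now cached)

disk["diagram.gif"] = "version 2"             # I diddle with the picture, same name
print(fetch_image("diagram.gif", disk.get))   # still version 1 -- head, meet wall

cache.clear()                                  # the equivalent of a forced refresh
print(fetch_image("diagram.gif", disk.get))   # version 2 at last
```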
The point here is that the browser initializes itself and then proceeds happily, while I forget that it has this initial condition and beat my head against the wall.
Initial conditions are just that - they are the place that a system or a process starts from. They are the parameters of a model at the point that the model begins to run. In a baseball game, the initial conditions include the number and skills of the players available to play each position, the starting pitcher, the batting order, the umpiring staff, the size of the crowd, the weather and the score. As the game progresses each of these parameters can change and the details of how the game develops will be influenced by those changes. Changes in some parameters are more influential than others (pitcher vs crowd size).
Initial conditions have to do with time and answer the question "What is the state of the system of interest at the beginning of the time period of interest?" In the baseball example above, the period of interest was implicitly a single game. If I were interested in the history of a team or league, the parameters I choose to follow and specify as a starting point would likely be different. They would certainly include the management, the home town, and the ball park.
Begin Aside
Exercise for the reader: In the context of a single game, is the ball park an initial condition?
End Aside
Thus one person's initial conditions are another's intermediate state. In the context of weather, today's weather is an intermediate value in yesterday's 5-day forecast, but is the initial condition for today's 5-day forecast. Similarly, the political state of an institution or nation provides initial conditions for the development of policy solutions.
Consider the activities of the EPA. Christie Todd Whitman's initial conditions included US participation in the Kyoto Protocol, and she advanced from that condition. Unfortunately, the rate of change of that parameter was very high and negative. Her initial conditions also included much stronger political pressure to roll back environmental regulations of the kind she had championed as Governor of New Jersey. Todd Whitman's resignation changes the state of the EPA and will be part of the initial conditions that her successor inherits.
Policy Process - slight return
The figure from yesterday is kind of a joke. The joke has to do with the reduction of policy to a single instruction - "weigh all factors". Compare the following figures from Morgan and Henrion (1990):
Less Real
More Real
Morgan, M.G., M. Henrion, and M. Small, Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis, 332 pp., Cambridge University Press, New York, 1990.
May 21, 2003
Muddling Through
In 1959 Charles Lindblom published The science of "muddling through". It was destined to become a classic and muddling through is now a term of art in much of the public policy world. Lindblom's paper starts out with a sketch of what a rational process of policy making would look like. Part of the first step is to "list all related values in order of importance." This step is followed by a comprehensive analysis of all possible policy outcomes. With this thorough analysis in hand, the policy maker makes a choice that maximizes her values. Something like the following diagram (note especially the policy box toward the bottom of the diagram).
This process of policy making requires assembling and integrating tremendous volumes of data and knowledge. Beyond the simplest, and not very interesting, policy problems, Lindblom argues that this commonly articulated approach to policy making is not even possible. Herbert Simon's bounded rationality and his work with James March on the workings of organizations point out that what in fact happens is that people use a limited amount of the information they have available at any given time to actually make decisions.
In Muddling Through, Lindblom contrasts the fully rational approach with one in which the policy maker chooses one objective that is of primary importance and then, making choices from a small portfolio of policy approaches with which she has experience, designs a next step in the evolution of the policy history of her agency. Lindblom argues that in making these choices the policy maker is choosing among differences at the margin. In this incremental approach to policy making, value tradeoffs and policy tradeoffs are intertwined.
Among the advantages of this approach is the fact that values do not have to be agreed upon among policy makers. "Agreement on policy thus becomes the only practicable test of the policy's correctness." The failure of a given analyst to consider all possible values is addressed by the fact that there is a portfolio of policy making agencies, each with its own primary values; interaction at the margins works to protect against undue impacts of one policy on values that are not within its immediate scope. Muddling through recognizes that policy problems will never be solved comprehensively, and thus policy solutions will advance toward better states by an ongoing process of iteration.
There is a notion of democratic principles in Lindblom's model. In particular by ensuring that values are advanced and prioritized through interactions among agencies with differing sets of priorities, he seems to assume that the portfolio of agencies in some ways reflects the societal parsing of problems and priorities.
The ideas of muddling through have been expanded upon by a school of environmental management called adaptive management. The primary difference is that in adaptive management, policy goals are made explicit and policies are treated as experiments. The experiment is successful if progress is made toward the goal; thus another explicit element of adaptive management is the inclusion of a measurement program in all policy regimes. It is this measurement program that allows statements to be made about whether a policy is successful or not.
Begin Aside
Of course it is a management axiom that "if you can't measure it, you can't manage it"
End Aside
So Lindblom sketches a policy landscape that moves incrementally and recursively from its current state to a new one according to a limited set of rules. Policy agents interact in loosely coupled ways that ensure that a wide range of values are represented and advanced in aggregate. In some ways this looks a lot like the cellular automata models I was constructing of earthquake processes.
May 19, 2003
Weather vs. Climate
The difference between weather and climate is summed up in the following:
Climate is what you expect, Weather is what you get
or
If you don't like the weather, wait a few days; if you don't like the climate, move
So let's deconstruct...
Expect vs Get
The statistical part of weather vs. climate is captured in this one. Simply put, climate is the average weather over some period of time. I spent a string of summers at the far end of the Alaska Peninsula on a set of islands sticking out into the Pacific Ocean. There was a map of Alaskan weather that turned up on post cards and T-shirts; the weather in the Shumagins was characterized as Always Shitty (as opposed to Mostly Shitty, Occasionally Shitty, etc. in other parts of that great state). You get the picture. In the Shumagins we would watch the Weather Channel precursor called "Alaska Weather"; it was a brilliant half-hour weather forecast for the region, and you could often see the low pressure systems lining up to the west along the Aleutians. The lows would bring the weather; the lining up is the climate.
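In code the distinction is almost embarrassingly simple (the temperatures below are invented): the individual daily numbers are the weather; their long-run average is the climate.

```python
import statistics

# invented daily July high temperatures for one place over several years -- the weather
julys = {
    2000: [82, 88, 91, 79, 85, 90, 87],
    2001: [76, 84, 89, 92, 81, 86, 88],
    2002: [85, 90, 94, 83, 87, 89, 91],
}

# what you got on a particular day vs. what you expect in July
print("weather on one July day in 2001:", julys[2001][2])
all_days = [t for year in julys.values() for t in year]
print("July climate (the average over all the days):", round(statistics.mean(all_days), 1))
```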
Wait vs. Move
This one captures another piece of the story. Different places have different climates; everywhere has weather. Weather is variable on the time scale of days or weeks. At best, climate is variable on time scales of years or decades. So if you don't like "Shitty" weather, it is best not to live in Alaska (if, on the other hand, a really good storm on a regular basis throughout the summer is your cup of tea, then the Shumagins are just the place (I spent my summers there because my adviser was an Englishman (Welsh, really) who liked the northern bits of the UK, and the Shumagins were a good substitute and had more active tectonics (or so we thought...))). If you like a really good downpour along about mid-afternoon every day throughout the summer and fall, followed by a really steamy evening, then South Florida is just the place for you. Sometimes you get hurricanes in South Florida, sometimes you get thunderstorms in Alaska, but the expectation is different.
Begin Aside
So then what is all the fuss about "climate variability" that we have been making in the National Assessment of Climate Variability and Change? Well, here is the rub: sometimes you don't get what you expect. We have warm winters, wet springs, dry summers, and cool falls. It all comes down to the baseline. One major factor in the variability of climate is El Nino. Another may be that human activity is changing some of the fundamental dynamics that we have come to take as normal (at the 100,000-year time frame, you get climate variability on the scale of glaciers vs. no glaciers). (If your mind is starting to bend, that is OK.) But, as I love to say, that is a future topic...
End Aside
May 18, 2003
This Weekend's Weather
Soccer this spring has been hammered by the weather. It seems that at certain times of year, spring being one of them, the weather can run in roughly 7-day cycles, and this year the one or two days of that cycle that were rainy and raw tended to fall on Saturday. This weekend looked to be another rain-out. The Weather Channel multi-day forecast called for rain on Friday tapering off into showers Saturday morning. This is a recipe for canceled games, because the fields don't drain, so even if things are off by half a day, the fields can still be rendered unusable.
But the rain never developed. In fact, as Friday got closer, the forecast changed from rain to showers, but continued to call for precipitation over the crucial field-soaking period. So Friday dawned, and sure enough the clouds came in, but noontime came and went without any rain, and then so did mid-afternoon. I went to bed with dry streets, peeked out the window at 3 a.m. (which says something about my sleeping these days), and saw dry sidewalks. The sun rose behind a pretty raw overcast on Saturday, but everything was dry; so the Rockets got hammered in a 0-4 blowout (a better game, actually, than the score suggests).
The rain we were supposed to get was from a low that was to head northeast up the coast from the Carolinas. In this case there was also a high over New England that was to steer it out to sea. Based on watching my barometer, it looks like the high won and held the low further south than the forecasters expected. We have had seasonally pretty high pressure for the last few days.
Weather forecasting these days, at least in New York, is pretty good. The five-day is not bad, and by the time you get within 3 days it is pretty reliable in its main features (e.g., rain vs. no rain, warm vs. cold). This is actually why the lack of rain this weekend impressed me. The forecast continued to call for showers long after they had failed to materialize as expected midday on Friday. The best I can do to explain it is the pressure story above and the observation that on the maps, New York was always on the northern edge of the region with predicted precipitation, so small changes in what actually happened could make the difference between getting rain and missing it entirely.
In talking about the lack of rain with another soccer parent who spends a lot of time in England, she noted that in England nobody pays any attention to the weather forecast. Thus it would seem that the weather is more predictable on the East Coast of the US than it is in England (it is not that we are better at it; the UK has one of the world's best Met Services). This is probably because the weather on the East Coast is pretty much all left over from what was in Minnesota 24-36 hours earlier. Basically, we can see it coming. England, on the other hand, is basically an island at the intersection of two or three serious oceans, and its weather comes at it from all directions. (The geography of our weather also accounts for the pretty stable springtime period of 5-7 days.)
That said, weather forecasting has still gotten pretty good. At least part of this skill is attributed to the fact that weather forecasters make literally thousands of predictions every year, and with the success or failure of those predictions they learn a little bit more about how to make predictions. There are theoretical reasons why weather forecasting is not likely to go much further into the future than the 5-7 days that forecasters now attempt. These reasons have to do with the complexity of weather systems and their sensitive dependence on initial conditions.
In fact, one of the founding scientists of the field of chaotic systems, Ed Lorenz, discovered sensitive dependence on initial conditions as he was developing the first very simple computer models of atmospheric dynamics. He was puzzled by the fact that if he restarted the model using output values of the model's parameters from some point in a previous run, the second run would rapidly diverge from the first. It turned out that in writing out the model's parameters he was rounding them off, and the round-off mattered.
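Here is a minimal sketch of Lorenz's observation, not his actual model or workflow: integrate his three-variable system twice with a crude Euler step, once from a "full-precision" state and once from the same state rounded off, and watch the tiny difference grow (the starting values are made up).

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz system (fine for illustration only)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# a state partway through a run, and the same state "written down" with fewer digits
full = (1.000127, 1.000127, 20.000127)
rounded = tuple(round(v, 3) for v in full)

a, b = full, rounded
for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = max(abs(p - q) for p, q in zip(a, b))
        print(f"after {step} steps the two runs differ by up to {gap:.4f}")
# the initial round-off of about 1e-4 grows until the two trajectories bear no resemblance
```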
So weather predicting is probably at its limit as far as how far out it can go. Inside of the theoretical limits I expect we will get better at predicting the details of when the rain will start and stop and how much we will get etc. Next weekend looks like a soaker, but it is Memorial Day weekend so no soccer game. What the following weekend holds with respect to the weather is anyone's guess, but the twins have another commitment so the Rockets will be weakened up front and in the mid-field.
Soccer this spring has been hammered by the weather. It seems that at times in the year, spring being one of them, the weather can run in roughly 7-day cycles, and this year the 1 or 2 days of that cycle that were rainy and raw tended to fall on Saturday. This weekend looked to be another rain-out. The Weather Channel multi-day forecast called for rain on Friday tapering off into showers Saturday morning. This is a recipe for canceled games, because the fields don't drain well, so even if the timing is off by half a day the fields can still be rendered unusable.
But the rain never developed. In fact, as Friday got closer the forecast changed from rain to showers, but it continued to call for precipitation over the crucial field-soaking period. So Friday dawned, and sure enough the clouds came in, but noontime came and went without any rain, and then so did mid-afternoon. I went to bed with dry streets, peeked out the window at 3 a.m. (which says something about my sleeping these days) and saw dry sidewalks. The sun rose behind a pretty raw overcast on Saturday, but everything was dry; so the Rockets got hammered in a 0-4 blowout (a better game, actually, than the score suggests).
The rain we were supposed to get was from a low that was to head northeast up the coast from the Carolinas. In this case there was also a high over New England that was to steer it out to sea. Based on watching my barometer, it looks like the high won and held the low further south than the forecasters expected. We have had seasonally pretty high pressure for the last few days.
Weather forecasting these days, at least in New York, is pretty good. The five-day forecast is not bad, and by the time you get within 3 days it is pretty reliable in its main features (e.g., rain vs. no rain, warm vs. cold). This is actually why the lack of rain this weekend impressed me. The forecast continued to call for showers long after they had failed to materialize as expected midday on Friday. The best I can do to explain it is the pressure story above and the observation that on the maps New York was always on the northern edge of the region with predicted precipitation, so small changes in what actually happened could make the difference between getting no rain at all and getting hammered.
In talking about the lack of rain with another soccer parent who spends a lot of time in England, she noted that in England nobody pays any attention to the weather forecast. Thus it would seem that the weather is more predictable on the East Coast of the US than it is in England (it is not that we are better at it; the UK has one of the world's best Met Services). This is probably because the weather on the East Coast is pretty much all left over from what was over Minnesota 24-36 hours earlier. Basically we can see it coming. England, on the other hand, is basically an island at the intersection of two or three serious oceans, and its weather comes at it from all directions. (The geography of our weather also accounts for the fairly stable 5-7 day springtime cycle.)
That said, weather forecasting has still gotten pretty good. At least part of this skill is attributed to the fact that weather forecasters make literally thousands of predictions every year, and with the success or failure of each one they learn a little more about how to make the next one. There are theoretical reasons why weather forecasts are not likely to extend much further into the future than the 5-7 days now attempted. These reasons have to do with the complexity of weather systems and their sensitive dependence on initial conditions.
In fact, one of the founding scientists of the field of chaotic systems, Ed Lorenz, discovered sensitive dependence on initial conditions as he was developing some of the first very simple computer models of atmospheric dynamics. He was puzzled by the fact that if he restarted the model using values it had printed out partway through a run, the second run would rapidly diverge from the first. It turns out that in printing out those values the computer was rounding them off, and the round-off mattered.
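A minimal sketch of that effect, using the textbook Lorenz-63 equations rather than whatever model Lorenz himself was running at the time; the parameter values and the crude Euler integration are my own illustrative choices, not a reconstruction of his experiment.

```python
import numpy as np

def step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one forward-Euler step of the Lorenz-63 equations
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# run 1: integrate from the starting point
state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    state = step(state)

# "write out" the state rounded to three decimal places, as if printed on paper,
# then restart run 2 from the printed values
restart = np.round(state, 3)

for n in range(1, 2001):
    state = step(state)
    restart = step(restart)
    if n % 500 == 0:
        # the tiny round-off grows until the two runs bear no resemblance
        print(n, np.linalg.norm(state - restart))
```

The restarted run tracks the original for a while and then wanders off entirely; that is the round-off mattering, and it is also the basic reason forecast skill runs out after several days no matter how good the model is.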
So weather prediction is probably near its limit in terms of how far out it can go. Inside of those theoretical limits I expect we will get better at predicting the details: when the rain will start and stop, how much we will get, and so on. Next weekend looks like a soaker, but it is Memorial Day weekend so there is no soccer game. What the following weekend holds with respect to the weather is anyone's guess, but the twins have another commitment so the Rockets will be weakened up front and in the mid-field.
May 15, 2003
Some thoughts on Determinism
At one time I spent a fair amount of time investigating the behavior of a certain kind of computer model. These models consisted of simple grids of squares. Each square had a value associated with it that would increase slowly. If at some time the values of two adjacent cells differed by more than some threshold, the values of all the cells would be redistributed until the entire system was once again below the threshold. It turns out that such a model can produce very sophisticated patterns of behavior (at the time, I was thinking about the sizes of earthquakes) if there is also a certain amount of randomness in the system (for example, error in how the system re-equilibrates when the threshold is crossed).
In the models that I was working with all of the values were integers (whole numbers like 1, 2, 3...). In doing the re-equilibrating, values from a cell are distributed among its neighbors, and thus there is a division. I would round all of my divisions back to integers, and it turns out that this rounding was enough randomness to drive the model into the region of interesting behavior.
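To make that concrete, here is a minimal sketch of the kind of model I have in mind, not a reconstruction of my original code. The grid size, the threshold, the per-cell loading rates, and the choice to re-equilibrate by integer-averaging adjacent cells are all illustrative assumptions; the point is only that the integer division is the sole source of "noise" and the whole thing is deterministic.

```python
import numpy as np

SIZE, THRESHOLD, STEPS = 20, 8, 500

# fixed per-cell loading rates, chosen once; the model is deterministic after this
rates = np.random.default_rng(0).integers(1, 4, size=(SIZE, SIZE))
grid = np.zeros((SIZE, SIZE), dtype=int)

def relax(grid):
    """Integer-average adjacent cells until no pair differs by more than THRESHOLD."""
    events = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(SIZE):
            for j in range(SIZE):
                for di, dj in ((1, 0), (0, 1)):
                    ni, nj = i + di, j + dj
                    if ni < SIZE and nj < SIZE and abs(grid[i, j] - grid[ni, nj]) > THRESHOLD:
                        avg = (grid[i, j] + grid[ni, nj]) // 2   # the rounding happens here
                        grid[i, j] = grid[ni, nj] = avg
                        events += 1
                        unstable = True
    return events

event_sizes = []
for _ in range(STEPS):
    grid += rates                      # slow, deterministic loading
    event_sizes.append(relax(grid))    # count redistributions as the "event size"

print("largest event:", max(event_sizes), "mean event:", round(float(np.mean(event_sizes)), 2))
```

Run it twice and it produces exactly the same sequence of events; there is no randomness anywhere except the remainders thrown away by the integer division.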
So what does this have to do with determinism? The behavior that was interesting is commonly associated with complex systems, sometimes called the edge of chaos. This behavior is very difficult or impossible to predict; it has sensitive dependence on initial conditions. In my models (and many others) it was also completely deterministic. That is, from the moment I started the program running, the entire path of its evolution was determined (using integers made this transparent, at least to me), even though that path could not be predicted without simply running the program.
This is interesting in the context of the machine / organism debate. Clearly the models I was working with were machine-like: while they were interesting, they were completely deterministic. On the other hand, looking at them from the outside, you would not necessarily know that. Furthermore, computer models can be constructed that modify their internal workings and the relationships among their pieces as they attempt to achieve certain (externally set) goals. One class of these codes is called genetic algorithms.
It is also interesting in the context of the "what do we do now" question. Consider humans as part of the system. We can change how we interact with each other and with the natural systems. But given the difficulty in predicting or understanding the impact of any given change, how should we decide what to do? In many ways answering this question is the task that I have set myself. I have the following thoughts as a place to start:
- Incrementalism is probably a good default approach.
- Plans should include monitoring impacts and contingencies for when things go wrong.
- We must recognize that Earth systems are dynamic. To the extent that there is any balance in nature, that balance is likely due to tension rather than stasis.
May 14, 2003
Microwave Popcorn
Begin Aside
a bit of late night contemplation...
End Aside
Consider microwave popcorn:
- $0.80 / bag (cheap; what is the margin on this stuff?)
- the bag (lets steam out, doesn't burn, etc.)
- the popcorn (probably hybrid)
- the grease / flavor (I don't really want to think about it)
- not to mention the ubiquitous oven itself
it is all technology...
The Idea of Wilderness
I am reading a book called The Idea of Wilderness by Max Oelschlaeger (1991, Yale Univ Press). Max spends a chapter on "Ancient Mediterranean Ideas", but I jumped straight from the Paleolithic and Neolithic to his discussion of the Modern. In making that jump it seems that one of his main points is that the earliest humans did not separate themselves from the wild, but saw themselves as part of the natural cycles. While he doesn't say so explicitly, part of the cycle image is also a sense that our linear or forward notion of progress was absent from the earliest human cultures.
Jumping to the Modern as I did, I missed a lot of the transition, but here is what I have gleaned about the difference (remember, this is what I think Max thinks, and I am still early on in the book...):
- Christianity has a "dominate the Earth" element to it. How that manifests has evolved, but it has been an important part of the development of Western thinking about the relationship between humans and nature. In general, though, it requires that humans understand and dominate nature to know God or to return to a state of grace.
- Within the Modern there is an important split between nature-as-machine and nature-as-organism. This split is presented as an evolution from organismic to mechanistic and Oelschlaeger traces its origins back to Galileo. In part Galileo's use of the telescope introduced science as measurement and nature as the thing to be measured.
- Of course a crucial element of the Modern is that humans stand outside of Nature. Even the romantic poets stood outside. They longed to get back in and they looked to Modernity to return them to the Garden.
- The nature-as-machine metaphor is traced back to Descartes and his mind / body split.
- The finalization of the split between civilization and wilderness is laid at Adam Smith's feet.
Wealth of Nations represents the realization of Merlin's dream: the base and valueless could now, with the facility of natural science and industrial technology, be transformed into a Heaven on earth. Consumption, and its never-ending growth, is the summum bonum of the Wealth of Nations, an ideal yet living today in the relentless pursuit of economic development. Through legerdemain, Smith transformed the first world from which humankind came into a standing reserve - a nature of significance only within a human matrix of judgment, devoid of intrinsic value. (p.94)
- The machine vs organism view continues to be important. In the machine causal relationships are linear and direct. In the organism they can be non-linear and complex.
May 13, 2003
Data, Results and Observations
Tonight's entry is an edited excerpt from a paper I wrote for my PhD thesis, though in the end it wasn't included there.
The terms, "data", "results" and "observations" are often used interchangeably. This reflects the fact that in the evolution of a field of study, the distinction between data, results and observations can become blurred. As work on a topic progresses, assumptions that were clearly stated in the beginning become taken for granted; they are no longer stated and exceptions to them are easily dismissed as mistakes. In an effort to see through the blurring, the following section presents some definitions ...
Begin Aside
The following are proposed definitions. They are how I would like to see the terms used not necessarily how they actually are used. In particular, computer modelers often talk about the data that their models produce.
End Aside
Data
Data are things that are accepted as being facts; this is analogous to being accepted as independent of any model. It is this characteristic which makes data useful in constraining models. Unfortunately, as noted yesterday, it is not possible for anything to be completely model independent; the very act of perception involves assumptions about what is likely to be seen. To accommodate this fact, data can be defined as things whose model we are willing to ignore or at least to accept without question. What is and is not data is a decision that is made in the context of the problem that is being considered. An example of something that I would consider data is the record of CO2 concentration measurements in the Keeling curve.
Results
Results are the output of some operation on data; results cannot be model free. Any sort of manipulation of data is done with some purpose and that purpose is determined by the choice of model. In the case of data reduction (e.g., averaging several measurements) the model is usually so widely accepted that the results are once again considered data. The fact remains that even averaging assumes something about the nature of the process which is being measured. An example of results would be the average temperature of Earth.
Observations
Observations are generalizations from data or results and, like data, they are meant to be model free. Also like data, it is not possible for them to be so. Observations are the most pernicious of model hiders. It is in generalizing or expounding upon data or results that our preconceived notions are the most invisible. An example of an observation would be to note that both CO2 concentration and the average temperature of Earth are increasing.
End Excerpt
So there you have it - some definitions. In the remainder of the paper that those come from I worked very hard to be consistent with how I used the terms. The problem is that the rest of the world hasn't read my paper and doesn't maintain the same vigilance in how it uses these terms. I think what I am hovering around here is that even as we investigate how the world works, we make assumptions about how it works. Those assumptions influence what we find, and so on.
In reaction to my Eqn 0 piece last night a reader pointed out that politics can also have a strong influence on what is model and what is data. (Is that a fair paraphrase, David?) No question about that, and that is why this whole problem is so insidious. Politics are buried even deeper than assumptions about linearity or homogeneity. Obviously there is much more to say on this ...
May 12, 2003
Equation 0
My favorite equation is the following:
data = model + residual (Eqn 0)
When it was first presented to me it was in the form:
residual = data - model (Eqn 0')
In the 0' form it is a way to compare your understanding of a problem (the model) with the way the world actually works (the data). Another way of thinking about Eqn 0' is that the model is the part of the world that you understand and the residual is the part of the world that remains to be explained. Your model of the world is acceptable to the extent that your residual is acceptable. In general, acceptable residuals are very much like low-volume white noise (they have no structure and low amplitude).
Begin Aside
Notice that I have not said that the model is true or false, only that it is acceptable or unacceptable.
End Aside
If your model is not acceptable there are two things that can be done. The first is to refine the parameters of the existing model. Let's say that our model of how much CO2 a car puts into the atmosphere is a linear function of how many miles it is driven. We might write that down as follows:
CO2 = a * miles + b (Eqn 1)
a and b are the parameters of the model: a is the slope of the line and b is the "0 intercept". The values of a and b are choices we make and can be adjusted based on the make and model of the particular car. Cars with better gas mileage will have lower values of a. The intercept value, b, will be very close to 0 and will vary with the driver of the car; in my quick thinking tonight, it might reflect the time that a driver allows her car to warm up before starting off.
The second option if your residuals are not acceptable is to change models. In the context of the example above perhaps the amount of CO2 emitted by a car is some more complicated function of its average speed:
CO2 = c * sqrt(avg speed) (Eqn 2)
In this case c is our adjustable parameter, but we have also introduced a non-linear element (the square root) and an aggregate factor (average speed). (I am not going to go into this further tonight; the important point is that there are alternate possibilities for our explanations of how the world works.)
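As a concrete (and entirely made-up) illustration of Eqn 0', here is a sketch that fits both Eqn 1 and Eqn 2 to some synthetic trip data and then looks at the residuals. The numbers, the noise level, and the fitting choices are all invented for the example; none of it is real emissions data.

```python
import numpy as np

rng = np.random.default_rng(1)
avg_speed = rng.uniform(10, 70, 200)                        # made-up average speeds
miles = avg_speed * rng.uniform(0.5, 2.0, 200)              # made-up trip lengths
co2 = 4.0 * np.sqrt(avg_speed) + rng.normal(0, 0.5, 200)    # pretend the world follows Eqn 2

# Eqn 1: CO2 = a * miles + b, fit by least squares
A = np.vstack([miles, np.ones_like(miles)]).T
(a, b), *_ = np.linalg.lstsq(A, co2, rcond=None)
resid1 = co2 - (a * miles + b)

# Eqn 2: CO2 = c * sqrt(avg_speed), fit by least squares
c = np.sum(co2 * np.sqrt(avg_speed)) / np.sum(avg_speed)
resid2 = co2 - c * np.sqrt(avg_speed)

# crude check for leftover structure: the spread of each residual and its
# correlation with average speed (which Eqn 1 ignores entirely)
for name, resid in (("Eqn 1", resid1), ("Eqn 2", resid2)):
    print(name, "residual std:", round(float(resid.std()), 3),
          "correlation with avg speed:", round(float(np.corrcoef(resid, avg_speed)[0, 1]), 3))
```

If the world really behaves like Eqn 2, the residuals from Eqn 1 keep some structure (they still correlate with speed), which is the signal that refining a and b will never be enough and the model itself has to change.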
OK, that is all fine and good, but what does it have to do with my preferred formulation of this equation, Eqn 0? Well, my preferred form suggests that the data we actually collect reflect what we expect to find plus some surprises. This is a variation on Kuhn's ideas of paradigms and paradigm shifts. In times of normal science, experiments are designed to explore the details of the paradigm (refining the values of a and b in Eqn 1); we only look for what we expect to find. In times of paradigm shift, the surprise part cannot be ignored and we must replace our models (Eqn 1 vs. Eqn 2).
The key issue here is that Eqn 0 and Eqn 0' are the same equation. Each form has surprise in it and models and data are acceptable to the extent that our level of surprise remains acceptable.
May 11, 2003
SARS slight return
Begin Aside
I am getting some great feedback on the modeling stuff, but I am feeling a bit frazzled tonight so I am going to shift gears briefly and let my thoughts on modeling simmer for a while.
End Aside
SARS is back in the news. It seems the disease has the potential for a bit of a second wind. The figure below summarizes my understanding of the disease. In individuals, there were reports of recurrence if the disease was not fully conquered in the first go-round. In communities, there seems to be the potential for new outbreaks as well. Toronto is seeing a pretty good resurgence. Part of the bump in Toronto seems to have to do with how SARS is defined.
What is the relationship between the progress of the disease in individuals and in populations? The evolution of the disease in individuals is the domain of medicine; the evolution of the disease in populations is the domain of public health. My guess is that the stubbornness of the virus in individuals is related to the potential for new outbreaks. In part I think this because of the importance of restricting exposure in controlling spread - in order to contain the disease, each victim must, on average, infect fewer than one other person.
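That last point can be made with a toy branching-process sketch. The numbers here are purely illustrative, not SARS estimates; the only point is that when the average number of new infections per case drops below one, chains of transmission fizzle out.

```python
import numpy as np

rng = np.random.default_rng(42)

def outbreak_size(r, generations=20, seed_cases=5, cap=10_000):
    """Total cases when each case infects a Poisson(r) number of others."""
    total = cases = seed_cases
    for _ in range(generations):
        cases = rng.poisson(r, size=cases).sum() if cases else 0
        total += cases
        if total > cap:            # stop counting runaway outbreaks
            break
    return total

for r in (0.7, 0.95, 1.5):
    sizes = [outbreak_size(r) for _ in range(200)]
    print(f"R = {r}: median outbreak {int(np.median(sizes))} cases, max {max(sizes)}")
```

In this sketch, with R = 0.7 almost every run dies out after a handful of cases, while with R = 1.5 most runs take off; the whole game of quarantine and exposure control is pushing that number below one.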
When I wrote about SARS previously, it looked like the mortality rate was reasonably low. It turns out that the mortality rate is higher than previously thought; it may be as high as 17-20%. As might be expected, mortality varies strongly with age. For people over 60, the mortality is quite high. I am quite interested in the age structure of the mortality - is it typical for infectious diseases or is it unusual?
As I noted earlier, SARS is a coronavirus and it does seem to have moved between animals and humans.
May 09, 2003
A course I'd like to teach
Tonight I am going to do something a little different. Rather than a mini-essay, I am going to present the syllabus for a course that I think would be fun to teach someday. In keeping with the spirit of the blog, it is a first pass not a finished product.
Introduction to the Science of the Environment
I. Introduction to Scientific Thinking (2 weeks)
- Brief history of Cosmology
- The Calculus
- Kuhn, Popper, Feyerabend (or why the extreme postmodern doesn't matter)
- Reductionism and Determinism
II. Cycles and Systems (4 weeks)
- Rocks (how old is Earth and how do we know?)
- Hydrologic Cycle
- Ocean and Atmospheric Circulation
- Carbon & Nitrogen Cycles
- Industrial Ecology
III. Models (1 week)
- Conceptual models
- Analytic models
- Computer models
- Data, Models and Results (chickens and eggs)
Review and Mid-term exam
IV. Introduction to Ecosystems (2 weeks)
- Trophic structures
- The work of the Odum Brothers (energy flows through natural systems)
- Types and Distributions
- Urban Metabolism and Ecologic Footprints
V. Introduction to Public Health (3 Weeks)
- Aggregate vs Individual health
- Toxicology
- Epidemiology
Begin Aside
This is a topic that could use some more thought. Clearly it is the one I am least familiar with.
End Aside
VI. Wrap up / Tie it all together (1 week)
Final Paper - Write an intellectual history of a major environmental concept (chosen from a list e.g. Sanitation, Clean Air, Climate Change etc.)
May 08, 2003
Systems, Systems, Systems
In my "Name Change" entry a few days ago I wrote about singular vs plural Earth systems. But what is all this noise about systems? What makes "systems thinking" so important?
Systems thinking is usually contrasted with reductionism. Reductionism is the mode of inquiry that has taught us most of what we know about how Earth works. A reductionist approach to a problem considers the problem in isolation from all other problems and influences. It takes the problem apart bit by bit and then takes the bits apart (turtles the rest of the way down). At the heart of a reductionist approach to learning is the assumption that if all of the bits are understood, then the whole will also be understood. Just as it isolates the problem, the reductionist approach isolates the understanding of each of the component bits from the others (OK - I am bashing a bit here, but as I noted early on, I do that sometimes). Reductionism works well to the extent that problems and bits actually are fairly isolated from each other, and that understanding of a bit in isolation is neatly related to understanding of that bit when it is put back into the whole.
However, when bits interact with other bits in non-linear (unexpected / interesting) ways in the context of the whole, reductionism does not work as well as a mode of inquiry, because understanding a bit in isolation only tells you about its behavior in isolation. It doesn't tell you the complete story of how that bit contributes to the function of the complete (here it comes...) system.
So systems thinking is an effort to understand problems in their entirety, complete with all of the messy interactions among bits. I sometimes think of the systems approach as one which considers a problem from the perspective of black boxes. Where in a reductionist approach one would relentlessly deconstruct the boxes, a systems approach focuses on the function of the box. How does the box transform a given input into an output? Where does the box get its inputs? Where does a box send its output? The detail of how the box transforms an input into an output is ignored in favor of understanding the interconnection of black boxes and the transformation of signals as they move through the system.
Now clearly systems can have subsystems. (I really must get the hierarchy thread started soon!) And deconstructing a system into subsystems definitely has a reductionist quality to it. I tend to think that one of the fundamental differences is that reductionism has an inherent assumption of linearity underlying it. In a reductionist frame, the bits have to go back together again in such a way that what you learned in isolation is still the dominant thing to be known; this seems to rule out any cross terms or feedback relationships. In a systems approach it is not assumed that the role of any box can be understood completely on its own. It might be useful to send signals through a box in isolation in order to understand its transforming properties, but by keeping track of and focusing on connections in a systems approach, the "putting back together" problem is always under consideration.
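A minimal sketch of what I mean, with made-up transfer functions: each "box" below is trivial to characterize on its own, but once they are wired into a feedback loop the combined response is not what you would predict by simply chaining the isolated behaviors.

```python
import math

def box_a(x):
    """A linear amplifier: in isolation, output is just 2x the input."""
    return 2.0 * x

def box_b(x):
    """A saturating box: roughly linear for small inputs, flat for large ones."""
    return math.tanh(x)

def coupled_response(signal, steps=50):
    """Box A feeds box B; half of B's output is fed back and added to A's input."""
    feedback = 0.0
    for _ in range(steps):               # iterate until the loop settles
        out = box_b(box_a(signal + feedback))
        feedback = 0.5 * out
    return out

for signal in (0.1, 0.5, 2.0):
    naive = box_b(box_a(signal))          # boxes studied in isolation, then chained
    print(f"input {signal}: chained-in-isolation {naive:.3f}, with feedback {coupled_response(signal):.3f}")
```

The particular numbers don't matter; the point is that the feedback connection, not the boxes themselves, carries most of the interesting behavior.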
Begin Aside
If any one is actually reading this, I would love some feedback on my equating reductionism with linearity!
End Aside
So wrapping up for tonight - reductionism has taught us a lot about how the world works and it will teach us more. But by adding a strong measure of systems work to our knowledge-producing endeavors, we can gain a richer understanding of problems by focusing on interactions among the subjects of interest.
In my "Name Change" entry a few days ago I wrote about singular vs plural Earth systems. But what is all this noise about systems? What makes "systems thinking" so important?
Systems thinking is usually contrasted with reductionism. Reductionism is the mode of inquiry that has taught us most of what we know about how Earth works. A reductionist approach to a problem considers the problem in isolation from all other problems and influences. It takes the problem apart bit by bit and then takes the bits apart (turtles the rest of the way down). At the heart of a reductionist approach to learning is the assumption that if all of the bits are understood, then the whole will also be understood. Just as it isolates the problem, the reductionist approach isolates the understanding of each of the component bits from each other (OK - I am bashing a bit here, but as I noted early on, I do that sometimes). Reductionism works well to the extent that problems and bits are actually fairly isolated from each other and that understanding of the bits in isolation is neatly related to the understanding of the bit when it is put back into the whole.
However when bits interact with other bits in non-linear (unexpected / interesting) ways in the context of the whole, then reductionism does not work as well as a mode of inquiry because understanding the bits in isolation only tells you about its behavior in isolation. It doesn't tell you the complete story of how the bit contributes to the function of the complete (here it comes...) system.
So systems thinking is an effort to understand problems in their entirety complete with all of the messy interactions among bits. I sometimes think of the systems approach being one which considers a problem from the perspective of black boxes. Where in a reductionist approach one would relentlessly deconstruct the boxes, a systems approach focuses on the function of the box. How does the box transform a given input into an output? Where does the box get its inputs? Where does a box send its output? The detail of how the box transforms an input into an output is ignored in favor of understanding the interconnection of black boxes and the transformation of signals as they move through the system.
Now clearly systems can have subsystems. (I really must write get the hierarchy thread started soon!) And deconstructing a system into subsystems definitely has a reductionist quality to it. I tend to think that one of the fundamental differences is that reductionism has an inherent assumption of linearity underlying it. In a reductionist frame, the bits have to go back together again in such a way that what you learned in isolation is still the dominant thing to be known, this seems to obviate any cross terms or feedback relationships. In a systems approach it is not assumed that the role of any box can be understood completely on its own. It might be useful to send signals through a box in isolation in order to understand its transforming properties, but by keeping track of and focusing on connections in a systems approach, the "putting back together" problem is always under consideration.
Begin Aside
If any one is actually reading this, I would love some feedback on my equating reductionism with linearity!
End Aside
So wrapping up for tonight - Reductionism has taught us a lot about how the world works and it will teach us more. But by adding a strong measure of systems work to our knowledge producing endeavors, we can gain an richer understanding of problems by focusing on interactions among subjects of interest.
May 07, 2003
The Grey Area
It is remarkable that I have gotten this far without introducing the following diagram. I call it a disciplinary map and use it to help me think about how to design interdisciplinary teams to address Earth systems problems.
Earth Systems Disciplinary Map
The traditional disciplinary organization of the university is represented by the lobes in the figure. My own discipline, geophysics, maps into the blue lobe along with the other physical sciences such as physics and chemistry. The magenta lobe represents the life sciences, which are primarily biology and medicine. These two lobes together make up what is traditionally thought of as the Natural sciences. The yellow lobe labeled "Human Processes" is the region of Simon's artificial and contains the social sciences, engineering and the humanities. At the intersections of the main lobes are regions that represent interdisciplinary enterprises. Of particular note is that I map ecology into the purple region between physical and biological processes.
Begin Aside
Exercise for the reader: Where does mathematics map?
End Aside
An example of how problems might progress through the diagram as we learn more about how Earth works is provided by climate change. The problem of increasing CO2 concentration in the atmosphere was first noticed by atmospheric chemists (blue lobe). Understanding the resulting greenhouse warming requires understanding both the CO2 chemistry of the atmosphere and the human processes associated with fossil fuel burning (yellow lobe) and thus maps into the green region. Continuing on with the problem to understand the resulting impacts and the carbon system as a whole requires that we also understand the biological systems that sequester and otherwise move carbon around; this is an Earth systems problem and maps into the grey area.
As you move from the edges of the diagram to the center, interactions among processes studied by different disciplines become increasingly important. As we progress we need to work in all regions of the diagram, but as human impacts on the functioning of Earth systems become more important, we must pay increasing attention to the grey areas.
May 05, 2003
Natural and Artificial
The Natural and The Artificial
Herbert Simon makes a distinction between the Natural and the Artificial that I have found useful over the years. To a first cut, the Natural is everything that can be separated from human intentionality and the Artificial is that which reflects human intentionality. In more detail:
- Artificial things are synthesized by humans
- Artificial things may imitate appearances of natural things
- Artificial things can be characterized in terms of functions, goals, adaptation
- Artificial things are often discussed, particularly when they are being designed, in terms of imperatives as well as descriptives
- Natural things are everything that is leftover
Over the last several decades the Artificial has encroached steadily on the Natural. It is sometimes stated that there are no ecosystems on Earth that are untouched by human influence (indeed, to the extent that those systems have plants with a physiological response to the concentration of CO2 in the atmosphere, this statement must be true). But being touched does not imply that the systems themselves are now artificial; the details of their function may be changed, but that function is still independent of human intentionality. That distinction and argument is fairly straightforward in the Arctic, but what about in Manhattan?
As I have thought about urban environmental systems, I have begun to question the usefulness of Simon's distinction in such highly engineered environments. I cannot quite convince myself that the Natural has been completely swamped (after all, the winds still obey the laws of fluid dynamics), but at some level it seems that the distinction has been thoroughly blurred.
Consider the trophic structure of a city. To what extent does it reflect human intentionality? The biodiversity is extremely low, with a small number of very hardy species dominating the biomass. Those hardy species depend in large part on human refuse for their food. Thus we might argue that while very large parts of the urban ecosystem imitate the appearance of natural ecosystems (albeit sick ones perhaps), they are in fact synthesized by humans. The ecosystem of a city is in large part a by-product of the rest of the city's designed elements. The trophic structure of the NYC subway is an unintended consequence of the need to move a remarkably large number of people around a very small and congested environment; is the trophic structure in that environment natural or artificial? In terms of imperatives, we try to destroy it constantly.
There has been a lot of discussion about whether humans are part of the natural system. To my mind that is a question that does not have a useful answer. Humans are clearly a part of the singular Earth system, but we are a special part. We have intentionality that is of particular importance to us. So where there was a time during which humans might consider themselves apart from nature, we are now increasingly intertwined with nature despite (or perhaps because of) our efforts to isolate ourselves from the vagaries of Earth's natural systems.
Wrapping up, I think that it is important that we carry forward the notion of a distinction between that which reflects human intentionality and that which is independent of it, but that we also recognize that the distinction may be becoming increasingly difficult to discern. This blurring increases the importance of developing decision-making processes that recognize the potential for far-reaching human impacts on Earth's functioning.
May 03, 2003
Name Change
I changed the name of my blog today from Planetary Management to Earth Systems Management. The change reflects a couple of things. First, I did a Google search on "planetary management" and got a lot of stuff that was oriented to imaginary planets rather than to the very real one we currently (and for the foreseeable future) live on. I want to steer clear of fictional enterprises because I believe that we really are making management decisions (through action or inaction) at the scale of Earth, and I do not want my ideas to be confused with fiction (despite the fact that I have on occasion been known to make things up).
The second reason that I changed the name is that I wanted to get a plural noun into the title. The expansion of the number of disciplines studying Earth's functioning clearly indicates that there is more than one system at work on our planet. The disciplinary focus has taught us much about those systems. It is now time to work on understanding the interaction of those systems.
In particular, our management decisions are likely to reflect the mode of our inquiry. For example, fisheries management decisions in the mid-20th century focused only on the lifecycle of individual species, primarily because that is what fishery biologists were interested in. We now know that environmental factors such as El Nino variations and inter-species interactions such as trophic structures are also important in fish stock dynamics, and our management strategies are beginning to reflect this.
Begin Aside
Imagine a surface of constant but very low nitrogen concentration. If that concentration is low enough, most of Earth's atmosphere would be inside of it. For all intents and purposes, that surface would define a closed system with respect to mass (that is, all of Earth's stuff would be inside of that surface; this ignores some gases going up and some rocks coming down). Energy would cross that boundary, and in the long run the balance of the energy fluxes that cross that surface is all that we have to go on.
End Aside
I mentioned that I wanted to get a plural noun into the title to call attention to the fact that there are multiple systems to be considered. But as I have also alluded to, at some large scale there is a single Earth System. How we parse that singular system is crucial to how we will understand it and how we will make decisions that affect the evolution of the singular and plural systems.