BBC BLOGS - Richard Black's Earth Watch

Climate review seeks detachment

Richard Black | 23:41 UK time, Wednesday, 10 March 2010

There's little doubt, I think, that the forthcoming review of the Intergovernmental Panel on Climate Change (IPCC) can make quite a lot of difference to the organisation itself.

(This is the review that was demanded last month by ministers, and whose terms of reference and operating agency the UN has just announced, entrusting the running of it to the Inter-Academy Council, an umbrella body for science academies independent of the UN.)

Many scientists who have served in the IPCC believe its 22-year-old shape is no longer fit for purpose, and have said so publicly.

Its chief, Rajendra Pachauri, was talking about the need for an internal review before the UN announced this external one; and it is surely impossible that there is nothing that can be improved in the working practices of an organisation that was conceived before instantaneous electronic distribution of information became the norm and before climate science became the political battleground it is now.

A bigger question is whether the review can have much impact outside the organisation. Will governments be any keener to act on the recommendations of a reformed IPCC? Will the public find its currently rather impenetrable phraseology easier to decipher? Will it be more widely trusted?

It's possible to divide published opinions on the issue into three broad categories: those concerned only with getting across the message that man-made climate change is an over-riding threat requiring urgent action; those concerned about the issue but more troubled by what they see as a lack of rigour and transparency within the IPCC; and those convinced that global warming is a fraud anyway, and the IPCC one of the lead swindlers.

Those in the first group are unlikely to be influenced by the review, even if it eventually contains damning passages.

Those in the third group are unlikely to be swayed by anything praiseworthy; in fact I have e-mails coming in right now that are already assuring me that the review will be a whitewash, which is I suppose a logical conclusion if your frame of reference is that everything about climate change is just a conspiracy.

It's the second group that intrigues me, including as it does some pretty smart and independent-minded people.

Most have yet to comment. One who has, Roger Pielke Jr, describes what we know about the review so far as a "good start", but has some words of caution as well. I'll be watching the blogosphere and the op-ed-o-sphere with interest over the next couple of days to see what other thoughts come up.

One issue that was raised at the UN news conference - who raised it I cannot tell, as I listened to the conference remotely in London - was how independent the scientists on the Inter-Academy Council's review panel will be from the scientists who contributed work to the IPCC in the first place.

It's a natural question to ask. There's clearly a chance that the first people you would think of for such a panel would be the most eminent climate scientists of the day, and they are highly likely to have been intimately involved with the IPCC at some juncture.

There's also the wider point that some of the institutions involved with the Inter-Academy Council, such as the UK's Royal Society, have taken a very public stance on climate change.

But to assume this will automatically cause problems for the review is, I think, to misunderstand its nature and purpose.

It is not a review of climate science - some would say it ought to be, but it isn't; it's a review of IPCC practice - and it will surely draw more interesting and meaningful conclusions by involving scientists working in completely different fields, with experience of completely different collating organisations.

They do exist; medicine alone has many. One that provides an interesting comparison is the Cochrane Review process, which aims to provide something analogous to IPCC reports - regular assessments of the evidence base on its chosen subject - but works very differently.

Will the Inter-Academy Council choose to make use of expertise from fields apparently unrelated to climate science? We shall see - and that, perhaps, will be one of the factors that determines how meaningful and visionary the review turns out to be, and how it is eventually perceived.



  • 501. At 4:50pm on 18 Mar 2010, oldterry2 wrote:

    in 480. SR wrote:
    " The total cloud forcing is around 13W/m^2."

    Are you sure you haven't forgotten a minus sign - even the NASA web site in a section called 'Annual average net cloud radiative forcing' says 'Overall, clouds have the effect of lessening the amount of heating that would otherwise be experienced at Earth's surface'.

    " This is why CO2 is so important, and WV can be regarded as a positive amplifier, rather than an initiator, of warming."

    Sigh, if anything is an amplifier then it has to be in a mode exhibiting positive feedback; and since something is evidently preventing a runaway WV effect, either WV is not acting as an amplifier or another factor is limiting its effect. Either way the assumption that WV can magnify any heating effect is a non-starter, so the CO2 effect has to be regarded on its own.


  • 502. At 5:28pm on 18 Mar 2010, RobWansbeck wrote:

    @493, JaneBasingstoke
    Thanks for correcting the link.

    Although Judith Curry may draw a distinction between different types of critics Michael Mann makes no such distinction. From the linked interview he has this to say about Dr. Curry:

    Q. Judith Curry has been an outspoken critic of your work and of a lot of climate researchers in general.
    A. Did you ask Judith to turn over her e-mails from the past three years? Once she does that, then she’s in a position to judge other scientists. Until she does that, she is not in a position to be talking about other scientists. Glass houses. Look, I’ll just say this. I’ve received e-mails from Judith that she would not want to be made public.

    Notice that he does not defend his work but attacks the person.


  • 503. At 9:59pm on 18 Mar 2010, RobWansbeck wrote:

    @498, SR wrote:

    “The proxy data are calibrated against local temperature records to determine the relationship between growth and temperature.”

    Wrong, have you never heard of 'teleconnection'?

    “There is also a verification period, where the reconstruction is 'tested' and compared with another, separate portion of the instrumental record. You can go some way to instilling confidence in your reconstruction by analysing the performance during this time.”
    And what are the R2 numbers for MBH98?

    “What M&M did is basically do the analysis using different proxies to the ones Mann selected”

    They changed one proxy series at a time. That's called a sensitivity analysis.
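    (A leave-one-out check of that sort is easy to sketch. The toy below is purely illustrative: the "reconstruction" is just an average of the series, nothing like the actual MBH98 or M&M procedures.)

```python
# Toy leave-one-out sensitivity analysis (illustrative only).
def reconstruct(proxies):
    """Stand-in 'reconstruction': the element-wise mean of the series."""
    n = len(proxies[0])
    return [sum(p[i] for p in proxies) / len(proxies) for i in range(n)]

def leave_one_out(proxies):
    """Largest change in the reconstruction when each series is dropped."""
    base = reconstruct(proxies)
    impact = []
    for k in range(len(proxies)):
        subset = proxies[:k] + proxies[k + 1:]
        alt = reconstruct(subset)
        impact.append(max(abs(a - b) for a, b in zip(alt, base)))
    return impact

proxies = [[0.1, 0.2, 0.3],
           [0.0, 0.2, 0.4],
           [0.1, 0.1, 5.0]]  # the third series contains an extreme value
print(leave_one_out(proxies))  # the third entry dominates
```

    Dropping the extreme series changes the result far more than dropping either of the others, which is the point of such an exercise: a reconstruction whose shape depends on one series is fragile.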

    “but it has been shown by all the experts in the field that the ones M&M use are not reliable indicators”

    You mean they don't produce a Hockey Stick? The data series used to give the Hockey Stick its shape were never intended to be temperature proxies – the correlation in the calibration period is spurious hence 'hide the decline'.
    And remember some of these unreliable indicators used by M&M were merely up to date versions of series used by Mann.

    “therefore are BOUND to show a low statistical significance in the verification period.”

    What is the R2 statistical significance of MBH98 in the verification period?

    “Of course, the sceptics with little knowledge of the process don't know this.”

    Oh, what blissful ignorance!

    And, to finish with “MULTI-PROXY approaches have yielded the SAME result”

    Upside-down building work.


  • 504. At 08:28am on 19 Mar 2010, bowmanthebard wrote:

    There is a place for fine-tuning in science. Unfortunately, a conceptual vacuum is not the place!

    Fine-tuning is fine-tuning of numbers. It does not involve a change in concepts. For example, before Newton it was generally believed that planets move on something like "tracks". With that assumption in place, fine-tuning observations of planetary positions was misguided. What was needed was a whole new conceptual "take" on the solar system, which was eventually provided by Newton.

    We know that current climate science is in a much worse position than Keplerian science, because its predictive powers are so much poorer. This is surely the wrong place for mere numerical fine-tuning, because we don't have any good reasons for thinking there is a lawlike connection between older observations and newer ones. In fact the predictive failures strongly suggest that there isn't one at all (which is quite possible) or, if there is such a connection, that it is entirely hidden from us.

    Somewhere or other, a lot of people seem to have got the idea that numerical rigour is sufficient for science, when it often isn't even necessary -- it is certainly no substitute for science.


  • 505. At 1:08pm on 19 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #496

    It's very simple Bowman.

    Your rules would have gagged Kepler. Your rules would also gag many of today's theorists because of the division between theorists and experimenters in many areas of science. Your rules would probably have gagged Feynman.

    And it was you who introduced the "painting by numbers" approach to explaining the scientific method.


  • 506. At 1:21pm on 19 Mar 2010, Dave_oxon wrote:

    Of course mathematical rigour, though necessary in the sciences, is not the same as scientific rigour. The numerical rigour I was referring to in my post #500 (particularly the desirable MC type process which is possible with smaller models) shows the attempt to de-couple the numerical effects of tuning from the science embodied in the purely theoretical aspects of the model.
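    (The bare bones of such an MC exercise can be sketched with a toy model. Everything below, the one-line "model" and the parameter range alike, is a made-up illustration, not anything taken from a real GCM.)

```python
import random

# Toy Monte Carlo exploration of a tuned parameter (illustration only).
# The "model" is a made-up one-liner: response = forcing / (1 - feedback),
# where the forcing term is fixed by theory and the feedback is tuned.
def toy_model(forcing, feedback):
    return forcing / (1.0 - feedback)

random.seed(42)
samples = [random.uniform(0.2, 0.6) for _ in range(1000)]  # assumed tuning range
runs = [toy_model(3.7, f) for f in samples]

# The spread of the output indicates how much of the result is down to
# the tuned parameter rather than the theoretically constrained forcing.
print(min(runs), max(runs))
```

    A wide spread relative to the quantity of interest is a warning that the tuning, not the theory, is doing the work.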

    This combination of empiricism and theory is a necessary (however undesirable) aspect of GCMs, and of other branches of science dealing with complex systems. It is necessary because, without it, it is impossible to investigate a complex system at all. The clue is in the label "complex": by its very nature such a system defies description by a "simple" theory.


  • 507. At 1:33pm on 19 Mar 2010, JaneBasingstoke wrote:

    @RobWansbeck #502

    Actually this is the one thing that is incomplete in Judith Curry's analysis. She hasn't picked up on all the different ways the worst of the sceptics can be unpleasant.

    Ironically Stephen McIntyre has, and he has consistently distanced himself from it. I think this is partly because he has seen similar unpleasantness inflicted on respectable sceptics.

    I believe this sort of unpleasantness is corrosively polarising, and damages the credibility of the honest debaters on both sides.


  • 508. At 2:20pm on 19 Mar 2010, bowmanthebard wrote:

    #505 JaneBasingstoke wrote:

    "Your rules would have gagged Kepler."

    Rules? Which rules? I'm against rules.


  • 509. At 2:21pm on 19 Mar 2010, bowmanthebard wrote:

    "it was you who introduced the "painting by numbers" approach to explaining the scientific method."

    I have always used the term as an expression of my contempt for induction minus all understanding.


  • 510. At 2:28pm on 19 Mar 2010, bowmanthebard wrote:

    Ah wait -- do you mean the "rules of thumb" I mentioned?

    Those are not meant as "procedures that must be followed" so much as quick ways of guessing or approximating some result.

    For example, I would say that as a rule of thumb, the best way to fix a "hung" computer is to switch it off and then switch it back on again. I advise people to try that first, but they are free to disembowel their machine if they like!


  • 511. At 7:36pm on 19 Mar 2010, bowmanthebard wrote:

    #506 Dave_oxon wrote:

    "the science embodied in the purely theoretical aspects of the model."

    Maybe I've been unfair/uncharitable in failing to see the purely theoretical aspects of the model. If so, I'm sorry, because I'm genuinely interested. But what would a "purely theoretical aspect" of a model look like, if not a simple, clear, albeit risky explanation of something we didn't understand before?


  • 512. At 11:24pm on 19 Mar 2010, thePedantsRrevolting wrote:

    In physics we have models too; we like our models to predict things, and then the experimental physicists go out and measure whether the prediction is right. If the prediction is unique to your model and the answer's right, then your model is probably right. The problem with climate science is that the models predict things 50 or 100 years ahead, so there's nothing to measure now. I want to tell them, come back in 50 years when some of your model's predictions have been tested, but there's another problem: their models say bad things will happen unless we do something now. That's the climate science dilemma.
    We're left with looking at what they do and asking: is it good science, are they right? This worries me because it's not always easy to tell.
    We publish science in reputable peer-reviewed journals; this is not a great system but it does get rid of a lot of Junk Science. I look at the number of articles that say they're right and the number that say they're wrong, and the right pile is much bigger, but that isn't proof. When Millikan measured the charge of the electron he got the answer a bit wrong; the scientists who measured it soon after him were keen to get the same answer, and they did. It was only some time later that the error was corrected, so a bigger right pile isn't proof, though it may be an indicator.
    If you're doing science that can't be proved yet then integrity is really important; you need to lean over backwards. You should report everything that you think might make your work invalid, not only what you think is right about it. Details that could throw doubt on your interpretation must be given; if you know anything at all wrong, or possibly wrong, you have to explain it. Sometimes climate scientists haven't done this, and that worries me too.
    Then you have huge IPCC reports which only put the case for global warming and have been shown to have some exaggerations in them. The IPCC owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources. Reality must take precedence over public relations.
    What about the other side, the people who say global warming is pseudo-science? They have lots of arguments but very few of them seem to do any experiments, and this makes me suspicious; you would think they would be desperate to do their own experiments to prove their case. Of course there's nothing wrong with Skepticism; Skepticism is essential in science, and without it nobody would do anything new. You just have to make sure, if you're going to accept their arguments, that their kind of Skepticism is an "attitude of uncertainty" and not a "denial of facts".
    You want an answer and I can’t give you one but if you force me to gamble, then I’d bet on the ones who at least try to do science before I’d bet on those who just talk about it but, if you want it to be a safe bet, then you need to force them to be honest with themselves and with you.


  • 513. At 12:50pm on 20 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #509

    OK. Misunderstanding. I thought your "painting by numbers" was referring to the scientific method algorithms in #461 and #489.


  • 514. At 12:53pm on 20 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #508

    "Rules? Which rules?"

    You were laying down the law as to what constitutes scientific method.

    "I'm against rules."

    I'm not. I'm for rules, including meta-rules about rules. Well designed rules make the world a better place. Rules allow us to do things which would otherwise be impossible, including language and the internet.

    Here are some more rules that I like (scroll down to XXIX), although tragically it can be difficult to get the Government to stick to them.

    I am however against excessive rules, excessive enforcement of rules, lopsided rules, and rules that offend my sense of fairness, proportionality and justice.


  • 515. At 2:12pm on 20 Mar 2010, bowmanthebard wrote:

    #514 JaneBasingstoke wrote:

    You were laying down the law as to what constitutes scientific method.

    I didn't mean to lay down the law, nor to prescribe scientific method, and I apologize if I came across that way. I'm exaggerating a bit when I say I'm "against rules", but I wasn't exaggerating all that much when I said I was with Feyerabend in being "against method" in general. There is much to be said for "blowing a trumpet at the tulips every morning just to see what happens". Science requires creativity and inspiration and playfulness -- even the day-to-day "problem solving" (as Kuhn described it) requires them -- and algorithms stifle all of them. Algorithms are for computers, not people -- and especially not for scientists or artists.

    But I was being critical of what I consider bad methodology. I was trying to point out something vitally important, something that is probably too big an issue to do more than point at here. My rejection of inductivism in science is a part of a much bigger rejection of "foundationalist" epistemology.

    The idea that I am rejecting -- and have devoted much of my working life to rejecting -- is the nearly universal idea that we have knowledge when our beliefs "rest on secure foundations". That idea has been around since Plato's time (and probably much earlier) and it's wrong. That is the real culprit behind inductivism. These scientists think the way to get scientific knowledge is to start off with observations, because that's the way to "lay the foundations". It's "ironic" that people who are so dismissive of philosophy are guided by the mistakes of the philosophers they haven't bothered to think about.

    If you are interested in finding out more about foundationalism and what's wrong with it, there are some very good books available. I strongly recommend Quine and Ullian's The Web of Belief. There is a (rather garbled) version of it available online somewhere, which will give you a flavour of the way it's written. The online version has many misprints (from scanning OCR), as I recall, and possibly some gaps, but the idea is just to get a taste. Then you can order the printed copy and become a convert. (Just kidding!)


  • 516. At 2:12pm on 22 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #515

    I am unable to answer all the points raised in your #515 without some serious reading of the source you ask me to look at.

    However I do need to point out that the aspects of the scientific method covered by the algorithm in your #461 and subsequent discussion were not those aspects of science involving creativity.

    Creativity would cover actually coming up with the hypothesis and actually designing any tests, including any method of making relevant observations. It is complementary to any "scientific method" algorithm, not a replacement for it.

    (As a minor point, design/discovery of algorithms can be "creative", as can be design/discovery of ways of implementing them.)

    It also seems a little confusing for your post #461 to clearly outline an algorithm and for you to then deny the relevancy of algorithms to scientific method.

    Meanwhile here is another description of the scientific method. The description is clearly algorithmic. (Link posted for your amusement, I expect you know the material.):


  • 517. At 4:39pm on 22 Mar 2010, Dave_oxon wrote:


    Many processes within the models are represented by mathematical models of the physical processes involved. For a specific example I refer you to the paper of Hansen et al describing one of the earlier versions of their GCM:
    "Efficient three-dimensional global models for climate studies: Models I and II", Hansen et al, Monthly Weather Review, 111(4), p. 609

    (available to download as a pdf if you want to search for it!)

    In this paper, for example, the radiation absorption terms are given by equations 8-11 and represent the physics of radiation absorption by certain gases. (These equations are themselves based on a theoretical distribution function.) Of course there are coefficients for each gas which vary with wavelength, pressure and temperature - these are measurable quantities (in the same way that the constant G in Newton's equation of gravitation is a measurable quantity) and as such are not subject to tuning. The point is that radiation absorption in the model has a basis in the theory of radiation absorption. I chose this particular example specifically to illustrate how the complexity of the theory of radiation absorption compares to the simple calculations of CO2 radiation absorption that have been discussed at length on this blog. The logarithmic dependence of absorption is, of course, well known (and it's really not that simple!) and there's a fair amount of integration over the distribution functions to carry out as well!
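    (For anyone wanting the "simple calculation" version of that logarithmic dependence: the commonly quoted curve fit, from Myhre et al. 1998 rather than from the Hansen paper, is dF = 5.35 ln(C/C0) W/m^2. A sketch:)

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit).

    This is a curve fit to detailed radiative-transfer results, far
    simpler than the band-model integrations carried out inside a GCM.
    """
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2 gives roughly 3.7 W/m^2, whatever the baseline:
print(round(co2_forcing(556.0, 278.0), 2))  # -> 3.71
```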

    Conversely, equations 12 and 13 specify a parameterisation for the ocean albedo and radiative emissivity, which are not well understood. This is a part of the model which may be subject to tuning and is therefore subject to the investigations and caveats mentioned in my post #500.

    It is the combination of these two types of calculation in the model which leads to the phrase "semi-empirical" being used to describe them. It is the inclusion of the former which gives the entire model its basis in theory and hence its falsifiability. The inclusion of the latter, though undesirable, is unfortunately necessary to yield any result that may be measured at some future date and hence provide the "test" required by the scientific method.

    Hope this has helped a little to elucidate the inner workings of complex modelling.


  • 518. At 5:55pm on 22 Mar 2010, bowmanthebard wrote:

    #516 JaneBasingstoke wrote:

    "It also seems a little confusing for your post #461 to clearly outline an algorithm and for you to then deny the relevancy of algorithms to scientific method."

    I would strenuously deny that my message #461 contains an algorithm!

    What I understand by an "algorithm" is a set of rules or instructions for performing some task or other. For example: "beat the eggs, add the flour, put it into the oven."

    My message #461 contains a couple of deductively valid logical arguments, but no algorithms. The arguments were meant to clarify the logical relations between hypotheses (H) and observations (O) -- i.e. what implies what -- rather than describing any methodology -- i.e. what to do. My overall point being that there should be less methodological rigidity in science, and more logical rigour.

    But I accept the arguments introduce some loose constraints on what can reasonably be done in science. Analogously, Pythagoras's theorem tells us how the lengths of triangle-sides are related, but it doesn't consist of a set of rules or instructions for drawing triangles. At most, it can work as a constraint on that sort of activity. For example, if someone started to divide up his garden into triangles which he hoped were right-angled, say, by measuring 3 on one side, 4 on the next side, then (wrongly) 6 on the last side, we could interrupt him by saying "stop -- that third side has to have length 5 if you want a right-angle on the corner opposite!"

    That was really all my message #461 was trying to achieve.

    Thanks for the link to the Feynman clip. I probably have seen it before somewhere, but it was good to see it again as I'm very fond of him.

    By the way, I'm a bit surprised you say he describes "another" method, as what he says is pretty much exactly what I've been saying. His "method" is hardly a rigid algorithm either, as he stresses the essential element of guessing. He also stresses the logical significance of failing a test.


  • 519. At 1:41pm on 23 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #518

    "I would strenuously deny that my message #461 contains an algorithm!"

    I checked for a dictionary definition of "algorithm". The basic definition seems to be "mathematical / computational / logical procedure for solving a problem or group of related problems".

    I am not sure whether this covers your extremely incomplete recipe (for cake? for soufflé?). Recipes don't automatically involve problem solving or decisions at the level at which they are written unless Heston Blumenthal has something to do with it.

    However I think the scientific method does, particularly in the format that you present it in #461. It involves logic based decisions with the ultimate aim of problem solving. Emphasise the logic based decisions and you emphasise the algorithmic nature of the scientific method.

    If you object on the grounds that you think algorithms can be carried out by a comparatively mindless operator, then I point out that that particular restriction only applies to relatively low level algorithms. The scientific method is definitely high level.

    "By the way, I'm a bit surprised you say he describes "another" method"

    Your eyes are playing tricks. Another description of the scientific method.

    "link to the Feynman clip"

    More Feynman clips here. (Christopher J Sykes is a TV producer.)


  • 520. At 4:26pm on 23 Mar 2010, bowmanthebard wrote:

    "Emphasise the logic based decisions and you emphasise the algorithmic nature of the scientific method."

    I couldn't disagree more! This may be the core of our disagreement and one of the main differences between inductivism and the alternative as described by Feynman (and myself).

    Inductivists assume that implication is basically a one-way street, and that scientific reasoning essentially follows this pattern:

    If A then B
    A
    Therefore B

    Inductivists assume that A in the argument form above consists of a series of observations O1, O2, O3, whose conjunction implies the theory T like this:

    If O1 & O2 & O3... then T
    O1 & O2 & O3...
    Therefore T

    Perhaps that looks like an algorithm to some. It isn't one, but I can see how it might be mistaken for one.

    The alternative, "hypothetico-deductive" pattern as sketched by Feynman assumes that implication is a two-way street, with the most important "traffic" going the "wrong way" up that "street" whenever the world answers "no":

    If H then O
    not-O
    Therefore not-H

    As he put it, when a test gives an unfavourable result the theory is wrong. (It isn't quite as simple as that, but we can assume that simplicity for the sake of clarity.)

    The inductivist may (mis)understand this argument as embodying an algorithm, but the most important argument form for the non-inductivist is nothing like an algorithm. It expresses purely logical relations between hypotheses and observations, and doesn't consist of rules telling anyone what to do. It simply expresses the fact that if a hypothesis implies something which happens to be false, then the hypothesis must be false as well.
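    (The argument forms in play can even be checked mechanically. The sketch below is purely illustrative: it brute-forces the truth values to show that modus ponens and modus tollens are valid, while "affirming the consequent", the tempting inductive direction from O back to H, is not.)

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Modus ponens: If A then B; A; therefore B.
mp_valid = all(b for a, b in product([True, False], repeat=2)
               if implies(a, b) and a)

# Modus tollens: If H then O; not-O; therefore not-H.
mt_valid = all(not h for h, o in product([True, False], repeat=2)
               if implies(h, o) and not o)

# Affirming the consequent: If H then O; O; therefore H.
# Invalid: this is the pattern the inductivist would need.
ac_valid = all(h for h, o in product([True, False], repeat=2)
               if implies(h, o) and o)

print(mp_valid, mt_valid, ac_valid)  # True True False
```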


  • 521. At 10:26am on 24 Mar 2010, Dave_oxon wrote:

    @ Jane & Bowman

    Just wanted to direct you, in case you've never found it before, to do a search for "project tuva feynman lectures" using the search engine of your choice. (I don't want to link directly to the site in case it contravenes house rules).

    This should direct you to the Microsoft research site where you can watch, in full, the great man himself giving 6 of his famed lectures.


  • 522. At 6:34pm on 24 Mar 2010, bowmanthebard wrote:

    Great link! -- Thanks!

    The issues raised in Feynman's The Character of Physical Law lectures are right at the heart of our disagreement.

    If I may presume to add to Feynman's own insights: by 'law' we can mean the real regularity in nature itself, or the expression in language that we humans construct to describe that regularity. Only when we are in possession of the latter (thanks to Newton, Kepler, et al) can we hope to predict nature reliably. For that, we need to use the "method" of "guessing and testing" rather than modelling.

    A model can be useful if we are already in possession of the (human, linguistic) laws, because it can help us calculate what the consequences of the laws are, how they work together, and so on. But a model will not help us to discover such laws, nor will a model act as a substitute for them.


  • 523. At 2:46pm on 25 Mar 2010, bowmanthebard wrote:

    If I had to sum up my position in a single sentence, I'd say this:

    If we are to predict anything we need laws, and laws are simple.

    The following words are taken from the end of Feynman's first Messenger lecture On the Character of Physical Law:

    "But the most impressive fact is that gravity is simple: it is simple to state the principle completely, and have not left any vagueness for anybody to change the ideas about it. It's simple and therefore it's beautiful.

    It's simple in its pattern. I don’t mean it's simple in its action: the motions of the various planets and the perturbations of one on another can be quite complicated to work out; or to follow how all those stars in the globular cluster move -- is quite beyond our ability. It's complicated in its actions, but not in the basic pattern or the system underneath the whole thing, as that's a simple thing.

    That's common to all our laws: they all turn out to be simple things but quite complex in their actual actions."


  • 524. At 6:38pm on 25 Mar 2010, JaneBasingstoke wrote:

    @Dave_oxon #521

    Ta for that.


  • 525. At 6:39pm on 25 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #520

    "Algorithms" again.


    You appear to be conflating the semantics of algorithms with your concerns about inductivism.

    Algorithms can help illustrate different approaches to the scientific method. Whether or not the scientific method can be presented algorithmically does not affect whether or not there is a problem with inductivism in science.

    And I repeat. High level algorithms are not mindless. Perhaps I should also point out that algorithms don't stop being algorithms if there is a bug in them.


    What is your problem with me describing your presentation of the scientific method as an algorithm? How is it not a procedure based on logic? How is it not problem solving?


  • 526. At 07:43am on 26 Mar 2010, bowmanthebard wrote:

    #525 JaneBasingstoke wrote:

    "You appear to be conflating the semantics of algorithms with your concerns about inductivism."

    The semantics of the word 'algorithm' don't interest me much -- I just want to be clear on what we're talking about. An algorithm is an ordered set of instructions, each step of which is to be performed in sequence by a human or a machine, like a cooking recipe or a program.

    An argument (as used in logic) doesn't do that at all. It does usually contain numbered premises and statements, but there the resemblance ends as these are not steps to be performed. They are claims which are true or false, and their truth or falsity can be ascertained by the form of the argument.
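The distinction being drawn here can be sketched in a few lines of Python (an illustrative sketch; the recipe steps and function names are invented for the example, not taken from the thread):

```python
# An algorithm: an ordered set of instructions, each performed in turn,
# like a cooking recipe. The steps themselves are hypothetical.
def make_tea():
    steps = ["boil water", "add tea leaves", "steep and pour"]
    return steps  # each step is something to DO, in this order

# An argument: numbered statements that are true or false, not steps to
# perform. Modus tollens ("If H then O; not O; therefore not H") is valid
# by its form alone: in every assignment of truth values where both
# premises hold, the conclusion holds too.
def modus_tollens_is_valid():
    for H in (True, False):
        for O in (True, False):
            premise1 = (not H) or O   # "If H then O"
            premise2 = not O          # "not O"
            conclusion = not H        # "therefore not H"
            if premise1 and premise2 and not conclusion:
                return False          # a counterexample would refute the form
    return True

print(make_tea())
print(modus_tollens_is_valid())  # True: validity ascertained by form alone
```

The recipe is executed; the argument is merely checked, which is the contrast between performing steps and ascertaining truth by form.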

    "Algorithms can help illustrate different approaches to the scientific method."

    The only methods that can be presented algorithmically are those that follow some algorithm or other. That rules out any that don't, which begs the question against those that don't.

    "Whether or not the scientific method can be presented algorithmically does not affect whether or not there is a problem with inductivism in science."

    The assumption that science uses induction as its basic form of reasoning is intimately connected with the assumption that it follows an algorithm. Both ignore the creative aspects of science, which include guesswork, the construction of experiments for testing guesses, and problem-solving, which nearly always involves "lateral thinking".

    "And I repeat. High level algorithms are not mindless."

    I'm not saying algorithms are completely mindless, but following them involves no creativity. Making them up can be tricky!

    "How is it not a procedure based on logic?"

    If by "based on" logic you mean "constrained by" logic, fine. But this constraint is not the step-by-step instruction of an algorithm. Analogously, music is constrained by the rules of harmony, but there are no algorithms for writing music.

  • 527. At 1:12pm on 26 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #526

    "Algorithms" again.

    We seem to be looking at subtly different definitions of algorithm. Here's one from the Oxford English Dictionary:

    "noun a process or set of rules used in calculations or other problem-solving operations"

    "An argument (as used in logic) doesn't do that at all."

    I didn't say your argument was an algorithm. I said the subject of your argument was an algorithm.

    This is not a pipe.

  • 528. At 2:33pm on 26 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #526
    (@myself #527)

    Second half of my #527 has a colon-in-link problem. Here it is with the colon fixed.

    "An argument (as used in logic) doesn't do that at all."

    I didn't say your argument was an algorithm. I said the subject of your argument was an algorithm.

    This is not a pipe.

  • 529. At 2:58pm on 26 Mar 2010, bowmanthebard wrote:

    #527 JaneBasingstoke wrote:

    "I said the subject of your argument was an algorithm."

    Which argument did you have in mind? I can't see this in any of the arguments I wrote out in this blog.

    There is more than one kind of conditional claim (such as 'If H then O'), but most are understood as expressing a (more or less complicated) fact about the world rather than telling people what to do, if that is what you had in mind.
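A conditional claim read this way can be sketched as a check against data rather than an instruction to follow (an illustrative sketch; the observations are invented for the example):

```python
# "If H then O" read as a fact about the world: it is refuted by any
# observed case where H holds but O fails. The data here is hypothetical.
observations = [
    {"H": True,  "O": True},
    {"H": False, "O": True},
    {"H": True,  "O": True},
]

def conditional_holds(obs):
    # The claim survives unless some observation has H without O.
    return all(o["O"] for o in obs if o["H"])

print(conditional_holds(observations))  # True: no counterexample in this data
```

Nothing in the claim tells anyone what to do; it simply stands or falls with the observations, which is the sense in which it expresses a fact rather than a procedure.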

  • 530. At 4:19pm on 26 Mar 2010, JaneBasingstoke wrote:

    @bowmanthebard #529

    Er, I thought that your #461 was an argument about the scientific method. I thought your #520 was also an argument about the scientific method. I thought your "If H then" statements were examples of how you thought the scientific method should or should not work (according to content).

  • 531. At 5:27pm on 26 Mar 2010, bowmanthebard wrote:

    #530 JaneBasingstoke wrote:

    "Er, I thought that your #461 was an argument about the scientific method."

    Yeah, but what makes you think I'm proposing an algorithm? Analogously, one can criticize works of art created by "painting by numbers" without proposing another version of the same thing -- "painting by letters", or whatever. Algorithms are for computers and beginner cooks, not for scientists or artists.

    I think the methods of inductivism stink, partly because they rigidly follow an algorithm. But more importantly, I think the logic of inductivism is hopelessly confused. The logical arguments I have presented in this blog are intended to show why.

    In my experience, most genuine scientists recognize the pattern I sketched and agree that that's how the logic of scientific discovery works.

    The main point is that an argument is not an algorithm, even though both may consist of numbered sentences.

  • 532. At 07:59am on 27 Mar 2010, bowmanthebard wrote:

    To carry the analogy still further, good cooking makes for good food. A person judging the goodness of food is guided by the food's taste more than by the procedure that was followed to create the finished dish.

    A food judge might advise a cook not to follow a recipe slavishly but instead to "be adventurous, and make sure you taste it from time to time". That is a bit of methodological advice, I accept, but it isn't an ordered set of rules to be rigidly followed like an explicit recipe. It's almost an "anti-recipe".

    One might give similar methodological advice to scientists: "be imaginative, and make sure you test it from time to time". The testing is important if you want to end up with something that is true rather than something that merely fits prior data. But this advice doesn't say anything about how to test it, or how to be imaginative. It's almost an "anti-algorithm".

    (Yeah, it's Masterchef season!)
