
Stephen Fry's In the Beginning was the Nerd

Post categories: Nerds

Nick Baker 09:00, Tuesday, 29 September 2009

The Western world, with a few notable exceptions, poured billions of dollars into electronic pesticides to defeat the Y2K bug, only to find that for the most part it could have been defeated by turning the systems off and then on again. Shades of the hit C4 comedy The IT Crowd. Yet that really is the solution put forward in Stephen Fry's Archive on Four next Saturday by Ross Anderson, Professor of Security Engineering at Cambridge University and a world authority. Here - exclusive to the blog - is the full interview Stephen conducted with Ross on the crisis that fizzled out and the prospects of a real digital Armageddon in the future:


So, why the silence when the bug didn't bite? The answer's in the programme. Politicians, experts and businessmen all profited in status or cash from the threat. In the media - to paraphrase the crime reporters - it bled so it led. In the USA, the government brazenly claimed victory for its defeat. In reality, the enemy was almost totally imaginary. But it's useless blaming the great and the good. It was inevitable. We'd been told repeatedly that this brilliant new technology would change the world. Then we were told it could all stop on the stroke of one spookily special midnight. We were the newly addicted, suddenly faced with the prospect that our supply was fatally endangered. There was only one thing we could do. Panic. Then spend millions fixing it. Sorry, that's two things.

Nick Baker is Producer of In the Beginning was the Nerd.

Comments

  • Comment number 1.

    I understand your idea well. I will be back to read more interesting topics like the one in this post.

  • Comment number 2.

    The 'problem' wasn't the Y2K bug, as we now know, but the fact that those dealing with the issue didn't know if there was a problem (at an individual machine/computer level) or what the problems would be if there was a problem. Yes, perhaps the Y2K bug was hyped but the "I told you that nothing would happen" response has been well and truly over-hyped, but heck it makes good radio - hindsight is a wonderful thing...

  • Comment number 3.

    This comment was removed because the moderators found it broke the house rules.

  • Comment number 4.

    As someone who spent three years, on and off, fixing the computer systems of a major British bank to MAKE SURE that it didn't fail after the Y2K switchover, I resent the suggestion that the work was unnecessary. The bug failed to have any major impact because people like me spent a lot of time fixing it. I can assure you that major banking systems, notably those which calculated interest payments, would have failed catastrophically if nothing had been done. We first encountered the problem when a charity with a 125-year peppercorn mortgage slipped into its 100th year of payment around 1996. That caused total loans-system failure for a whole week and gave us a heads-up about just how serious Y2K would be if we didn't hunker down and fix it. To have several years of work dismissed as "unnecessary" because WE FIXED THE PROBLEM, and therefore the problem was never encountered by the common journalist, will encourage a dreadful culture of neglect. Stephen Fry should be ashamed to have his name attached to any suggestion that Y2K bugfix work was unnecessary.

  • Comment number 5.

    aoakley is right. I too worked on Y2K projects and can testify to the fact that major chaos would have occurred if time and money had not been invested to avert the problem. Maybe not planes falling from the skies and nuclear missiles being launched in error, but certainly substantial disruption to the economy.

    This whole issue is an example of mankind's tendency to swing from one extreme to the other. The problem was never as severe as some people made out (the end of civilisation etc.), but now, after the event, having successfully avoided 99.9% of the trouble that could have occurred through concerted effort and ingenuity, we're inclined to believe it was all a conspiracy and the problem never existed in the first place.

    The trouble with this is that the next time there is a pending crisis (global warming? species extinction?) people will pooh-pooh it as just another Y2K. That could be very dangerous.

    It's a kind of variation on the crying wolf story, only in this instance, the wolf did exist, but was successfully repulsed. Should we therefore ignore the next wolf?

  • Comment number 6.

    I was going around in 1999 talking about the Y2K issue and yes, I did say "don't worry too much" - my point being that if a computing device has no way to set the time/date, then it will have no idea whether it has changed to 2000, thus no problem.

    I was often faced with so-called experts and people from other vendors who seemed to use this as an excuse to upgrade people to the latest version of code by saying "we can't test all our old versions" - cynically building up the problem and business.

    In particular, I spoke in front of many health professionals who had heard about medical kit that would stop dripping medicines that should be dripped each minute - yet much of this kit does nothing more than count 60 seconds and then drip, with no knowledge of time or date. Sadly they often preferred to "play safe" and upgrade rather than think logically, as being wrong wasn't an option.

  • Comment number 7.

    nhawthorn, with your drip example you just don't know how the control software has been written. To say that all it does is count 60 seconds ignores the fact that in software there is more than one way to skin a cat. The control software for the drips could have come from a more sophisticated system with internal monitoring and data logging that was simply turned off for this product. I can come up with about 10 different ways of triggering after 60 seconds, and a few of them do require an external clock, which would have been potentially vulnerable. Just because you can't set the time and date doesn't mean that the system hasn't got a factory-set internal clock for timing purposes.

    Until we looked, prior to Y2K, nobody knew whether a device would work or not at midnight on Jan 1st 2000. I know some organisations just bit the bullet and bought new equipment that had been tested, but most of this was really just advanced or held-over equipment budget from previous years.

    I and many others at the time found and fixed many bugs relating to Y2K, and it was only hard work that prevented these errors from occurring.

  • Comment number 8.

    I worked for a medium sized company in London at the time. Our accounts dept used a well known payroll software package. Come Jan 1st 2001 it stopped working, and the manufacturer had to send out a patch to fix it. For us the Y2K bug turned out to be real.

    One reason I think many believe it was a wholly false scare is that they thought their MS Office apps would be affected, but these were designed recently enough that MS had thought ahead and taken the millennium into account. It was the big old mainframe (often bespoke) stuff that was mainly affected.

  • Comment number 9.

    This comment was removed because the moderators found it broke the house rules.

  • Comment number 10.

    Yet people STILL treat anyone who expresses healthy scepticism towards Twitter or Facebook as a luddite, or 'sad' [i.e. pitiful], or worse.

    It is worth reading 'Flat Earth News' for a good discussion of the media's role in spreading the Y2K hype - after all, you have to keep those technology advertisers happy...

  • Comment number 11.

    Like a few other people here, I too worked on a number of systems in the late 1990s to ensure that they would be OK on 1st Jan 2000. Also, like some others here, I resent the suggestion that Y2K was a myth. There were plenty of systems that would have failed had a lot of people not worked to avoid this.

    Personally, having worked commercially in software development since the early 1980s, I have never written any software that would fail at the turn of the century.

    However, it is very interesting to note how quickly people forget; am I the only one in IT who has noticed that we do not seem to have learnt the lesson, as the two-digit year seems to be creeping back into IT systems?

  • Comment number 12.

    Aoakley has a very valid point about banks - among all the institutions involved in computing, banks have huge experience and expertise, for obvious reasons. When you hear the programme you'll hear that one of the first radio encounters with a banking computer was on "Have a Go" in the 1960s. Interesting that aoakley was working on bug fixes for three years. We know that banking systems underwent constant upgrades over the thirty years up to 2000, including for Y2K. Even then, as you'll hear in the programme, there was a scare about swipe systems in December 1999.

    However, the multi-billion dollar question is: how much of the Y2K work and expenditure across all systems, private and public, was necessary, and how much was due to over-reaction, for whatever reason? How much was spent? These are questions that were never answered, or even addressed, after the event. Perhaps because millennia are so, well, far apart. We don't need to plan for the next one, although some of the lessons learned in 1999 are in Ross Anderson's interview here. He's the man BBC News interviews when there's talk of a war-like internet attack from another country.

    How big and bad, exactly, was the wolf Petomane refers to, and was the fact that he was going to appear, spookily, at midnight 1999, in the middle of the party, part of our reaction?

    Boilerplated puts it very well: “The 'problem' wasn't the Y2K bug, as we now know, but the fact that those dealing with the issue didn't know if there was a problem… or what the problems would be if there was a problem.”

  • Comment number 13.

    Nick Baker, the programme's producer, left the comment above. Testbed is the name of the independent production company that made the programme for Radio 4.

    Steve Bowbrick, editor, Radio 4 blog

  • Comment number 14.

    #12. At 4:39pm on 01 Oct 2009, testbed wrote:

    "Boilerplated puts it very well:"

    [FX]Blush...[/FX]

  • Comment number 15.

    Looking forward to this! Unlike the knowledgeable folks who had serious responsibilities when rollover was approaching we just altered the RTC setting on our 2 (two!) PCs and rebooted to check that stuff was still running under Win98, a few months before the day. We were not using MS Office, rather Star Office, now available freely as OpenOffice.Org, thank goodness.
    But.... I was at a meeting in Santa Clara some 20 years earlier where the decision to only put 2 digits into the RTC (Real Time Clock) chip used by IBM in the PC was ratified.
    I can recall the mirth when the 2-digit issue was raised. 1999 was two decades in the future, and the chip industry was then only a few years old. We were pretty young, too!
    Product life was brief then as now - as process technologies moved fast and new things became feasible.
    Happy days indeed. Ben


  • Comment number 16.

    lordBeddGelert is right - Flat Earth News by Nick Davies is very interesting on the Y2K affair. In it he says that the US State Department issued advice to would-be travellers to certain Eastern European countries that had recklessly ignored the threat. Russia, says Davies, spent less than a single company - British Airways.

  • Comment number 17.

    Yes, lots of problems were found in the run-up to 2000 AND SOLVED. I know of an insurance company that realised there was a problem and did the work something like fifteen years earlier. Certainly, the scare meant that some big projects were sold when small review projects might have sufficed, but there was a problem. I also know a computer security contractor who said his staff had to go out on New Year's Day 2000 to sort out real, urgent threats.

    Criticism of the government/media-advertised apocalypse is valid, but it's no different from a dozen such insults which go out every week.

  • Comment number 18.

    Y2K/Millennium bug was not a bug, nor was it a fault. Storing years as two digits in databases saved millions in expensive storage. The fault, if any, was assuming that systems using dates or times based on internal clocks would still be running by 2000. Systems were upgraded every 18 months and hardware was dropping in price.

    I started to code in a switch from 1989. When external systems were upgraded I had to change one byte and recompile. In '92 I was laughed at for suggesting that we code in the century - no one thought the programs or data would still be running.

    At one senior level meeting in 1996, at one of the UK's largest companies, someone, furious at the costs, shouted out "What idiot set these deadlines?"

    Many systems were only patched to work around the problem rather than completely solve it, with date/time stamps stored as a 32-bit integer counting from 1970. 32-bit Unix systems, for example, have already started to hit this 19th January 2038 problem.

    Many control systems use chips whose dates stop working some time over the next few years. Maybe they will no longer be in use, maybe no one will be bothered, but many will be confused when it happens to them.
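    The 2038 rollover Subminiature mentions is easy to demonstrate. A minimal Python sketch (the variable names are mine, for illustration) shows the exact moment a signed 32-bit Unix timestamp runs out, and what naive code sees one second later:

```python
import struct
from datetime import datetime, timezone

MAX_32BIT = 2**31 - 1  # largest value a signed 32-bit Unix timestamp can hold

# Convert that count of seconds-since-1970 back to a calendar date:
rollover = datetime.fromtimestamp(MAX_32BIT, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second later a signed 32-bit counter wraps to a large negative
# number, which naive code interprets as a date in December 1901.
wrapped = struct.unpack('<i', struct.pack('<I', MAX_32BIT + 1))[0]
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```

    Like the two-digit year, the 32-bit timestamp was a perfectly sensible economy when it was chosen - the failure mode only matters if the code outlives its expected lifetime.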

  • Comment number 19.

    Was the cost of fixing Y2K too high? I wonder how many changes to systems were undertaken under the budget and banner of investigation into the millennium bug. Getting extra budget for changes was easier if the request was worded around Y2K, and would have been rejected otherwise.

  • Comment number 20.

    Interestingly http://www.guardian.co.uk/technology/blog/2009/oct/05/stephen-fry-y2k the Guardian's tech guy was in short trousers at the time of the bug, and is therefore not able to give an opinion. Ours was a long-trousered view. Subminiature - was the cost too high? Nobody knows how much was spent. Hundreds of millions. But many argue that most of the work had been done already. Just before the weekend I spoke to another expert - with expertise in industry and academia, currently heading up a British university's computer security research unit - and he felt then, and feels now, that there was a huge amount of hype.

  • Comment number 21.

    Subminiature is absolutely correct about the cost of storing 4-digit years, but it wasn't just the cost of storage. When I started commercial programming in 1984, I was limited to file buffers of 512 characters, and this on one of the most popular mini-computers of the time - not to mention the fact the program and data had to fit in 28K maximum. It wasn't until the 90s that people stopped having to think about how they packed data into the limited resources of the time. You could argue that not having to worry has created a lot of the bloated software of today.

    As others said, a lot of work was done so that there was not a problem - there definitely were things which needed changing in code.

  • Comment number 22.

    My favourite Y2K story was the KYJelly 'press release':

    The manufacturers of KY Jelly have announced that their product is now fully Year 2000 compliant. In the light of this they have now renamed it as: 'Y2KY Jelly'.

    Said a spokesman: "The main benefit of this revision to our product is that you can now insert four digits into your date instead of two." :-)

  • Comment number 23.

    Good programme, thanks!
    But there seems to be a misunderstanding that it was just lazy coding that was at the root of the problem - not bothering to store the year number with more than two digits - well it was (of course) more complex than that...

    The MM58167 Real Time Clock chip was specified for the original IBM PC, and it only had two digits in the 'Year' register. This clock and a bunch of discrete logic devices were subsumed in 'Jungle' arrays, now standardised as Northbridge (faster bits) and the slower Southbridge which incorporates a pretty faithful implementation of the MM58167 - all in the name of compatibility with older machines and older software.

    So there was never a full year number to work with - unless some software did it independently in the BIOS firmware or main OS. Not forgetting that some aftermarket programs may well access the hardware directly, or through a low-level system call.

    So even if your computer maker or Operating System provider had done a good job and trapped the Y2K error in software, it was perfectly possible that some custom aftermarket software could bypass these precautions and talk directly to the image of the MM58167, which only had two digits.

    As I have said elsewhere, there was mirth at the late '70s design review meeting in Santa Clara when the design group mentioned that there were 'only' two digits in the year register....

    The whole industry was about a decade old, and it was not expected that any of the chips presented then would still be in use 20 years later, never mind 30!
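    The two-digit register Ben describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual chip interface: the register value and the windowing pivot are my own assumptions. RTC chips of that era reported the year as two binary-coded-decimal digits, so the century had to come from software:

```python
# Hypothetical sketch: a PC real-time clock reports the year as two
# BCD digits only; software must supply the century itself.

def bcd_to_int(b):
    """Convert one byte of binary-coded decimal to an integer."""
    return (b >> 4) * 10 + (b & 0x0F)

rtc_year_reg = 0x99          # what such a chip might report during 1999
yy = bcd_to_int(rtc_year_reg)

# Pre-Y2K code typically assumed 1900 + yy; Y2K-aware firmware had to
# add a century byte or apply a windowing rule like this illustrative one:
full_year = 1900 + yy if yy >= 80 else 2000 + yy
print(full_year)  # 1999
```

    Any aftermarket program that read the register directly, as Ben notes, would bypass whatever century logic the BIOS or OS had added.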

  • Comment number 24.

    Was there a real problem? Absolutely, and I like many previous commentators worked hard to fix it, in most cases successfully. Lots of commercial programs had, for perfectly sensible commercial reasons, been written using two-digit fields to hold year values and would have stopped running without remedial work.

    Was the problem hyped? Absolutely, there were people trying to convince us that all PCs would stop working on 1/1/2000 (often these people were PC salesmen keen to meet their quotas). In fact, by the late 90s, 80% of PCs (those built in the preceding few years) had been upgraded (BIOS updated) to handle the millennium rollover correctly. 80% of the remainder would roll back to 1/1/1900, require manually resetting one single time, and then continue to work correctly. Almost all the rest would continue to work, but require a manual date change after each boot - apart from a tiny number of truly ancient machines that would effectively be scrap after 1/1/2000.

    Despite this, many organisations (large and small) were persuaded to replace all their PCs. As Subminiature says, the technical staff sometimes colluded in the process in order to achieve what they felt to be a beneficial change that would otherwise have been blocked on cost grounds.

  • Comment number 25.

    I was moved to comment but aoakley and petomane have expressed my opinion very well. I too checked many systems and only one of them was flawless.
    Just as an example, take a system that makes appointments to service gas boilers once a year. A system with the bug would say it was always less than a year since the last service. Not obviously the end of the world, but it is easy to envisage serious consequences.
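    The boiler-service example can be sketched as follows. The function names and the windowing pivot are hypothetical, for illustration only, not from any real scheduling system:

```python
# Buggy version: compares two-digit years as plain integers.
def years_since_service(last_yy, now_yy):
    return now_yy - last_yy

# Boiler last serviced in (19)99, checked in (20)00 stored as 00:
print(years_since_service(99, 0))   # -99, i.e. always "less than a year ago"

# A common remediation, "windowing", pivots two-digit years on a cutoff:
def expand_year(yy, pivot=70):
    return 1900 + yy if yy >= pivot else 2000 + yy

print(expand_year(0) - expand_year(99))  # 1 year elapsed, as expected
```
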

  • Comment number 26.

    Hi, Nick Baker here again, the programme producer. PompousWindbag - you are right. "Not obviously the end of the world." And elsewhere in the world where they did nothing to defeat the "threat" very little happened.

  • Comment number 27.

    Some of us remember Ross Anderson's views then slightly differently, e.g.

    "Bright new dawn? Hardly

    Why are computer experts heading for the hills with large supplies of food and candles? On the eve of a utilities conference about Y2K, Emma Haughton finds out

    The Guardian, Thursday 25 February 1999

    [...]
    Ross Anderson, a Cambridge University lecturer in computer security, estimates there's a five per cent chance of the bug causing serious disruption, such as power cuts or airport closures. Anderson believes we'll see the first disruptions around September, when systems that work three months ahead like payroll and stock control start to hit problems.

    'People may well then start to panic and stockpile essentials, but with 'just-in-time' methods of production there's very little in the supply chain and we could quickly see shortages.' The prudent will start stockpiling now, he says. 'Computer scientists who really understand the problem are buying a few extra tins or bags of flour every time they go shopping.' So is he actually doing anything to prepare himself, I ask? 'Well, I'm fortunate in that we live in the country, so we can subsist. I've got a wood-burner and calor gas, a stream and a big vegetable garden. And I'll stock up with three months' supply of food.' Three months? 'Three months' supply is sensible, at the cautious end of things,' he says.
    [...]"

    http://www.guardian.co.uk/lifeandstyle/1999/feb/25/consumerpages4

  • Comment number 28.

    Both sides are right.

    (a) Many systems would have failed if they hadn't been fixed. Some of these systems would have caused major inconvenience or economic loss (and perhaps safety issues) if they had gone down.

    (b) A lot of money was spent investigating, fixing, or replacing systems that weren't critical enough to justify the expense, such as classroom computers in schools.

 
