
A Very Brief History of Computing 1948-2015 [Jan 12, 2016]

Overview

The world’s first modern computer, in Manchester in 1948, was followed remarkably swiftly by the first business software, but by 1968 software was in crisis and NATO called a conference. The problems were diagnosed, solutions were proposed – and largely ignored. A second Software Crisis was announced in the early 1980s and again the effective solutions were considered impractical and the practical solutions were largely ineffective. Meanwhile, as Moore’s Law predicted, hardware costs continued to fall exponentially, making software systems ubiquitous and leading to a third software crisis, this time of cybersecurity.

Lecture Date: Tuesday, 12 January 2016 – 6:00pm at The Museum of London, 150 London Wall, London, EC2Y 5HN

 

DOWNLOADS:

Lecture Transcript (Word file)
Lecture Slides (PPT)

8 Responses

  1. Nils Eivind Bjørnerud

    Dear Professor Thomas,

    You referred several times to the transcripts and I’d be delighted to dive into the further details and references there, but I can’t seem to locate them – are they on their way?

    Thank you for your enlightening lectures!

    Regards

  2. Ashish Padman

    Thank you for this excellent lecture. Having been part of the (enterprise) software industry as a developer/consultant for a little over a decade, in my experience most projects in the past few years have heeded the war cry of “Agile!”. Although it has been a very positive experience overall compared with the earlier “Waterfall” method, one point stressed in the Agile Manifesto – “Working software over comprehensive documentation” – results in minimal or even no formal specifications on many projects, apart from specific user stories written during development. You’ve mentioned in the lecture that agile doesn’t work with novel/complex systems or where safety/security is essential. Can you expand a bit on that, and on your views on agile preferring “Working software over comprehensive documentation”?

    1. There are a lot of good ideas in the agile approach but, unfortunately, some groups who claim to be using “Agile” actually use it as an excuse to be amateurish.

      Of course it’s better to have working software with poor documentation than to have useless software with perfect documentation – but neither of these is acceptable, and how could you develop any important software, with good evidence that it works, if you start with a minimal specification and don’t document what you have done? We know that “User Stories” leave out an enormous amount of detail about what the system should do (especially what it should do under error conditions). We also know that testing provides weak evidence (at best) that software works (see my first lecture) and that many projects run out of control and end up late, over budget, and under-delivering the required functionality. I’ll address some of the issues in my next lecture, “How Can Software be so Hard”.

      Even Kent Beck (of Extreme Programming – XP – fame) has said that XP isn’t suitable for developing critical software where you need enough evidence to support certification by a third party (avionics and other safety-critical software, for example). Some developers have shown how you can combine formal specification, implementation in SPARK, and “pair programming” where one half of the pair is the SPARK static analysis Examiner. They still achieve nightly builds and a 15-minute turnaround on proving that a change has not broken the system (and yes – I said PROVING, not testing).

      I’ll show how this works in lectures on 10 January 2017 and 2 May 2017.

      Next time someone boasts about their system, developed using Agile methods, ask to see their evidence that the system isn’t vulnerable to cyberattacks – or even to failing with a null pointer dereference, a memory corruption or an overflow. Then try not to laugh.
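
      To make that concrete, here is a purely illustrative sketch of my own (in C, and not taken from any real system); the function is hypothetical, but it shows the kind of defect that a handful of happy-path tests will never reveal and that static analysis or proof would catch:

        #include <limits.h>
        #include <stdio.h>

        /* Hypothetical helper: average of two sensor readings.
           It passes simple tests with small inputs, but a + b can
           exceed INT_MAX, and signed overflow in C is undefined
           behaviour - exactly the class of fault mentioned above. */
        int average(int a, int b)
        {
            return (a + b) / 2;
        }

        int main(void)
        {
            printf("%d\n", average(3, 5));              /* 4 - the test passes  */
            printf("%d\n", average(INT_MAX, INT_MAX));  /* undefined behaviour  */
            return 0;
        }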

  3. Dr Richard Mellish

    Your path and mine crossed some years ago in “ICSE”, DTI’s Interdepartmental Committee on Software Engineering. The aim of doing something on this subject across Government seemed like a good idea, but my impression is that that forum never achieved much. Why do you think that was? At that time I worked for a part of the Department of Health concerned with safety of medical devices including those controlled by software and I participated in the development of IEC 60601-1-4, which was our best attempt at international consensus for that subject. How successful do you think we were?

    1. ICSE certainly counts as Computing History! That must have been 25 years ago.

      In my opinion, the DoH still has not properly understood the level of avoidable death and injury that is caused each year by badly designed medical equipment. Too often, the equipment makes it difficult for the operator (usually a nurse) to set it correctly and hard to tell what the settings are. When a patient is injured or killed as a result, the nurse is blamed – with terrible consequences for the nurse as well as for the patient. Professor Harold Thimbleby of Swansea has lectured on this at Gresham College and elsewhere.

      It appears that the standards and certification of computer-based medical equipment are not yet fit for purpose. I would like to find a way to do something about this, to reduce the number of avoidable deaths and injuries.

  4. Many thanks for the brilliant lecture. I remember reading about a debate whether Software Engineering should really be classified as Engineering, or something else!

    Regrettably, I would say that Software Engineering is a far cry from other Engineering disciplines in terms of producing quality products.

    I agree with your point that the falling hardware price has led to the emergence of new markets, in which quality software is not a priority. In fact, pretty much the same issue can be seen in photography. Cheap cameras have encouraged many people to become photographers without the need to learn the basics of photography. Sadly, this development has kicked many professional photographers out of business.

    Regarding the use of “formal methods”, I totally agree that formal methods such as VDM or Z are far better than plain English (or any other natural language). However, it has to be stressed that formal methods cannot guarantee accurate requirements elicitation, analysis, etc. That is to say, formal methods need to be complemented by other “proper” tools to develop quality software.

    I look forward to your next lecture.

    1. Thanks. I’m glad you found the lecture interesting.

      Obviously, there are two tasks in writing software: the first is to determine exactly what you want to achieve; the second is to achieve it successfully.

      Formal methods help both tasks. By writing a formal specification, you are compelled to be specific about many things where you might otherwise have been vague, and you are able to perform a lot of checks to find contradictions and omissions that might otherwise have caused serious problems later.

      Then, once you have a formal specification, you can use formal development methods that enable you to prove mathematically that the software you have written implements the formal specification exactly (and you can perform other proofs, for example to prove that there can be no integer overflows or array bound violations).
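
      To give a feel for what such a proof obligation looks like, here is a rough sketch of my own in C (not VDM, Z or SPARK notation; the names and bounds are hypothetical). The commented preconditions state what a verification tool would be asked to prove holds on every execution path; the assertions are merely their run-time shadow:

        #include <assert.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical example: sum an array of sensor readings.
           requires: readings is not NULL
           requires: count <= 1 000 000, so the running total always
                     fits in a 64-bit long long (no overflow possible) */
        long long sum_readings(const int *readings, size_t count)
        {
            assert(readings != NULL);
            assert(count <= 1000000u);

            long long total = 0;
            for (size_t i = 0; i < count; i++) {
                total += readings[i];  /* safe, given the precondition */
            }
            return total;
        }

        int main(void)
        {
            int readings[] = { 3, 5, 7 };
            printf("%lld\n", sum_readings(readings, 3));  /* prints 15 */
            return 0;
        }

      In a genuine formal development the prover discharges those obligations once, for every possible input, rather than checking them at run time.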

      But I agree that you could still develop a formal specification for the wrong problem and then develop perfect software for that wrong problem. You might consider Michael Jackson’s Problem Frames as a way to reduce that risk.
