
Should We Trust Computers? [Oct 20, 2015]

Computers and software have transformed the world in 67 years and the pace of change is still accelerating. The achievements have been extraordinary: we have the Web, Google and GPS – but we also have viruses, spam and cybercrime. What can we learn from past triumphs and disasters to help us decide about Big Data, driverless cars, artificial intelligence and life in silico? Might the future be built on sand, metaphorically as well as literally?

Lecture Date: 20 October 2015 – 6:00pm at The Museum of London


DOWNLOADS:

Lecture Transcript (Word file)
Lecture Slides (PPT)

12 Responses

  1. Mark

    I am what you might call a “craft” programmer. In a future lecture, I would like to hear more about formal methods of software evaluation. Your thoughts on how these methods can be used by programmers like me would also be of interest.

    Thanks,

  2. Harold Fineberg

    I am an ex-IBM programmer and recall the paradoxical joke 'How many undiscovered bugs are there in your program?'. The statistics that you quote of faults per KLOC are indeed horrific, but they seem to be at odds with my everyday experience of using many software-controlled devices. My phone has never failed me, nor has my Android tablet. My Windows laptop, containing many millions of lines of code, is impressively stable. I am unaware of any failure of the electricity supply caused by a software fault. I have made many airline and hotel bookings via the internet.
    How do you explain the apparent discrepancy between the statistics you quote and these quotidian experiences?

    1. Tony Hoare (the Turing Award winner C. A. R. Hoare) wrote a paper called "How Did Software Get So Reliable Without Proof?". It's a good question.

      The answer seems to be this. Every bug has a "size", which can be defined as the percentage of all possible uses of the software that fail as a result of that bug. Testing is likely to find the biggest bugs first, so if you test and fix until you are happy with the failure rate that remains, then you will have removed the largest bugs. But that means the bugs that remain are increasingly small (and obscure) and only affect people who do something unusual. There will still be very many such bugs remaining, and a lot of these will be exploitable as security vulnerabilities.
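
      To make the arithmetic concrete, here is a minimal numerical sketch in Python; all the bug counts and sizes are invented for illustration, not taken from any measured system.

          # A program with a few "big" bugs and very many "small" ones.
          big_bugs = 10 * [1e-2]        # each triggered by 1% of all uses
          small_bugs = 10_000 * [1e-6]  # each triggered by one use in a million
          bugs = sorted(big_bugs + small_bugs, reverse=True)

          print(f"failures per use before fixing: {sum(bugs):.3f}")  # ~0.110

          # Testing exercises the common cases, so it exposes the big bugs
          # first. Fixing just the ten biggest removes over 90% of failures...
          for _ in range(10):
              bugs.pop(0)

          print(f"failures per use after fixing:  {sum(bugs):.3f}")  # ~0.010
          print(f"bugs still present: {len(bugs)}")                  # 10000
          # ...but ten thousand obscure bugs remain, each one a potential
          # security vulnerability for an attacker probing unusual inputs.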

      As an ex-IBM programmer, you may recall a paper that studied the customer fault logs from many IBM MVT mainframes in the 1980s. It showed that a typical fault in the MVT operating system caused a failure only rarely (from memory, about once every 100,000 hours of user connect time), yet most users encountered so many failures that MVT was considered unreliable. That's the same phenomenon.

      The problem now is that the security environment has become very much more hostile, so these obscure, small bugs are a serious problem.

  3. Andras

    Your observations on the importance and safety aspects of cyberspace and computing fully justify your wish to introduce safety-engineering rigour into programming. I have limited programming experience, many years out of date, but I also had experience in safety assessment, likewise many years out of date, having retired in 1992.

    Redundancy, diversity and segregation were the three key words in safety while I was employed in safety assessment. I see little evidence of their application to software. For example: (i) the same program could be run twice and the outputs compared; (ii) the outputs of different programs written for the same purpose could be compared; (iii) the same work could be run on computers in different locations and the outputs compared. This rigour is for activities where the hazards and risks are high; a sketch of (ii) follows below.
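
    As an illustration of (ii), a 2-out-of-3 comparison of diverse implementations might be sketched like this in Python (the implementations and the deliberate fault are invented for the example):

        from collections import Counter

        def sum_loop(xs):       # diverse implementation 1
            total = 0
            for x in xs:
                total += x
            return total

        def sum_builtin(xs):    # diverse implementation 2
            return sum(xs)

        def sum_faulty(xs):     # diverse implementation 3, with a design fault
            return sum(xs[1:])  # accidentally skips the first element

        def vote(outputs):
            # 2-out-of-3 output comparison: emit the majority answer;
            # if no two replicas agree, fail safe and emit nothing.
            value, count = Counter(outputs).most_common(1)[0]
            if count < 2:
                raise RuntimeError("replicas disagree: refusing to emit output")
            return value

        data = [3, 1, 4, 1, 5]
        print(vote([sum_loop(data), sum_builtin(data), sum_faulty(data)]))
        # -> 14: the two correct implementations outvote the faulty one.
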
    Programs are written for use by people who do not understand their workings at all, and they aim to provide an output, even a wrong one, rather than produce nothing. The outputs are not actually checked by any human and are posted automatically by machines; often they are gibberish.
    I sense that modifications and notes are added to programs fairly liberally. Should this be more restricted, so that programs can be scanned against a master copy, without functional tests, at least initially?

    For an engineering approach, programs should be built from several parts, each small enough to be fully understood by a competent programmer and fully tested for all possible combinations of inputs to that part; inputs to each part should be restricted to the verified range (see the sketch below). The programmer should be trained to look for errors, as well as in other aspects of programming.
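
    A minimal sketch of such a part in Python: the input domain is kept deliberately small, out-of-range inputs are rejected, and every possible combination of inputs can be tested exhaustively (the function and its range are invented for illustration).

        import itertools

        def saturating_add_u8(a: int, b: int) -> int:
            # One small, fully specified part: add two byte values,
            # clamping at 255 instead of wrapping around.
            if not (0 <= a <= 255 and 0 <= b <= 255):
                raise ValueError("input outside the verified range 0..255")
            return min(a + b, 255)

        # The domain is small enough to test all 65,536 input combinations.
        for a, b in itertools.product(range(256), repeat=2):
            assert saturating_add_u8(a, b) == min(a + b, 255)
        print("all input combinations verified")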

    Programmers should only be allowed to program in subjects which they fully understand, without any computer assistance to speed up the process of doing the work. E.g. for medical applications they should have high-level medical or nursing qualifications.

    I hope this is relevant.

    1. I agree that redundancy, diversity and segregation are important architectural principles (I would add low coupling as another). But diversity has its limits (see the Knight and Leveson paper, for example) and testing will never provide strong evidence for safety (nor for much else).

      I agree that a programming team needs access to deep domain knowledge but I wouldn’t want to insist that the programmer of medical applications should have a medical qualification. I have seen too many medics who think they can write safe software without any qualification in software engineering.

  4. Jaafar Almusaad

    Thank you so much for the brilliant lecture. It was truly inspiring.

    I would argue that a substantial amount of risk is due to the slow adoption of IPv6. As you are aware, the widely used IPv4 wasn't developed with security in mind (and, as you said, it was developed to be resilient, but not to be secure).

    Unfortunately, this critical issue isn’t being adequately discussed among IT professionals.

    From a High Performance Computing perspective, I have observed that the vast majority of existing software fails to make optimal use of computing resources such as multicore/multithreaded CPUs (and co-processors), despite the potential to do so. To make matters worse, many applications appear to scale well (i.e. they keep all cores/threads busy) but the actual speedup is not even close to linear. I've examined a few applications using Intel Parallel Studio XE and observed that a significant portion of CPU time is spent in synchronization. This, in my opinion, is the result of poor software engineering; the sketch below shows how severely synchronization limits speedup.
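
    As a rough way to quantify that observation, Amdahl's law can be applied by treating the time spent in synchronization as an effectively serial fraction of the work. A small Python sketch (the 20% figure is assumed for illustration, not a measurement):

        def amdahl_speedup(serial_fraction: float, cores: int) -> float:
            # Amdahl's law: the speedup on `cores` cores when
            # `serial_fraction` of the work cannot be parallelised.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        # If profiling shows 20% of CPU time in synchronization, every core
        # can look busy, yet the speedup stays far from linear:
        for cores in (2, 4, 8, 16, 64):
            print(f"{cores:>2} cores -> {amdahl_speedup(0.20, cores):.2f}x")
        #  2 cores -> 1.67x, 16 cores -> 4.00x; no core count can exceed 5x.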

    I totally agree with you that software engineering is still far from maturity. I would argue that the introduction of prototyping tools (such as Microsoft Visual Basic and .NET) has made software development a lot easier (and cheaper), but has also contributed to isolating programmers from the underlying architecture. Much commercial software nowadays is written using these prototyping tools!

    1. I agree with a lot of what you say. It's inevitable that the software for optimising the use of new architectures lags the development of better hardware, and that the take-up of better software (including IPv6) is also slower than would be ideal.

      Your point about prototyping is a good one. The agile approach to software development encourages a process of repeated prototyping, and developers are then under pressure to deliver the final prototype rather than re-engineer it to make it highly dependable.

      Many agile techniques work well with formal methods, but it is essential to know that your early architectural decisions will support a system that has the critical properties that you need – otherwise it will be very costly to correct the early decisions and there will be pressure to compromise on important issues.

      Formal specifications are very cost-effective, unless you are just developing a limited variant on a system that you have built several times before successfully – for example, a simple website such as this one.

    2. Jaafar Almusaad

      Is there a specific formal method you can think of that addresses High Performance Computing challenges from a software engineering perspective?
      Thanks

  5. Thanks to everyone for coming to the first #cyberliving lecture last night. You were a great audience, and I enjoyed talking to a few of you after the lecture.

    I’m sorry I didn’t leave time for questions: please ask questions or discuss any of the issues that I raised by using these reply boxes.

    The video should be available in a few days, after editing, so I hope the internet audience will join in here too.

  6. This lecture raises a number of issues that will recur throughout the series; it therefore acts in part as an introduction to the Cyber Living series as a whole. I look forward to learning from you, the audience, which topics you would like to have covered in more detail later, and what issues you would like me to include next year.
