Evaluating coded solutions

Evaluation is the process of testing whether a software solution fulfils the user's requirements. An evaluation should also suggest improvements that would make the coded solution better for the user. Typical evaluation criteria for coded solutions (using our calculator example) include:

End user needs

What does the end user of the calculator program need to be able to do with the software? On what platform will they use the calculator, and how accurate do the answers have to be? For example, consider the difference between a calculator used for GCSE Maths and an embedded calculator for precision-guided intercontinental ballistic missiles.


Functionality

Does the software perform the functions required? Does it have specific facilities? For example, a calculator may have a conversion function for decimal to binary conversions.
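The decimal-to-binary conversion mentioned above could be sketched as a simple function using repeated division by 2. The function name `to_binary` is illustrative, not taken from any particular calculator program:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary representation."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # prepend the remainder from division by 2
        n //= 2
    return bits

print(to_binary(13))  # 1101
```

An evaluation might test this facility by comparing its results against known conversions (13 in denary is 1101 in binary).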


Performance

How well does the calculator work? Do all functions return accurate results? Are there any limits to the data it can process? Will it handle extremely large/small numbers? This could be tested by benchmarking against existing calculator software.

Ease of use

How easy is the calculator to use? Is there built-in help? Is an appropriate user interface provided?


Support

Is there support available for the calculator, either inline (e.g. press F1 for help) or via online support for the product?

Compatibility with existing software/hardware

Is the calculator written to run on a specific operating system? New software needs to be compatible with the existing operating system and hardware (computers/peripheral devices).


Robustness

How does the calculator handle problems? Robust software works well in combination with different hardware and software without crashing.
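Robustness also means handling bad input without crashing. A minimal sketch of a defensive division routine (the name `safe_divide` is illustrative):

```python
def safe_divide(a: float, b: float):
    """Divide a by b, returning None instead of crashing on division by zero."""
    try:
        return a / b
    except ZeroDivisionError:
        return None  # signal the problem to the caller rather than crashing

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```

An evaluation would check that invalid inputs like these produce a helpful response rather than a crash.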


Cost

The cost of the software has to be weighed against the benefits it will bring. These benefits may include making more money, working more efficiently, or needing fewer staff hours.
