Project Extracts

Extracts from the report of a project done by Y.P.K. Teoh in 2000-1

Summary

This project attempts to emulate human thought by using artificial intelligence to annotate chess games.

The annotation module comprises two logical phases. The first phase involves intensive computation on the search-tree of chess positions and reveals the short-term, tactical points. The second phase is more of a black art: the program tries to recognise strategic points by comparing the game with its bank of special cases.
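
To make the two-phase structure concrete, here is a minimal C sketch of the flow. The types and function names (Position, search_tactical_points, match_special_cases) are invented for illustration; they are not the actual interfaces of 'annotate.c' or Crafty.

/* Minimal sketch of the two-phase annotation flow described above.
 * All names and results here are hypothetical placeholders. */
#include <stdio.h>

typedef struct { const char *fen; } Position;          /* placeholder position          */
typedef struct { int score_cp; const char *pv; } TacticalResult;

/* Phase 1: brute-force search of the game tree for short-term, tactical points. */
static TacticalResult search_tactical_points(const Position *p, int depth)
{
    (void)p; (void)depth;                    /* real code would search the game tree */
    TacticalResult r = { 35, "Nf3 d5 e3" };  /* dummy principal variation and score  */
    return r;
}

/* Phase 2: compare the position against a bank of hand-coded special cases
 * and return a strategic remark, or NULL if none of the cases match. */
static const char *match_special_cases(const Position *p)
{
    (void)p;
    return "White's light-squared bishop is hemmed in by its own pawns.";
}

static void annotate_position(const Position *p, int depth)
{
    TacticalResult t = search_tactical_points(p, depth);   /* phase 1 */
    const char *strategic = match_special_cases(p);        /* phase 2 */

    printf("Best line %s (%+.2f). ", t.pv, t.score_cp / 100.0);
    if (strategic)
        printf("%s", strategic);
    printf("\n");
}

int main(void)
{
    Position p = { "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1" };
    annotate_position(&p, 9);   /* the project searches to 9 plies (see Conclusion) */
    return 0;
}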

We can enhance the performance of the program in two ways. Firstly, a more powerful processor will compute deeper into the search-tree; secondly, a more comprehensive collection of special cases will better mimic human experience. The challenge remains to integrate and translate these diverse results into a form intelligible to the human reader.

The project has successfully implemented a program that produces remarkably good chess annotations, making use of both 'brute-force' search-tree techniques and other intuitive methods. The results obtained are encouraging and suggest that a similar framework can be applied to a program that plays chess.

Table of Contents

5. Discussion and Conclusion

Future Exploration and Development

Many aspects of this project have been simplified to save on computational costs. Some of these are worth exploring and refining in theory, so that they can be implemented in practice when the hardware permits.

Fitting the Annotation Model into a Generic Chess Engine

As described in section 1.5, 'annotate.c' makes use of two other Crafty modules, 'iterate.c' and 'evaluate.c'. The modified tests in 'evaluate.c' are incorporated into 'annotate.c', so the program can function without 'evaluate.c'. The program, however, still needs an engine module that can perform a small set of basic operations. A chess program whose engine meets these requirements will be able to support the annotation module after a modest amount of code adjustment.
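
The sketch below illustrates, purely hypothetically, the kind of interface such an engine module might expose. The prototypes are illustrative guesses, not the requirements listed in the report, and they do not correspond to Crafty's real functions.

/* Hypothetical engine interface that could support the annotation module.
 * Every name here is an assumption made for illustration only. */
#ifndef ENGINE_H
#define ENGINE_H

typedef struct Position Position;            /* engine's internal board state */

/* Set up a position from a FEN string; returns 0 on success. */
int engine_set_position(Position *pos, const char *fen);

/* Apply a move in coordinate notation (e.g. "e2e4") to the position. */
int engine_make_move(Position *pos, const char *move);

/* Search to the given depth (in plies); return the score in centipawns
 * from the side to move and write the principal variation into pv
 * (at most pv_len bytes, NUL-terminated). */
int engine_search(Position *pos, int depth, char *pv, int pv_len);

/* Static evaluation of the current position, without search. */
int engine_evaluate(const Position *pos);

#endif /* ENGINE_H */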

The Annotation Model as a Framework

The annotation model serves as a framework for future developments. More powerful processors will continue to emerge and boost the computer's performance in the search-tree analysis. Historically, most of the advances in chess computing have been made here; computers will surpass humans in brute-force calculation, if they have not already. Isolating the search-tree analysis from the special-case analysis therefore makes it easy to exploit the computer's strengths.

The computer's aptitude for hardcore calculation relieves humans of the jobs that require precision. We can harness the computer's exactness in commentating on other sports: the gracefulness of an ice-skater's 'spin-up' may be a function of the roundness of the circle traced out by the arms; the co-ordination of a team of synchronised swimmers can be measured exactly rather than impressionistically.
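
As a toy illustration of this kind of exact measurement (not something taken from the report), the roundness of a traced path could be scored by how far the sampled points deviate from a perfect circle around their centroid:

/* Score how closely a traced path approximates a circle: 0 means the sampled
 * points are all equidistant from their centroid (a perfect circle); larger
 * values mean a more ragged trace.  Purely illustrative. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point;

static double roundness_error(const Point *pts, int n)
{
    double cx = 0.0, cy = 0.0;
    double r[64];                      /* this sketch assumes n <= 64 samples */
    double mean_r = 0.0, var = 0.0;

    for (int i = 0; i < n; i++) { cx += pts[i].x; cy += pts[i].y; }
    cx /= n; cy /= n;                  /* centroid of the trace */

    for (int i = 0; i < n; i++) {
        r[i] = hypot(pts[i].x - cx, pts[i].y - cy);
        mean_r += r[i];
    }
    mean_r /= n;

    for (int i = 0; i < n; i++) var += (r[i] - mean_r) * (r[i] - mean_r);
    return sqrt(var / n) / mean_r;     /* relative spread of the radii */
}

int main(void)
{
    Point trace[] = { {1.0, 0.0}, {0.0, 1.0}, {-1.0, 0.0}, {0.0, -1.0}, {0.9, 0.1} };
    printf("roundness error: %.3f\n", roundness_error(trace, 5));
    return 0;
}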

The main challenge is to emulate the aspect of human intelligence that is less exact in nature - the post-game analysis. It is especially difficult to quantify psychological factors such as pressure, and for now human editing of the annotations is still necessary. Newell, Shaw and Simon, three pioneering researchers in artificial intelligence, said: "If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavour."

Shortcomings of the Project

The program has a reasonably large collection of tests for special cases. The difficulty lies in integrating the test results as seamlessly as a human would, so that they are suitable for human consumption. The program struggles to co-ordinate such vastly diverse findings even though it has perfectly capable faculties with which to collect them. Analogously, even the human mind is prone to lapses under severe conditions - paradoxes and optical illusions are examples where the human mind makes fallacious deductions and perceives artefacts.
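
One simple way such co-ordination might be attempted, shown here only as a hypothetical sketch rather than the program's actual method, is to weight each special-case finding and report only the most relevant few:

/* Hypothetical sketch: collect candidate comments from the special-case
 * tests with a relevance weight, then keep only the strongest ones so the
 * annotation reads less like a checklist. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *text;     /* comment produced by one special-case test            */
    int weight;           /* how relevant the test judged it (higher = more)      */
} Comment;

static int by_weight_desc(const void *a, const void *b)
{
    return ((const Comment *)b)->weight - ((const Comment *)a)->weight;
}

int main(void)
{
    Comment found[] = {
        { "Black's queenside pawns are weak.",  70 },
        { "White controls the open e-file.",    55 },
        { "The knight on h3 is poorly placed.", 20 },
        { "Both kings are safely castled.",     10 },
    };
    int n = sizeof found / sizeof found[0];

    qsort(found, n, sizeof found[0], by_weight_desc);

    /* Emit only the two most relevant remarks for this position. */
    for (int i = 0; i < n && i < 2; i++)
        printf("%s\n", found[i].text);
    return 0;
}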

The art of chess annotation has much scope for creativity. As with a poem or song, which has technical features that can be assessed exactly (such as rhyme and rhythm), there are also merits that must be felt. The program is far from being able to feel, and is certainly not ready to inject wit and humour into its annotations. The most glaring shortcoming of this project (and of most others in artificial intelligence) is its inadequacy in replacing the human capacity for spontaneity. For that reason, chess columnists who annotate for a living have little to fear for the time being.

Conclusion

Ultimately the performance of this artificial intelligence project has to be measured against the human mind. The program searches the game tree to a depth of 9 plies and compares positions with its collection of 150 special cases. An experienced annotator can easily evaluate more than 13 plies and consider thousands of special cases. From this perspective, there is much distance to make up.

A convincing validation of the project's success would be a Turing Test. If a chess columnist could publish the program's annotations as his own work without alerting his ardent readers, that would be evidence of success. However, even if only a novice reader gains insight into the game from reading the annotations, that is also a success in its own right.

A natural conclusion to draw from this project is that computers, being able to use the dual-ideology approach (section 1.2) to annotate chess games, should achieve comparable results when playing chess. The 'feeling' ideology of chess programming has been forsaken because of its lack of success compared with the 'number-crunching' ideology. However, a marriage of the two might open up a new dimension of computer chess playing.

Players dislike playing computers that never make unsound sacrifices - they like to call a bluff occasionally. Modern chess programs mostly run on the 'number-crunching' framework, whose logic is exact and absolute. Adding a 'feeling' component would let the computer exhibit a kind of 'fuzzy logic': the moves played might no longer be the strongest, but at least they would be more human. Chess should not be such an exact science - if it were, humans would merely be playing an elaborate form of solitaire.
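
A speculative sketch of such a marriage, not taken from the report, might blend the exact search score of each candidate move with a small 'feeling' bonus, so that the engine occasionally prefers a human-looking move over the coldly best one. The moves, scores and weight below are invented for illustration.

/* Speculative sketch: blend the brute-force score with a heuristic
 * 'feeling' term when choosing a move to play. */
#include <stdio.h>

typedef struct {
    const char *move;
    int search_score;   /* centipawns from brute-force search (exact)         */
    int style_bonus;    /* heuristic 'feeling' term, e.g. rewards a sacrifice */
} Candidate;

/* Weight of the feeling term: 0.0 gives classical play, 1.0 maximum flair. */
static const double FEELING_WEIGHT = 0.3;

static const char *pick_move(const Candidate *c, int n)
{
    const char *best = NULL;
    double best_val = -1e9;
    for (int i = 0; i < n; i++) {
        double blended = c[i].search_score + FEELING_WEIGHT * c[i].style_bonus;
        if (blended > best_val) { best_val = blended; best = c[i].move; }
    }
    return best;
}

int main(void)
{
    Candidate moves[] = {
        { "Nxf7 (speculative sacrifice)", -15, 120 },
        { "Rad1 (solid developing move)",  25,   0 },
    };
    /* With FEELING_WEIGHT = 0.3 the sacrifice (-15 + 36 = 21) still loses
     * narrowly to Rad1 (25); raise the weight and the engine 'calls a bluff'. */
    printf("chosen: %s\n", pick_move(moves, 2));
    return 0;
}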

The dual-ideology approach to annotating chess may well be generalised to applications beyond chess computing. (After all, the pioneers of A.I. had similar intentions when they chose chess as one of their primary fields of research.) We have briefly discussed the parallels between the appreciation of chess and other human pursuits, and we may one day grasp the very inner workings of the human intellect. As the former World Chess Champion Garry Kasparov said: "If a computer can beat the World Champion, a computer can read the best books in the world, can write the best plays, and can know everything about history and literature and people."

Appendix C - Tests for Context-Specific Material Comments

Appendix D - Tests for Context-Unspecific Material Comments

Appendix E - Tests for Deducing Weaknesses from Chessmaps

Appendix F - Tests for Deducing Zonal Interests from Chessmaps

Appendix G - Tests for Deducing Comments from Piece History


Updated June 2001
Tim Love