The automatic assessment of formal specification coursework

Zarina Shukur, Edmund Burke, Eric Foxley

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

The purpose of this paper is to describe a system that is able to grade the quality of a formal specification written in the Z language. It is intended to assess the quality of specifications written by students as coursework for a course in formal specification. The quality is measured by considering two factors: correctness and maintainability. There are two stages in measuring the correctness of a specification. First, the syntax and semantics of the specification must be checked. Second, the level to which the specification satisfies the customer requirements must be determined. In this paper, we animate the specification against various test states of the system. Marks for animation are produced according to the extent to which the specification fulfills the customer requirements. Specification maintainability is divided into two categories. First, the typographic arrangement metric derived from the specification is compared against a range of absolute values. Second, the complexity metrics derived from the specification are compared against those for a model specification. The performance of the system (on introductory exercises) has been compared with that of a human marker. The result shows that (at this introductory level) the system and the human marker broadly agreed on the marks given.
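
Illustrative sketch (not from the paper): the abstract describes a two-part marking scheme in which correctness comes from animating the specification against test states, and maintainability comes from a typographic arrangement metric judged against an absolute range plus complexity metrics judged against a model specification. The short Python sketch below shows one way such scores could be combined into a final mark; all field names, the 0.4-0.9 typographic range, and the 70/30 weighting are assumptions made purely for illustration and are not details of the authors' system.

# Hypothetical illustration only: combining correctness and maintainability
# into an overall mark, assuming the type-checking, animation, and
# metric-extraction steps have already been run.
from dataclasses import dataclass

@dataclass
class AssessmentInputs:
    type_checks: bool          # passed syntax and semantic checking?
    tests_passed: int          # animation test states the specification satisfied
    tests_total: int           # animation test states supplied by the marker
    typographic_score: float   # typographic arrangement metric, scaled to 0.0-1.0
    complexity: dict           # complexity metrics of the student specification
    model_complexity: dict     # the same metrics for the model specification

def correctness_mark(a: AssessmentInputs) -> float:
    # Zero if the specification does not type-check; otherwise the fraction
    # of animation test states it satisfies.
    if not a.type_checks or a.tests_total == 0:
        return 0.0
    return a.tests_passed / a.tests_total

def maintainability_mark(a: AssessmentInputs, low: float = 0.4, high: float = 0.9) -> float:
    # Typographic metric judged against an assumed absolute range [low, high];
    # complexity metrics judged by their closeness to the model specification.
    typographic = 1.0 if low <= a.typographic_score <= high else 0.5
    ratios = [min(a.complexity.get(k, 0), v) / max(a.complexity.get(k, 0), v)
              for k, v in a.model_complexity.items()
              if max(a.complexity.get(k, 0), v) > 0]
    closeness = sum(ratios) / len(ratios) if ratios else 0.0
    return 0.5 * typographic + 0.5 * closeness

def overall_mark(a: AssessmentInputs, weight_correctness: float = 0.7) -> float:
    # Assumed 70/30 weighting of correctness against maintainability.
    mark = (weight_correctness * correctness_mark(a)
            + (1 - weight_correctness) * maintainability_mark(a))
    return round(100 * mark, 1)

example = AssessmentInputs(
    type_checks=True, tests_passed=8, tests_total=10,
    typographic_score=0.75,
    complexity={"operations": 6, "predicates": 20},
    model_complexity={"operations": 5, "predicates": 18},
)
print(overall_mark(example))  # 84.0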

Original language: English
Pages (from-to): 86-119
Number of pages: 34
Journal: Journal of Computing in Higher Education
ISSN: 1042-1726
Publisher: Paideia Publishers
Volume: 11
Issue number: 1
DOI: 10.1007/BF02940843
Publication status: Published - 1999
Externally published: Yes

Keywords

  • automatic assessment
  • formal specification
  • Z

ASJC Scopus subject areas

  • Education

Cite this

Shukur, Z., Burke, E., & Foxley, E. (1999). The automatic assessment of formal specification coursework. Journal of Computing in Higher Education, 11(1), 86-119. https://doi.org/10.1007/BF02940843