This document aims to cover all information concerning the course Software Construction 2010-2011.
(NB: be sure to refresh this page; it may be cached by your browser.)
Lectures and workshops will be from 9:00-11:00 on Mondays. Lectures will be given by Paul Klint, Jurgen Vinju and/or Tijs van der Storm.
Primary contact for this course is .
Lectures will be at Science Park, room C1.112. The lab rooms are A1.20 and A1.22.
Required skills:
Required knowledge:
Pre-conditions for getting a grade:
You will be graded on the following course assignments:
The grade is computed using the following formula: 0.6 * P1 + 0.4 * P2. A minimum grade of 5.5 on each part is required to pass the course. The practical assignments will be graded on-site on the respective deadline dates (see schedule).
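The grading rule above can be sketched in code; the class and method names here are purely illustrative and not part of any course infrastructure:

```java
// Sketch of the grading rule: a weighted average of the two lab parts,
// with a minimum of 5.5 on each part required to pass.
public class Grading {
    static final double MIN_PART_GRADE = 5.5;

    // Returns the final grade, or throws if either part is below the minimum.
    static double finalGrade(double p1, double p2) {
        if (p1 < MIN_PART_GRADE || p2 < MIN_PART_GRADE) {
            throw new IllegalArgumentException("both parts must be at least 5.5");
        }
        return 0.6 * p1 + 0.4 * p2;
    }
}
```

For example, grades of 7.0 for Part 1 and 8.0 for Part 2 yield a final grade of 7.4.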
Date: 14th of February, 13:00 - 17:00.
Location: lab room.
The goal of the code review workshop is to encourage critical thinking about code quality and low-level design. Important questions are: what is code quality? What are relevant quality attributes? What are code smells? How can you improve the design of existing code?
The structure of the workshop will be as follows:
13:00: During the first hour, we will collectively make a list of code attributes that you deem important for code quality.
14:00: Short break; formation of teams of two.
14:15: Each team member reviews the other member's Oberon-0 implementation, guided by the list of quality attributes. Take notes so that you can provide constructive feedback later. Teacher(s) will be around for assistance and discussion.
15:15: Short break.
15:30: Give feedback on the design and code quality to your team member. Be critical, but constructive: your team member should be able to use your comments to improve their code.
16:30: Closing.
NB Participation is mandatory!
During the course you are required to read McConnell's Code Complete. We will test that you have read the required chapters using small tests consisting of 3 or 4 open questions. The dates of the tests and the accompanying reading assignments are as follows:
The tests will be evaluated on site. There will be no grade, but you must show you have read the required chapters.
Each workshop, two topics will be presented, each by two teams. The topics are centered around a thesis concerning the advantages or disadvantages of a certain technique or approach in the domain of software construction. The first team argues for the thesis; the second team then acts as opposition and gives a presentation arguing against it.
Each team consists of two members. Topics are to be selected from the list at the end of this document. The references listed there are required reading for EVERYONE (for the topics discussed in workshops, that is). It is, moreover, required to find at least two other papers related to the position you are defending.
Guidelines for giving the presentation:
Presence during the workshops is required. The presentations will not be graded, but feedback will be provided by the teachers present.
Workshop slots are allocated by sending an email to containing the preferred topic and date, and the names of the two team members. For both topic and date: first come, first served.
The goal of the lab assignment is to implement an interpreter for the language Oberon-0. Oberon-0 is a subset of Niklaus Wirth's programming language Oberon, a successor to Modula-2 (which in turn succeeded Pascal). The definitive reference for Oberon-0 is Niklaus Wirth's book on compiler construction, Oberon0. The choice of this language is inspired by a tool challenge currently run in the context of LDTA, the leading workshop on language implementation tools. The challenge is to produce modular, extensible, concise, and declarative implementations of languages.
For this lab assignment, you are required to use the Java programming language and the Eclipse IDE. Each participant will get access to a Google Code Subversion repository, set up for this course. The URL of the Google Code project is:
Please sign up for a Google account if you haven't done that already, and notify to get a project entry.
IMPORTANT: You are required to complete the lab assignment individually. We will use clone detection tools to detect plagiarism.
The assignment consists of two parts. The first part, Part 1, consists of the following components:
A parser for Oberon-0. For this you will use a parser generator. You can choose from the following Java parser generators: ANTLR, Rats!, JavaCup, JavaCC, JACC, SableCC, Beaver or Grammatica. Depending on your choice you may be required to implement a tokenization phase.
The parsing phase of the language implementation produces an abstract syntax tree (AST). You are required to design a suitable class hierarchy modeling Oberon-0 ASTs.
Finally, the interpreter will run Oberon-0 programs by processing Oberon-0 ASTs.
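As a sketch of what such an AST class hierarchy and interpreter might look like (the class and method names below are illustrative choices, not prescribed by the assignment; only integer expressions are shown, and statements, declarations, and procedures would follow the same pattern):

```java
// Minimal sketch of an Oberon-0 AST hierarchy with an interpreting eval method.
import java.util.Map;

abstract class Expr {
    abstract int eval(Map<String, Integer> env);
}

// An integer literal, e.g. 42.
class IntLit extends Expr {
    final int value;
    IntLit(int value) { this.value = value; }
    int eval(Map<String, Integer> env) { return value; }
}

// A variable reference, looked up in the environment.
class Var extends Expr {
    final String name;
    Var(String name) { this.name = name; }
    int eval(Map<String, Integer> env) { return env.get(name); }
}

// A binary operator applied to two subexpressions.
class BinOp extends Expr {
    final char op;
    final Expr left, right;
    BinOp(char op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
    int eval(Map<String, Integer> env) {
        int l = left.eval(env), r = right.eval(env);
        switch (op) {
            case '+': return l + r;
            case '-': return l - r;
            case '*': return l * r;
            default: throw new IllegalArgumentException("unknown operator: " + op);
        }
    }
}
```

With an environment mapping "x" to 41, evaluating `new BinOp('+', new Var("x"), new IntLit(1))` yields 42. Whether you put evaluation logic in the AST classes, as here, or in a separate visitor is exactly the kind of design decision Part 2 will put to the test.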
Oberon-0 is a subset of full Oberon. For Part 1, you are required to include the aspects and features that are described by the grammar in the appendix of Oberon0.
Although Oberon-0 seems like a simple language, you should pay particular attention to the following aspects of the language:
Which keywords are reserved, and which keywords are not?
How to implement pass by reference (cf. the VAR keyword)?
What is the priority and associativity of binary and unary operators?
What are the scoping rules of Oberon-0?
What is the semantics of nested procedures?
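To illustrate the pass-by-reference question: Java itself passes everything by value, so VAR parameters need explicit indirection. One possible encoding (purely illustrative, not the required solution) wraps each variable in a mutable cell:

```java
// Illustrative sketch: modeling Oberon-0 VAR parameters in Java.
// A variable is stored in a mutable cell; passing the cell (rather than
// its contents) lets the callee update the caller's variable.
class Cell {
    int value;
    Cell(int value) { this.value = value; }
}

class VarParamDemo {
    // Corresponds to an Oberon-0 procedure with a VAR parameter:
    //   PROCEDURE Inc(VAR x: INTEGER); BEGIN x := x + 1 END Inc;
    static void inc(Cell x) {
        x.value = x.value + 1;
    }
}
```

After `Cell c = new Cell(41); VarParamDemo.inc(c);` the caller observes `c.value == 42`, mirroring the effect of an Oberon-0 VAR parameter.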
In Part 2 you will modify and/or extend your implementation to meet two new requirements. The actual additional requirements will be announced halfway through the course. You are strongly encouraged to anticipate changes to the language or required tooling in the design you deliver for Part 1.
As an additional requirement you will have to collect a number of metrics that help to assess the quality of your implementation. You have to provide these metrics at both grading moments.
First, you will report metrics based on the SIG maintainability model:
Number of files, classes, methods, and non-comment, non-blank lines of code (SLOC).
The distribution of cyclomatic complexity across methods (i.e. a map from cyclomatic complexity x to number of methods with that cyclomatic complexity).
The distribution of volume over methods (i.e. a map from method size, measured in SLOC, to number of methods with that size).
These metrics can be derived using the tool JavaNCSS and you are required to use this tool.
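A "distribution" here is simply a frequency table. As a sketch (assuming you already have the per-method complexity numbers, e.g. extracted from the JavaNCSS report):

```java
// Sketch: turning a list of per-method cyclomatic complexities into the
// required distribution (complexity value -> number of methods).
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class MetricDistribution {
    static Map<Integer, Integer> distribution(List<Integer> complexities) {
        // TreeMap keeps the distribution sorted by complexity value.
        Map<Integer, Integer> dist = new TreeMap<>();
        for (int ccn : complexities) {
            Integer count = dist.get(ccn);
            dist.put(ccn, count == null ? 1 : count + 1);
        }
        return dist;
    }
}
```

For example, complexities [1, 1, 2, 5, 1] yield the distribution {1=3, 2=1, 5=1}. The same helper works unchanged for the method-size (SLOC) distribution.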
The parser implementation uses a dedicated grammar syntax that is input to the parser generator. No metric tools are available for this kind of source file, so you are required to count the following metrics manually:
Number of non-terminals.
Number of productions.
The distribution of the number of productions (or alternatives) over non-terminals (i.e. a map from number of alternatives to number of non-terminals).
The distribution of production length (i.e. number of symbols) over productions (i.e. a map from production length to the number of productions with that length).
For each metric you have to distinguish between lexical syntax/tokenization productions and context-free productions.
Second, you are required to collect metrics that may indicate problems in the modular structure of your implementation. For this, you will use the JDepend tool.
Some final guidelines:
Exclude packages containing unit tests.
Only include Java source files in the metrics computation by JavaNCSS and JDepend (so no grammar files).
Both JavaNCSS and JDepend are Java programs, so it is possible (and advisable) to automate the computation of metrics. Be sure, however, to exclude such infrastructure from the metric computation itself.
You are strongly advised to minimize Java action code in grammar productions, as it will skew the metrics computation.
You should use the XML output facilities of both tools. This will ease further processing and aggregation.
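For instance, the XML report can be loaded with the standard javax.xml APIs and queried with XPath. Note that the element names used below (function, ccn) are assumptions about the JavaNCSS report format; verify them against the actual output of your JavaNCSS version:

```java
// Sketch: extracting per-function cyclomatic complexity (ccn) values from
// JavaNCSS XML output using the standard JDK DOM and XPath APIs.
// The element names "function" and "ccn" are assumptions about the report
// format; check them against your tool's actual output.
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

class NcssReport {
    static List<Integer> ccnValues(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Select every <ccn> value nested under a <function> element.
        NodeList nodes = (NodeList) xpath.evaluate("//function/ccn", doc, XPathConstants.NODESET);
        List<Integer> result = new ArrayList<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            result.add(Integer.parseInt(nodes.item(i).getTextContent().trim()));
        }
        return result;
    }
}
```

The resulting list of complexity values can then be fed directly into the distribution computation described earlier.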
Great code is easy to change. The additional requirements announced for Part 2 are intended to show how well your initial Part 1 design stands up to extensions, modifications, and revisions. To gain insight into this, at the end of Part 2 you will compute the difference between Part 1 and Part 2. In other words, we are interested in the number of changed classes, added classes, changed methods, statements, imports, etc. If you only had to add code, you could say your initial design is truly extensible. If you had to change some code, but only locally at specific locations, you might say your code was easy to change.
In order to quantify such observations, you will use the DiffJ tool to compute the difference between Part 1 and Part 2. DiffJ is similar to the common Unix utility diff, but has knowledge of the Java programming language. It is not line-based and ignores whitespace and comments. At the end of the course the results of all diffs will be aggregated and will be used for a qualitative discussion on the effect of design choices, if the results show interesting trends.
Again, DiffJ is a Java tool, so there are opportunities to automate much of the comparison.
NB It is of paramount importance that you use the Google Code Subversion repository from the very beginning of the course. If you cannot provide an accurate diff report, we cannot grade your solution.
First of all, we will provide sample Oberon-0 programs for smoke testing. If, upon grading, you fail to show a working run of the sample program, we will not grade your implementation.
We take the principles laid down in Code Complete as guidelines when grading your solutions. More specifically, the following aspects of quality code will be our focus:
When grading, we will use a check list that will be made available in due course. The collected metrics and the diff between Part 1 and Part 2 (see above) will not be used for grading purposes.
NB: high performance, fancy GUIs and other forms of gold plating will be bad for your grade, so you are advised not to waste time on those aspects.
The two parts of the lab assignment will be graded on-site on the following dates: