Saturday, July 31, 2010

How things have changed

After writing about 'Nice work', I thought that I would check when I bought the book (or at least, first read it). I looked through some letters from around the right time period (the years following 1989), and found that the first reference to the book was in August 1991.

I had to look through 'hard copies' of letters; even though I was using computers to write letters from 1986 or thereabouts, I never saved them as computer files, preferring to save the printed versions instead. As a result, some of the letters are virtually unreadable, as they were printed via ribbons whose final days had come. Considering that in those days the letters would have been simple text files with negligible overheads, storage space must have been at an extreme premium - so much so that it wasn't worth storing even a measly 1K. I remember that it was considered a huge win when I found a program which would add an extra sector to a floppy disc, increasing its storage size from 360KB to 410KB. Eventually, I would move to writing letters by email (and those are stored), but in order to do that, I needed my correspondent to have email as well, and that took some time.

Anyway, back to the books. One has to remember that 1991 was still pre-Internet and that one had to rely on book reviews or book clubs in order to learn about new books. It comes back to me that in those days I used to obtain a copy of 'Penguins in Print', a directory which listed all the books that the excellent paperback house Penguin had in print and which were therefore theoretically available. I used to go through that directory diligently and compile a list of ISBNs (ie book identity numbers) to be ordered when my parents or I next went to Britain.

The books used to be so heavy and voluminous that we wouldn't bring them back in our suitcases. Instead, I would have to make parcels and send them by post. One year, I sent two such parcels but only one of them arrived. For obvious reasons, I don't recall what was in the missing parcel, but I seem to remember that nothing precious was lost. In those days, of course, many books were bought 'on spec', so losing them did not necessarily mean that any book that I really wanted disappeared.

Today I received an email from Amazon, suggesting books that I might like to read. I clicked on an autobiographical book by musician Rick Wakeman and on a neuroscience book, and should I wish, those books will be with me in another week. The only reason that I didn't order is that I'm having problems activating a new credit card and I don't want payment problems with Amazon. How things have changed over the past twenty years.

Similarly with the emails - or anything computerised - if I want to find some text, I use a 'search' function and thousands of files are searched in a few seconds. No more leafing through old letters and having to read them all in order to find a nugget.

I came across a letter from early 1990 in which I was writing about my daughter: she had just turned two. I read the passage to her now, and for a moment it was very nostalgic.

Friday, July 30, 2010

Nice work

A little judicious work with the blogger search function shows that I have mentioned David Lodge in four different blog entries over the years but have never tagged him. That has now been corrected.

The first novel of Lodge's that I read was 'Changing places', which was recommended by some book club of which I was a member maybe twenty five years ago. The book was amusing but not overly so, and I don't think that I've read it for many a year. The second was the sequel to CP, 'Small world', which I liked somewhat less than CP. Despite this, I found both books interesting enough to keep David Lodge on my search list, and this paid off in spades when I found his next novel, 'Nice work'.

I know that CP was actually Lodge's fifth or sixth book to be published, but his first three novels were out of print in the eighties and nineties (I bought all three in new editions in 2002). But for me, it was his first book, and as far as I am concerned, there was a leap made in quality from the books which preceded it to those which came after. A similar thing happened with Peter Robinson (the first book of his which I read was his tenth to be published) and this makes me wonder whether it's the same thing as in music: the first song one hears by someone is always better than anything which they had made beforehand.

'Nice work' was a very suitable novel for me, because it brought together the disparate worlds of industry and academia - worlds which I straddle myself. Whilst the book itself can be read as a complete - and very interesting - story, there is also an amusing subtext (or maybe supertext). One of the protagonists of the book, Robyn Penrose, is a university lecturer specialising in the 'industrial novel'; Lodge 'quotes' a lecture which she delivers to a class, and the ideas presented in the lecture form the background structure to the novel.

Indeed, there is a very important sentence uttered in the lecture whose significance I missed for several years. I had always thought that the ending of the book was rather weak; not quite deus ex machina, but very close (and incidentally, other books of Lodge's suffer from this same malady). But during one pass through the book, I noticed that the situation which suddenly arises at the end - in which Robyn is offered marriage and a job in America, and receives a legacy following her uncle's death - is exactly what was expounded in her lecture.

I quote from page 83 of my edition of the book:
In short, all the Victorian novelist could offer as a solution to the problems of industrial capitalism were: a legacy, a marriage, emigration or death.

Once I noticed this, all sorts of other bits and pieces started to fall into place. In my opinion, 'Nice work' should have received better reviews than it gets at Amazon.

My edition (Penguin 1989, bearing the number '10' alone) has on its cover a picture of Vic Wilcox and Robyn Penrose as portrayed in "a riveting BBC television series from the bestselling book". I wouldn't say that I've lusted after this BBC series, but I have always been interested in finding it. It would seem that it has yet to be transferred to DVD.

Imagine my surprise and pleasure, then, a few weeks ago when I found a torrent of the series at a BBC torrent site. I eagerly downloaded the torrent and waited for it to complete. Over the past week, I have watched all four episodes of the series as well as rereading the book.

I have to give the series a very high mark. The screenplay was written by David Lodge himself and so is naturally very close to the original. Of course, not all of the book has been transferred to the screen (the daughter of Vic Wilcox has been excised, for example) but it is a very faithful translation. What is missing is the literary subtext; one is much less aware of the parallels to the Victorian industrial novel (even though most of Robyn's lecture is preserved), and the entire 'Silk Cut' deconstruction has disappeared. The telling of Robyn and Vic's stay in Dusseldorf is shown as it happened (well, most of it) as opposed to having it described by Robyn to Penny Black (is Lodge a stamp collector? Only now do I realise the significance of this name) in a sauna.

The casting is superb as well; most of the characters (save Robyn herself) look almost exactly as I imagined them. The actress playing Robyn doesn't quite look the part but is very good (and manages to put on an instant Birmingham accent when in Dusseldorf).

I definitely recommend looking out for this book - and if one is exceptionally lucky, the torrent.

Wednesday, July 28, 2010

The in-basket 5

Until now, I've been concentrating on writing about the exam program which a user will run. This program outputs a text file containing pertinent information about the exam - the user's name, date of birth, date of exam, exam name - as well as the values generated whilst operating the program - opening and closing of messages, message text, etc. Today I'm going to write a little about importing this text file into the database.

But first: I deleted the dll code from one of my computers before I realised that I had not backed up this code. So I had to spend a certain amount of time recreating the changes necessary to enable the exam program to read its data from a dll. Whilst doing so, I noticed that there were still a few little bits and pieces to correct or improve, so I corrected and/or improved. I also religiously backed up.

Adding the import code to the existing results program was fairly simple as I have already done this with five or six other programs. True, the import code has to be tailored to the format of the import file, but most of this is boilerplate code. I discovered that I had to add a new field to the 'exams' table in the database, in which the dll name of the exam is stored, but this was a minor inconvenience. Otherwise, writing the import code was straightforward and it worked (almost) correctly the first time. The only thing which needed fixing was reading the final line of the import file, which shows when the user finished the exam.

I will be introducing the new version of the programs into the work environment in the next few days. I understand that a few people are being tested with it today, but hopefully there will be more in the near future. The OP is quite enthusiastic about this exam as it enables her and her staff to widen their horizons regarding their customers, allowing the OP to enter the field of human resource testing and head hunting.

Sunday, July 25, 2010

The in-basket 4

After walking around for a few days with my head in the clouds, playing with AI programs, I thought that it was time to return to Earth and improve the in-basket exam. At the moment, the exam is unique in that it accesses a database in order to find its data, whereas all the other exams which I have written, including the aptitude exam, have their data compiled into a resource file which is attached to the executable exam. Could I do the same with the in-basket?

At first, I thought that this would not be possible, as there are all kinds of gotchas in this exam - rich text has to be displayed, replies have to be referenced, etc. But after a while, I realised that it definitely would be possible to convert the exam to reference a resource file and not a database. The rich text was actually simple to solve: the 'administrator' program saves each rich text entry in the database as an rtf file, and these are then loaded into the rc (resource source) file. Storing the people referenced in the exam turned out to be very simple: the program adds 10,000 to each person's id (so as not to clash with message ids), and the person's name, job and comments are stored as separate strings in the stringtable. The messages were also fairly easy to store.
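
The id scheme can be illustrated with a few lines of code. This is a Python sketch of the idea rather than the actual Delphi resource handling, and the field layout (three consecutive string slots per person) is my assumption:

```python
# Sketch: person and message strings share one flat stringtable without
# clashing, because person ids are shifted above a fixed offset.
PERSON_OFFSET = 10000   # the offset mentioned in the post

stringtable = {}

def add_message(msg_id, text):
    stringtable[msg_id] = text

def add_person(person_id, name, job, comments):
    # name, job and comments stored as separate strings; the three-slot
    # layout here is illustrative, not taken from the actual program
    base = PERSON_OFFSET + person_id * 3
    stringtable[base] = name
    stringtable[base + 1] = job
    stringtable[base + 2] = comments
```

As long as message ids stay below 10,000, the two kinds of entry can never collide.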

Rewriting the exam to use a resource file was tedious but not too problematic. I fixed a few existing minor bugs which became apparent during this process, so this version is actually slightly better than the original. A reply to an email had to be stored in two places: on the one hand, the text of the reply and the reasons had to be stored in the output file (which will eventually be read into the database), whereas on the other hand, the text has to be kept in memory as it has to be rereferenced should the examinee decide to reply to one of his own emails. I solved this by defining a record type (which inherits from TObject) and storing the replies in a TList. Debugging was a bit awkward but eventually I got all the bugs out and even improved the program a little by displaying replies in a different colour.

But I had also lost something: I would need a separate exam program for every exam (ie one exe file runs the demonstration exam, one exe file runs the furniture exam, etc). Whilst this is not too annoying, it is somewhat impractical. The solution is to store each resource file in a dll; the program scans the list of dlls in the current directory and displays them in a combo box (on the form where the user enters her details). The dll chosen is then passed to the 'LoadLibrary' function, and all future resource references are made to the handle of the dll library. Maybe complicated to explain, but this didn't take very long to implement.
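
The discovery step is simple enough to sketch. Here it is in Python rather than Delphi (the function name is mine); the real program then hands the chosen name to 'LoadLibrary' and resolves all resource references against the returned handle:

```python
# Sketch: find the exam dlls in a directory - these are the names
# which would populate the combo box on the user-details form.
import os

def find_exam_libraries(directory="."):
    return sorted(f for f in os.listdir(directory)
                  if f.lower().endswith(".dll"))
```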

This is quite a step forward in Windows programming for me.

Friday, July 23, 2010

Porting the Amateur Reasoner/2

I didn't sleep too well last night, probably because I was very excited. Yesterday evening I was rummaging around old (paper) files which I had stored and I came across program listings for several versions of the Amateur Reasoner. I found the same listing that I downloaded a few days ago, but I also found later versions; the final version was v1.7.1 dated 1 May 1991.

In this version I see that I had addressed some of the problems in earlier versions: there is use of a generic list object (and all the pointer records are now objects), there is use of variable length strings, there is no hash table, tokens are concatenated making longer strings (and so the introduction text is a few strings instead of many tokens), and the tokeniser seems to have been rewritten and improved. I suppose that when I next get a block of free time, I'll make a Delphi port of this version. I found some correspondence regarding the program which stated that, as I had suspected, the program began on the PDP which had fixed length strings.

But wait! There's more: further looking through this file turned up various attempts at Prolog-like interpreters and query systems, all of which are less important to me today than they were. YAPI (Yet Another Prolog Interpreter) was very interesting: I had almost all of the parts assembled, including the recursive query solver and resolution (matching), although the unification part wasn't quite right. But the program was still missing the final key, which would allow the use of variables in rules.

I found the complete documentation for an expert system called ESIE, which contained a knowledge base called 'animals' which looked extremely familiar. I had obviously used this as the demonstration database for the Amateur Reasoner, although my program used a slightly different syntax, viz
(ESIE) goal is type.animal
legalanswers are yes no *
if backbone is yes then superphylum is backbone
if backbone is no then superphylum is jellyback
question backbone is "Does your animal have a backbone"?

(AR) goal (animal)
if backbone = yes then superphylum = backbone
if backbone = no then superphylum = jellyback
prompt (backbone) = Does your animal have a backbone?

Earlier versions of the AR required the 'legalvalues' (aka legalanswers) keyword, which later turned into 'values' which later became unnecessary as I realised that this information was redundant.
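
The redundancy is easy to demonstrate: the legal values of a prompted variable are exactly the values tested against it in the rules. A Python sketch of the idea (the rule representation here is a simplified stand-in for the AR syntax):

```python
# Each rule: (condition variable, condition value,
#             conclusion variable, conclusion value)
rules = [
    ("backbone", "yes", "superphylum", "backbone"),
    ("backbone", "no",  "superphylum", "jellyback"),
]

def legal_values(var, rules):
    """Collect the distinct values tested against var, in rule order."""
    values = []
    for cond_var, cond_val, _, _ in rules:
        if cond_var == var and cond_val not in values:
            values.append(cond_val)
    return values
```

For 'backbone' this yields ['yes', 'no'] - precisely what the 'legalanswers' line spelled out by hand.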

Right at the end of the file, I find a photocopy of an article which appeared in the April 1985 edition of BYTE magazine entitled "Inside an expert system: from index cards to Pascal program" written by Beverly A. Thompson and William A. Thompson - yes, the same authors who wrote the 'Very Tiny Prolog'! This article was accompanied by source code which I must have obtained from the magazine; I also found my port to PDP Pascal. Looking back on things now, this article must have made a very strong impression on me, although the Prolog interpreter would have made an even stronger impression, had I known about it at the time.

And as for the Prolog interpreter: whilst I couldn't find online the original articles referenced by the program, I was able to find the email address of Bill Thompson. I wrote to him, saying how pleased I was to find this program, even if it was 25 years after the event (!), and asking whether he could send me an offprint of the two articles. Lo and behold, yesterday I received a reply, including a URL from which I could download them (thank you!). The first article was more or less an introduction to Prolog, including vague hints about how the language might be implemented in Pascal. This was the kind of material which I had read previously, and which had left me to fill in the blanks. The second article, though, was the real thing: explanations of how to implement Prolog. Linked lists, the parser, solving queries by solving the head and then attaching the rest of the rule to the end of the queue of clauses to be solved, etc. The program also showed the missing link: how to handle variables in rules.

This key sentence actually appears in the first article, where Mr Thompson writes about solving rules by making copies of them: "The copy will be exactly the same as the original rule in the data base but all variables in the rule will be tagged by appending the recursion level to them." Maybe this sentence doesn't make too much sense on its own, but in the context of the article, it is gold.
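
To show what the sentence means in practice, here is a Python sketch (my own illustration, not the Thompsons' Pascal): before a rule is used at recursion level n, it is copied with every variable renamed by appending the level, so two uses of the same rule cannot confuse their bindings.

```python
# Sketch: tag every variable in a copied rule with the recursion level.
# Convention here (an assumption): variables are strings starting with
# an uppercase letter; compound terms are tuples (functor, arg, ...).
def tag_variables(term, level):
    if isinstance(term, str) and term[:1].isupper():
        return f"{term}_{level}"
    if isinstance(term, tuple):
        return (term[0],) + tuple(tag_variables(arg, level)
                                  for arg in term[1:])
    return term
```

Using the rule term ('parent', 'X', 'tom') at level 3 produces ('parent', 'X_3', 'tom'): the constant stays put, while the variable becomes unique to this use of the rule.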

Apart from the intrinsic value of these programs, finding them also gives me a clue as to what I was programming at the turn of the 90s. We moved from one kibbutz to another in September 1989, which is when I started working in the furniture factory. Shortly after, I began rewriting an application which someone had developed there, dealing with online data collection from the factory floor. This involved many new ideas and techniques with which I had not dealt before. None of that code survives.

At the time, I was coming to grips with Turbo Pascal 5.5, with its object orientated extensions, and then Turbo Pascal 6 with Turbo Vision - an almost fully fledged GUI for DOS. This latter system was incredibly difficult to grasp, not least for want of proper documentation and example programs, but mastering it made the transition to Windows very easy.

I remember writing a series of quasi database programs, in which the data was stored in a collection; at the beginning of the program, the data would be read via a stream into the collection, and at the end of the program the data would be written via a stream to disk. During the program's invocation, all the data was held in memory, which made accessing it very fast, although a program crash would lose all changes. I remember being very excited about Turbo (or was it Borland?) Pascal 7, which came with a memory manager which allowed DOS programs to use much more memory than before; such a database program could now hold 700+ records whereas previously it could only hold about 250.
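
The pattern itself is easy to sketch in a modern language. This Python version (file format and names are mine, not the Turbo Vision streams) shows the shape: read everything at start-up, hold it in memory, write everything back at the end:

```python
# Sketch: a 'quasi database' - the whole dataset lives in memory
# between an initial load and a final save, so access is fast but a
# crash loses all changes made since the last save.
import json, os

def load_records(path):
    if not os.path.exists(path):
        return []                  # first run: empty collection
    with open(path) as stream:
        return json.load(stream)

def save_records(path, records):
    with open(path, "w") as stream:
        json.dump(records, stream)
```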

Such concepts seem terribly quaint now.

Thursday, July 22, 2010

Blogging frequently means that I have too much time on my hands. At home, I'm "on holiday" from my MBA studies and so have at least ten hours free time a week (which I seem to be spending on watching 'Star Trek: The Next Generation' - we're now near the end of the second season), and at work the second half of a month always seems to carry a lighter load than the first half.

Here's a recipe for a dish which I've been cooking very successfully over the past few months. It's another recipe with a good "value for time invested" ratio; there's no beating roast chicken, but this one comes close. Let us say that the value of a roast chicken meal is 10; it takes me maybe fifteen minutes to prepare it, so the ratio would be 0.67 taste units per minute. Fancy cooking leaves me cold and takes a long time so such a ratio would be around 0.1. This casserole dish has a ratio of about 0.4.

Take a large casserole dish suitable for cooking over a flame and heat in it some oil. Dice a large onion and then fry it in the dish until it browns (but not caramelises). Add 500g minced beef; mix, fry and try to make the pieces of fried beef which inevitably result as small as possible. Once all the meat has a grey tinge, add vegetables which have previously been cut into cubes - potatoes, carrots, courgettes. The original recipe calls for a tin of baked beans to be added, but I've been using haricot beans and this variation improves the result. Add tomato paste and enough water to almost cover everything. Stir and heat until boiling, then reduce the heat and cover the dish. Leave to cook for two hours. Turn off the gas/electricity but leave the dish covered. Warm to serve.

Last Saturday I started cooking at around 8:30 by cubing the vegetables; by 9am the casserole dish was covered. I continued heating till about 11am, then turned off the gas. A quick reheat at noon and then the food was ready.

Porting the Amateur Reasoner

I ported my Amateur Reasoner program, which I mentioned in my previous blog, to Delphi. Apart from a few specific points, this was a fairly simple process, and now that all the display code has been stripped from the program, it's much easier to understand the program source.

Before going any further, I should describe the program and what it does. First, I'll quote a very simple database which is dated 10 Feb 1989:

goal (rate)

if day = saturday then rate = cheap.
if hour = before_8:30 then rate = cheap.
if hour = after_21:00 then rate = cheap.
if day = sun_to_thurs & hour = 8:30_thru_13:00 then rate = expensive.
if day = sun_to_thurs & hour = 13:00_thru_21:00 then rate = middle.
if day = friday & hour = 8:30_thru_13:00 then rate = expensive.
if day = friday & hour = 13:00_thru_21:00 then rate = cheap.

prompt (day) = On which day was the call made?
prompt (hour) = At which hour was the call made?

The goal of the program is to 'calculate' - or rather infer - what the rate for a telephone call should be, according to its rules. In order to solve the goal, the program looks at the various rules which define the rate (this example is exceedingly simple, because it has a 'depth' of 1: all rules refer directly to 'rate'). In order to infer what the rate is, the program has to determine on which day and at which hour the call was made - these are the prompts.
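
That inference process - try each rule which concludes the goal, and prompt for any condition variable which cannot itself be inferred - can be sketched in a few lines. This is a Python illustration of backward chaining in the same spirit, not the original Pascal:

```python
# Rules: (list of (variable, required value) conditions,
#         (concluded variable, concluded value)).
def infer(goal, rules, facts, ask):
    for conditions, (var, value) in rules:
        if var != goal:
            continue
        if all(resolve(cond_var, rules, facts, ask) == required
               for cond_var, required in conditions):
            return value
    return None

def resolve(var, rules, facts, ask):
    # a condition variable is either already known, inferable from
    # other rules, or must be prompted for - exactly the 'prompt' case
    if var not in facts:
        inferred = infer(var, rules, facts, ask)
        facts[var] = inferred if inferred is not None else ask(var)
    return facts[var]

rules = [
    ([("day", "saturday")], ("rate", "cheap")),
    ([("day", "friday"), ("hour", "8:30_thru_13:00")], ("rate", "expensive")),
    ([("day", "friday"), ("hour", "13:00_thru_21:00")], ("rate", "cheap")),
]
```

With answers of 'friday' and '8:30_thru_13:00', the goal 'rate' resolves to 'expensive', and each prompt is asked only once because its answer is cached in facts.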

The first design change was to use variable length strings instead of fixed length strings. The only reason I can think of for the original program using fixed length strings - given that it compiled with Turbo Pascal 5 - is that it must have originated on the PDP, whose Pascal had fixed length strings. Using variable length strings had slightly more consequences than I had anticipated: the program's parser, in common with all other programming language parsers, has to tokenise the input file, ie split the text into words and punctuation. The tokenisation process changed slightly after I redefined what a token was; this may or may not have been the cause of a bug which I found in the tokeniser procedure, which affected the parsing of a punctuation symbol at the end of an otherwise valid token.

In order to save memory, all the tokens were stored in a hash table, and the data structures used the hash numbers instead of the actual tokens. The original program had used a simple hash function which depended on the first three characters of the token - fine when every token has a fixed length, but liable to give unpredictable results for tokens of fewer than three characters. As one keyword is 'if', it was clear that the hash function had to be replaced. Fortunately I discovered that Delphi has a THashedStringList class which solved all of my hashing problems in one go, so I immediately adopted this class.
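
The failure mode and the fix can be shown with a toy version. Here is a Python sketch (not the Delphi class itself): interning every token in one list gives each distinct token a stable number, with no assumptions about token length; THashedStringList does the same job with a hashed lookup instead of this linear search.

```python
# Sketch: replace the first-three-characters hash with interning -
# each distinct token gets the index at which it was first seen.
def intern(token, table):
    try:
        return table.index(token)
    except ValueError:
        table.append(token)
        return len(table) - 1
```

The two-character keyword 'if' is now as good a token as any other.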

I tested the parser initially on a 'database' called 'Actuary', which consisted of several rules and questions which would determine a person's life expectancy. The parser always choked on this file and I discovered eventually that it was using a keyword undefined in the parser. I then checked this database on the original program which also got stuck, so I realised that this extra keyword was in fact superfluous. When I removed all the offending statements, I ran the database through the program which worked fine. After thinking about it (and checking the dates on the files), I realised that this database had been intended for an earlier version of the Reasoner, and that later versions were able to establish for themselves the values needed (using the above example, the program is able to establish that valid values for 'day' are saturday, sun_to_thurs and friday).

The only other part of the program which gave me a few problems was where the program had to get input from the user - to establish what the values of 'day' and 'hour' should be. This is the part of the original program which was chock full of display code trying to emulate a GUI in a text only environment. Of course, all this code was irrelevant, but I had to be careful that I didn't 'throw the baby out with the bathwater'. As it happens, I did throw out a little too much code.... The part which determined which tokens to display on the screen (saturday, sun_to_thurs and friday) was straightforward, but what I missed the first time round was the value which would be returned after one of the tokens was chosen. I had passed the values to a radiobox and originally returned the index of the chosen token, which in this case would be 0, 1 or 2. No! The value to be returned should be the hash value of the chosen token. Whilst it occurs to me now that I could have recalculated the hash value, I already knew the hash value of each token as I accessed it when populating the radio group. Here is the Delphi screen


I remembered the little code trick which I presented in a column a few days ago about storing an integer along with a string value in a combobox. Here I used it with a radiogroup -
 ques := curobj^.legal;          // pointer to the linked list of legal values
 while ques <> nil do
  begin
   // display the token's text, smuggling its hash number into the Objects array
   rg.Items.AddObject (hashstrings[ques^.n], TObject (ques^.n));
   ques := ques^.next
  end;

 if ShowModal = mrOK
  then with rg do
   // cast the stored object back to recover the chosen token's hash number
   result := longint (Items.Objects[ItemIndex]);

One neat thing which I added to the dialog box was the ability to see the program's reasoning. This existed in the original program too, so I wasn't inventing anything, but when I initially planned how to display this dialog, I thought that pressing on the 'do you want to see my reasoning' button would have brought up another dialog. In the end, I decided to extend the current dialog box downwards in order to reveal another memobox. Below is the screenshot of the same dialog box after the user has clicked on the 'do you want to see' button.


Now that I have the program running in Delphi, what am I going to do with it? Probably nothing. The Amateur Reasoner was like Prolog without the variables, mainly because I couldn't figure out then how to implement the variables and attach values during a recursive process. I also note that in the 'flagship program of the Occupational Therapist', I did something similar, using rules stored in an sql database.

Wednesday, July 21, 2010

It's the 80s all over again

In the 1980s I was very interested in programming languages and artificial intelligence. I had begun programming on a PDP/11 which was situated about 20 km from where I lived at the time; we had a telephone line connecting the computer to a keyboard/printer, and when I wasn't accounting, I was programming. In 1986, if I remember correctly, I was one of the first people I knew to have a PC at home - some kind of IBM clone. This was to be the first of many computers that I would own. I programmed in Turbo Pascal although, as I say, I was interested in other languages and had several programming language diskettes.

It was the August 1985 edition of Byte magazine that captured my attention; this had a picture of a robot arm emerging from an egg on the cover. I began a subscription to the magazine and shortly learned about a fascinating language called Prolog. I bought a book about the language and had to make do with it as I had no implementation. At the same time, Byte was running a series of articles by someone called Jonathan Amsterdam (again, if I remember correctly) who was writing about implementing a Modula-2 like language.

At the time I didn't know enough to know better and so I thought that I would be able to use some of Amsterdam's code and write an interpreter for Prolog, as the book I had made it seem simple. Little did I know. I never did succeed in writing a real pico-Prolog at the time, although in the early 2000s I found the source code to a pico-Prolog and finally implemented it in Delphi.

I did try my hand, though, at writing other kinds of inference engines and completed something called 'The Amateur Reasoner', which was distributed via a shareware library with whom I had contacts at the time. Someone in America sent me his knowledge base for determining which catalyst to use in which petroleum refining process, and I incorporated it along with the several other knowledge bases that I had cobbled together.

This program might well have been inspired by a book which I owned called 'Writing Expert Systems in Pascal' - or maybe it was in Basic. Another source of inspiration would have been an article in Byte called 'Inside an expert system: from index cards to PASCAL program', about which I had completely forgotten until today.

In the last few months, the itch of trying to implement Lisp in Pascal has re-awakened in me; unfortunately I have the time, but I don't know what I intend to do with this even if I do succeed. As a way of relieving the itch, I often scan the Internet looking for suitable programs. Today I persevered more than I normally do and eventually stumbled upon a treasure trove. I immediately went to the AI languages section and found several interesting programs, including the source to a "very tiny Prolog" implementation in Pascal, written by Bill and Bev Thompson. They were the authors of the "Inside an expert system" article to which I referred in the previous paragraph.

The source code states that there was a pair of articles in "AI Expert" magazine which presumably explained how the program worked, but so far I have been unable to locate those magazines which were published 25 years ago. Whilst looking through the treasure trove site, in an attempt to see whether maybe the articles were there in another guise, I came across a program which seemed terribly familiar - yes, it was my Amateur Reasoner program!

I had to download this, primarily to see whether there was anything cringe-worthy there. Apart from a regrettable tendency to mix display code (including inline assembly and a few BIOS tricks) along with the inference engine code, it's actually not bad at all. The code is full of linked lists, which is what we had in the mid-80s, in the absence of anything of a higher order. I don't know whether I would re-implement this today with a series of types based on TObject and put them in lists or rather keep the low level code.

It's fascinating reading the code. I see how I kept the syntax as simple as possible so that I wouldn't need to include a recursive descent parser. I probably knew about such things then but did not necessarily understand them, which is one reason why my Prolog interpreter never got very far.

I accidentally double clicked on the executable file and I'm pleased to note that the program works fine in a DOS box under Windows. Maybe I should re-implement it in Delphi - only 21 years after it was originally written.

Tuesday, July 20, 2010

Alarm clock mp3 player

About 15 years ago I bought an alarm clock/radio/cd player combination. The idea was that the alarm would wake one with the soothing sound of a cd, and the combo worked very well for several years. Once I even had the machine repaired, replacing the cd drive. But after a few years, the cd drive stopped working again, and the radio never worked very well, so since then it's been on my bedside table, serving as a clock. My wife's mobile phone wakes us every morning.

Looking at the limited room on the surface of the table the other day, I realised that it was time to get rid of this combo: the space it was occupying was far out of proportion to its utility. Then it came to me in a flash: I need an alarm clock which doubles as an mp3 player. Instead of an alarm going off, the clock starts playing music. I looked and I looked, but couldn't find anything aside from two clocks which I found on Amazon: they appeared to fill my needs perfectly, but received terrible reviews - basically, they don't work.

How difficult can it be to build such a machine? Surely there must be a demand for such a gadget. So this is the first serious idea which I've ever had for a start-up company: making alarm clock mp3 players. The time was ripe for doing so ten years ago, so the technology required is hardly cutting edge.

In the mean time, I've settled on a traveler's alarm clock, which might not have mp3 capability but doesn't take up much room.

Sunday, July 18, 2010

The in-basket 3 (whole lotta programming)

Another Friday has come and gone, meaning that I'd had another session with the OP regarding the Inbasket exam. I wrote to her during the week saying that the exam was about 80% finished and was in a testable state. Even so, our meeting resulted in requests for several changes: some of these were minor and some were a bit more than minor, but I've finished them all.

We decided that instant messages (IMs) have to be handled "immediately". This means in programming terms that the IMs have to be displayed in a modal dialog box; all the other forms appearing on the screen are non-modal. This wasn't a problem and the program ran fine after the change. I wanted to see what would happen when the clock runs out and there is a modal dialog box still displayed on the screen; the program terminated but a peculiar error message appeared about a query. It took me a while to figure out where this was coming from.

Every second there is a "timer interrupt": this increments the seconds count and decrements the 'seconds left till the end of the exam' count. Should the seconds count be evenly divisible by 60, then a minute has passed; at this stage, the program checks whether there are any emails or IMs to be displayed. It so happens that my test exam has two IMs; to see what would happen if the user did not handle an IM, I let the program run idly. The first IM popped up on time, the second didn't, and when the program finished (the test exam runs for only five minutes - it saves time), there was the error message.

Eventually the penny dropped: when the timer interrupt occurred and a minute had passed, the program tried to display a modal dialog - but if a modal dialog were already being displayed, there would be a problem, as there can't be two simultaneously active modal dialogs in the same program. It became clear that all I needed was a boolean variable to guard the modal part of the timer loop: when the modal dialog begins to execute, the variable is set to true, and when the dialog finishes, it is set to false. The program checks this guard variable before executing the modal dialog, and of course if the variable is true then the modal dialog is skipped. Now that I think of it, if a modal dialog is skipped, then it will never appear because its time will have passed. Hmmmm.
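The guard can be sketched in miniature. This Python toy (a sketch only - the class and method names are my invention, not the exam program's) simulates the once-a-second tick and the boolean guard, and also demonstrates the 'Hmmmm' point: a minute whose dialog is skipped never gets a second chance.

```python
class ExamTimer:
    """Toy model of a once-a-second timer tick with a modal-dialog guard."""

    def __init__(self):
        self.seconds = 0
        self.im_active = False   # the guard variable
        self.shown = []          # minutes whose IM dialog was displayed
        self.skipped = []        # minutes skipped because a dialog was up

    def on_tick(self):
        self.seconds += 1
        if self.seconds % 60 == 0:          # a minute has passed
            minute = self.seconds // 60
            if self.im_active:
                # before the fix this would have tried to open a second
                # modal dialog; now the minute is simply skipped - and lost
                self.skipped.append(minute)
            else:
                self.im_active = True       # the modal dialog opens
                self.shown.append(minute)

    def dismiss(self):
        # the user closes the dialog
        self.im_active = False
```

Running three simulated minutes with the dialog left open during the second shows that minute two's IM is lost for good - exactly the problem noted above.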

Once this problem was fixed, I told the OP that she could now test the program. Of course, she runs 'her' exam and discovers a bug - an IM isn't appearing. I run 'my' exam and the IMs do appear. I check the query which pulls the IM out of the database in order to display it on the screen - perfectly fine. In the real exam, the IM is supposed to appear after five minutes; I spent those five minutes idly fiddling about, only to discover that it didn't appear. To save time, I changed the IM's entrance time to one minute; lo and behold, the IM appears. At this stage, the problem became clear: in the 'real' exam there is an IM and an ordinary message which are both programmed to appear after five minutes. The code needed to retrieve and display the ordinary message was taking more than a second, so the IM code (which I thought I had fixed as described in the previous paragraph) never executed. All that was needed was to turn off the timer before displaying the email and to turn it on again immediately afterwards.

Both these problems remind me of programming TSRs in the long-forgotten DOS age. A better solution than my ad hoc fixes would be to use two different timers: one measures seconds and is responsible for updating the 'time left' display, whereas the other measures minutes and is responsible for displaying messages. This would obviate the need for turning the timer off and on. I still have to figure out what to do if a user takes a long time to handle an IM and in the mean time a new IM is supposed to appear. Presumably I will have to store each undisplayed IM's id number in a queue and handle that queue.
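That queue might look something like this Python sketch (the names are invented; in Delphi it would be a simple list or TQueue of id numbers): each IM that falls due while a dialog is still open waits its turn and pops up when the current dialog closes.

```python
from collections import deque

class IMDispatcher:
    """Queue IMs that fall due while another IM dialog is still open."""

    def __init__(self):
        self.pending = deque()    # ids of IMs waiting to be shown
        self.dialog_open = False
        self.displayed = []       # ids shown, in order of display

    def im_due(self, im_id):
        # called from the minutes timer when an IM's time arrives
        self.pending.append(im_id)
        self._show_next()

    def dialog_closed(self):
        # called when the user dismisses the current IM dialog
        self.dialog_open = False
        self._show_next()

    def _show_next(self):
        # open the next queued IM, but only if no dialog is up
        if not self.dialog_open and self.pending:
            self.dialog_open = True
            self.displayed.append(self.pending.popleft())
```

With this scheme nothing is lost: an IM that falls due mid-dialog is simply deferred rather than skipped.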

I'm sure that this is much more complicated than the OP imagined and maybe it won't be necessary as the user will be interested in handling the IMs instead of ignoring them, because otherwise she won't be able to complete the exam.

I have to note that writing this exam has certainly varied my programming diet and made me face challenges which either I have never faced before or haven't faced for a long time. This is what makes programming so much fun. I mean, when was the last time that I had to use a queue in a program?

I want to pass on one little tip. In my programs, I often load a listbox with values taken from a database table (let's say the name field) whilst simultaneously storing the value's id field. This can be done as follows:

 with qQuery do
  begin
   open;
   while not eof do
    begin
     n:= lb.items.add (fieldbyname ('name').asstring);
     sendmessage (lb.handle, lb_setitemdata, n, fieldbyname ('id').asinteger);
     next
    end;
   close;
  end;

The id number is then retrieved thus:
 id:= sendmessage (lb.handle, lb_getitemdata, lb.itemindex, 0);

The key to this code is the pair of Windows messages, lb_setitemdata and lb_getitemdata. Despite the similarities between a listbox and a combobox, it transpires that there are no equivalent messages (like cb_setitemdata). So what's a jobbing programmer to do? Until recently I was forced to issue a database query in order to retrieve the id for a given name, but now I've found a much better solution:

 combobox.items.clear;
 with qQuery do
  begin
   open;
   while not eof do
    begin
     combobox.items.AddObject (fieldbyname ('name').asstring,
                               tobject (fieldbyname ('id').asinteger));
     next
    end;
   close;
  end;

Accessing the id now becomes
with combobox do id:= longint (Items.Objects[Items.IndexOf(text)]);

I think that it should be possible to replace the 'indexof (text)' with 'itemindex'. The 'AddObject' method allows one to attach an object to each element in the 'items' array; a long integer takes up the same amount of storage as an object reference, so it can be stored by casting the integer to an object, and retrieved by casting the object back to a longint.
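The underlying technique - keeping each displayed string paired with its database id so that no second query is needed - is not Delphi-specific. Here is the same idea as a Python sketch (the row data is made up for illustration):

```python
# (id, name) pairs as they might come back from the query
rows = [(17, 'Alice'), (23, 'Bob'), (42, 'Carol')]

items = []   # what the combobox would display
ids = []     # parallel list, playing the role of Items.Objects

for row_id, name in rows:
    items.append(name)
    ids.append(row_id)

def selected_id(text):
    # the counterpart of Items.Objects[Items.IndexOf(text)]
    return ids[items.index(text)]
```

A lookup such as selected_id ('Bob') goes straight from the displayed text to the stored id without touching the database again.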

Sunday, July 11, 2010

Poland and the Holocaust

My son is in Poland at the moment, doing what might be called 'The Holocaust Tour'. For the past ten to fifteen years, the Holocaust has been a standard part of the Israeli school history curriculum, and groups generally tour Poland whilst in the eleventh grade. During the year, the pupils learnt about the Holocaust (they also learnt about the Nazi party, the putsch, Kristallnacht, the Molotov-Ribbentrop pact, the Wannsee conference et al.) and had extra-curricular meetings about Poland in particular. As part of their preparation, they visited the Yad VaShem Holocaust Memorial Museum in Jerusalem, which is a harrowing experience. I have been there three times (about once every thirteen years), and every visit gets harder.

We managed to speak to him on the phone last night and he seemed cheerful enough. Today is probably the highlight - or lowlight - of the trip: they visit Auschwitz. We offered to phone tonight, but he declined, saying that he would be watching the final of the World Cup. This demonstrates the emotional elasticity of youth - from Auschwitz to football in a few hours.

I was surprised to come across a description, slightly fictionalised, of a visit to Auschwitz in the book "Deaf Sentence: A Novel" by David Lodge. I once found an excellent interview with Lodge about the book, in which he confesses that the final two events (the visit to Poland and the father's death) occurred in real life but in the opposite order. Unfortunately, I lost the reference to that interview. It is interesting to read his comments, considering that Lodge was born Catholic and so not raised in the same cultural stew that I and my son were.

In 1976, my youth movement held what was called an 'ideological seminar' in Ilford, London. This time we concentrated on the Holocaust and even met with a few Holocaust survivors living in the vicinity. At the time, we tended to have the impression that very few people survived, but this seems not to have been the case.

Friday, July 09, 2010

The in-basket 2/A

Whilst reading my previous post, it became clear to me how the duplicate email had been created. There was a letter in the inbox; I double clicked on it in order to open it and see its contents. At a later stage, I double clicked on the letter in the inbox again, thus opening it a second time. This means that I shall have to add some code which checks, when a letter is double clicked, whether it is currently open; if so, the focus should move to that form. As the forms are anonymous (see the excruciating detail in the previous post), this is going to be more than difficult.

I am reminded of Robert Silverberg's humorous tale of time travel, 'Up the Line', in which a slightly careless Time Courier manages to duplicate himself...

The in-basket 2

Last week, after developing what I thought was a very neat solution, I presented my work-in-progress, aka the In-Basket exam, to the occupational psychologist (OP). Whilst she was impressed, it transpires that the direction that I had taken was not the one that she wanted. She insisted on having an interface more reminiscent of an email program, with an inbox and an outbox, where letters move from one to the other. It also became clear that I would need a table of 'dramatis personae', data about the 'people' who sent letters to the examinee to be dealt with.

On Saturday I worked for several hours on the program, getting it to a fairly reasonable state. On Sunday, I worked on the 'reply' section, which was fairly difficult. Unfortunately, demands of the day job and the July heat imposed themselves on me and so I couldn't continue with the work until yesterday evening, when I completed the 'reply' code (it's not exactly how it should be, but it's good enough for now).

This morning I presented the new, improved program once again to the OP and this time got the seal of approval. Even so, a myriad of small changes were needed, along with one major change: instead of the listviews with which I had displayed the messages, I decided to be traditional and use dbgrids. This simplifies certain aspects of the code and makes it more dependable.

I've just finished about four hours of work on the program, in which I seem to have rewritten about 50% of the code. In order to 'celebrate', I entered into the database an exam in which the examinee receives about 15 emails and has to decide in which order to execute the various tasks. The exam program as written isn't totally conducive to this exam; there's no simple way to show the execution order and the emails don't have much content. Our exam is different from others which I have seen in that the examinee has to provide reasons for the actions which he took. I utilised this in order to provide the timetable for my actions.

This specific exam didn't utilise any of the 'reply' code (ie reactions to 'emails' which have already been sent) so I couldn't exercise this part of the code. Otherwise the exam program behaved very well, except for the strange fact that one email appeared twice. I'm not sure how this came about and I'm not sure how I'll debug this.

Whilst developing the program and utilising the 'instant message' feature, I noticed that an instant message was appearing twice. It took me a long time to figure out where this was coming from - as I'm writing what is effectively a real-time program with MDI forms, it's not always clear who is doing what. Eventually I unravelled the mystery: the main program has a timer which fires every minute sending a message to the inbox form, telling it to load any new messages. In MDI programs, the child forms are generally anonymous, so one has to write code like this
for i:= 0 to mdichildcount - 1 do
 mdichild[i].close
When one wants a specific form to execute something - for example, the inbox and outbox forms are unique and can receive messages, whereas the email forms are non-unique and don't have messages - one has to use a typecast. First one checks whether the child form is of the required type and then a typecast is used. I have seen similar code which does not use a typecast, instead using the 'as' keyword; at this stage, the type of the child is known and so it is perfectly safe to use a typecast, which requires less code and is faster than the 'as' approach.

My code:
for i:= 0 to mdichildcount - 1 do
 if mdichild[i] is TSomething
  then TSomething (mdichild[i]).DoSomething;
'As' code:
for i:= 0 to mdichildcount - 1 do
 if mdichild[i] is TSomething
  then with mdichild[i] as TSomething
   DoSomething;
In the specific case of the instant messages, the IM was displayed as a new MDI child form and unfortunately the newest child form becomes mdichild[0], whereas the form which was previously first in the array now becomes second. The timer code was
for i:= 0 to mdichildcount - 1 do
 if mdichild[i] is TInbox
  then TInbox (mdichild[i]).openmessage (minutes);
At the time of execution, mdichild[0] was the inbox and mdichild[1] was the outbox. The first iteration sent a message to the inbox and told it to open any extant messages. When it was the IM's turn, it appeared on the screen. At this stage, mdichild[0] is now the IM, mdichild[1] is the inbox and mdichild[2] is the outbox. On the second iteration of the timer loop, it was looking at mdichild[1] - which is still the inbox! So the IM display code got called again, resulting in two IMs on the screen. The solution was fairly simple:
for i:= mdichildcount - 1 downto 0 do
 if mdichild[i] is TInbox
  then TInbox (mdichild[i]).openmessage (minutes);
Tomorrow I'm going to write the program which displays the user's results. The main report will be in the form of a diary:
00:10 mins: message #1 opened
00:15 mins: message #2 opened
00:20 mins: message #3 opened
00:30 mins: message #3 answered
text of message #3
user's answer
user's reason
00:40 mins: message #1 closed
...etc.

In the test exam, in which all the emails are extant at the beginning of the exam and they have to be replied to in an optimum order, there is no option but to open all the emails at the beginning of the program and read them before one is able to formulate a plan. As it happens, I knew that there were a few emails which could be left unread at first, but I imagine that most examinees won't know this.