Thursday, October 30, 2014

Another good idea

Just by chance, I had two very good ideas regarding Priority on Monday. It doesn't normally work like that, so don't read too much into the coincidence of dates; on the other hand, my brain must have been highly activated that day, which allowed me to see solutions that I had previously missed.

We have a business unit which is ... problematic, to say the least. Their method of working is unique, and their workers aren't exactly the sort of people who take naturally to an ERP program. I know that I've mentioned this business unit before, but at the moment I can't find any of those references (the second half of this post is about them).

This business unit derives most of its income from creating industrial floors. A building will have some kind of floor, but for industry (and also for basketball courts), a special floor covering is required. These coverings are created by mixing together various plastics, lacquers and similar products, then layering them on the existing surface. I don't pretend to understand this and I don't really need to understand it. As far as I am concerned, they are taking raw materials and turning them into a finished product. The problems are that this process takes place at the customer's site and that there are no fixed quantities of materials per square metre of floor.

Several months ago, I overhauled their business processes as executed within Priority in an attempt to make things as simple as possible for them whilst at the same time allowing all the data to be recorded as if they were working in a more standard manner. As mentioned in the previous paragraph, 'production' takes place at the customer's site. This means that raw materials have to leave our warehouse and be transported; this fact has to be represented in the inventory table. The amount of raw material sent may not correspond to the amount needed for the current month; in other words, if a project is going to straddle months, there may not be a correspondence between the month in which the inventory is sent and the month in which the work is performed. The work can also be performed over several months.

Priority allows inventory to be marked with a status; the most common status is 'goods', but inventory can also be marked with a status equivalent to the customer's number. This way, we can tell how much inventory has been delivered to each customer's site. This seemed to be sufficient, but a few days ago a request was made to differentiate between projects: apparently one customer has two separate but concurrent projects, and it's not possible to tell which inventory belongs to which project.

This had me stumped for a while, but then I remembered that Priority allows one to define 'locations' or 'sub-warehouses' within a warehouse. We don't use this capability, which is probably why it didn't spring to mind. But once I considered 'locations', I realised that this was the answer. We have defined one (virtual) warehouse which holds all the inventory that has been delivered to customers' sites; the inventory is differentiated within this warehouse by its status. I added a little twist to this: each inventory movement will now be made not only against this warehouse but also against a specific location (which, naturally, is the project's number). Thus we can always know how much inventory has been delivered for a specific project.
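To make the idea concrete, here is a minimal sketch of the kind of question we can now answer, written in plain SQL with purely illustrative table and column names (Priority's real schema is different): how much of each part has been delivered to each project?
-- Inventory balance per project at the 'customer sites' virtual warehouse.
-- WAREHOUSE, STATUS (customer number) and LOCATION (project number) are hypothetical names.
SELECT LOCATION AS PROJECT, PARTNAME, SUM(QUANTITY) AS BALANCE
FROM INVENTORY_BALANCES
WHERE WAREHOUSE = 'CUSTSITES'
GROUP BY LOCATION, PARTNAME;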

Actually implementing this was slightly more complicated than I expected, although a great deal of the implementation time was actually spent on fixing a bug which I had unwittingly introduced about a month ago.

[SE: 3663; 2, 15, 35
MPP: 574; 0, 1, 6]

Wednesday, October 29, 2014

Importing purchase orders

For the past few days, I've been wondering what I was doing a year ago. I have read my blog entries for that period, but they don't really provide a good understanding. In order to remedy this for the coming years, I have decided to record some of my daily activities in more detail.

On Sunday (a workday here in Israel), I had a discussion with someone at work. She is charged with entering a customer's purchase orders into Priority; this customer can have more than ten orders a day, so this work takes up a fair amount of time. She wanted to know whether I could create an automatic interface for these orders. As this customer resides in the same database space as we do (a story in itself) and uses the same part numbers as we do (for obvious reasons), I can develop a program which will export their purchase orders to a file, then develop another program to import the data from that file. Due to permissions, this interface has to be separated into two parts: in other words, we do not want this customer to have access to our database, and they do not want us to have access to theirs.

I started work on this idea on Sunday afternoon and completed a first version on Monday morning. The export part was not problematic, although it did include a new twist: a previous interface of this kind that I wrote exported comma-delimited files, whereas here I wanted a tab-delimited file; this was easily overcome.

I have been aware for some time that Priority has the capability to read tab-delimited files into what is termed a 'load table', but I had never done this myself. I looked at an example and discovered what I needed to do. Actually, this example led me slightly astray: the file contains two different types of line (headers and details), and I got the impression from the example that I could define two separate 'maps' for reading the input file, depending on the line discriminator. I tried this once and immediately saw the problem: strange data was placed into the wrong fields in the load table. I then redefined the 'map' (and redefined the output format), and this time the data was read from the external file into the load table correctly.

I then wrote a program which would take the data from the load table and import it into the customer orders table; I've done this sort of thing several times in the past so I had no problems. As a result, by Monday lunchtime I had a complete 'suite': a program which exports purchase orders into a file and a program which reads that file and creates customer orders. Each file consisted of one purchase order only.
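To give a flavour of the conversion step, here is a rough sketch in plain SQL; the table and column names are illustrative only (the real work is done through Priority's interface mechanism, not by direct inserts), but the mapping from load table to customer orders is essentially this:
-- Create an order header for each 'H' (header) line in the load table.
INSERT INTO ORDERS (CUSTNAME, REFERENCE, CURDATE)
SELECT CUSTOMER, PURCHASE_ORDER, ORDER_DATE
FROM LOADTABLE
WHERE LINETYPE = 'H';
-- Attach each 'D' (detail) line to the order created for its purchase order.
INSERT INTO ORDERITEMS (ORDERNAME, PARTNAME, QUANTITY, PRICE)
SELECT O.ORDERNAME, L.PARTNAME, L.QUANTITY, L.PRICE
FROM LOADTABLE L
INNER JOIN ORDERS O ON O.REFERENCE = L.PURCHASE_ORDER
WHERE L.LINETYPE = 'D';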

Yesterday, during my evening walk, I had many useful thoughts about this 'suite': primarily, I wanted to extend it so that the file could contain multiple purchase orders. This would mean that the person running the export program and the person running the import program would each need to run their program only once a day, reducing the overhead to a minimum. I also considered how the first person would receive feedback from the second person, specifically showing what the customer order number was for each purchase order.

This morning I started work on those ideas and discovered that whilst the ideas themselves were good, almost all the implementation details were wrong! Adding multiple purchase orders to the file and reading them were not problematic, but I discovered that the conversion of the data from the load table into the customer orders table was over-complicated. After simplifying this stage, I was able to load all the purchase orders for one day in a minute!

The program includes sending a report by email, showing the results of the interface. I have done similar things in the past, but the report produced has generally depended on a single datum (for example, an order number) which had been stored in a special location, whereas here I wanted to send a report containing multiple values (order numbers). The solution had come during yesterday evening's walk: link the report to the load table (the interface which loads the data into the customer orders table updates the load table with certain critical data). This idea worked perfectly.
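A sketch of that idea, with the same illustrative names as above: the conversion step writes the resulting customer order number back into the load table, and the emailed report simply reads both numbers from there.
-- Record the created order number against each purchase order in the load table...
UPDATE LOADTABLE
SET ORDERNAME = (SELECT O.ORDERNAME FROM ORDERS O WHERE O.REFERENCE = LOADTABLE.PURCHASE_ORDER)
WHERE LINETYPE = 'H';
-- ...so that the emailed report can show the two numbers side by side.
SELECT PURCHASE_ORDER, ORDERNAME
FROM LOADTABLE
WHERE LINETYPE = 'H';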

But I also discovered that I had thrown a little part of the baby out with the bath-water: I had no means of checking whether a purchase order had already been converted into a customer order (something which existed in the original, one-order version). It took a while to figure out a solution for this, but of course I did. At the moment, I'm not convinced that I have chosen the best (or the recommended) method of doing this, but at least it works. This reminds me again of Alan Turing not performing a literature search.
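One possible way of adding such a check - not necessarily the method I chose, and again with illustrative names - is to skip any header line whose purchase order number already appears on an existing order for that customer:
-- Only purchase orders that have not yet been turned into customer orders.
SELECT L.PURCHASE_ORDER
FROM LOADTABLE L
WHERE L.LINETYPE = 'H'
AND NOT EXISTS (SELECT 1
                FROM ORDERS O
                WHERE O.CUSTNAME = L.CUSTOMER
                AND O.REFERENCE = L.PURCHASE_ORDER);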

Hopefully this new system will go into operation on Sunday.



On a different subject, late last night I received the first response from my supervisor regarding my literature survey; I only saw the letter this morning. The letter contains only highlights; detailed comments will be in the draft which he is sending me by courier. This afternoon I will write a non-specific response. At least this activity is moving again, after having been on hold for the past ten days.

Tuesday, October 28, 2014

Alan Turing

The other day, I started rereading Andrew Hodges' biography of Alan Turing, which I must have bought at least twenty years ago. This came about because last week I was reading Douglas Hofstadter's "I am a strange loop", which is about Gödel numbers (as much as it is about anything), and I remembered that Turing studied Gödel's work before publishing his famous 'On computable numbers' paper. Once started on Turing's biography, I became interested in his short period at Princeton, which is when he completed his doctorate (now do you see the connection?). One of Turing's problems was that he didn't perform a literature survey: it happened at least twice that he independently discovered something that someone else had already published. Had he been more diligent, this wouldn't have happened. The book opines, though, that had he indeed performed the literature survey, he might not have achieved the insights which he did in fact obtain.

The fact that Turing often started from first principles, whilst ignoring prior work in the same area, paid off handsomely during his work on the Enigma: his fresh view allowed approaches to decoding the material which had never occurred to the Poles in their pre-World War II work on the Enigma. To be fair to the Poles, they never saw Enigma in full use by a complete military machine, so some of the flaws would have been hidden from them.

This morning, I discovered that Benedict Cumberbatch ('Sherlock', 'Tinker Tailor', etc.) is portraying Turing in a film based on the biography: The Imitation Game. I can hardly wait to see this film, although I do wonder how Turing's thinking (which is a lonely occupation at best) is going to be portrayed.

If I recall correctly, Turing had already left the Enigma project by the time that Robert Harris' book "Enigma" takes place. The wiki states that "[Tom] Jericho is a doctoral student of the mathematician Alan Turing at a Cambridge college" before the war, although in real life Turing never had doctoral students (he occasionally tutored undergraduates but was never a doctoral supervisor himself). Turing moved on when the decoding work on Enigma became 'industrial' and less in need of wizardly breakthroughs; he went on to work on building 'Delilah', an invention far ahead of its time, which allowed digitisation of speech (or, as we would call it now, both an A/D encoder and a D/A decoder).

Then, of course, there was the computer, whose origins lie in his pre-war experience of building a differential analyser from gear wheels. Unfortunately, the British bureaucracy - which had been avoided to a certain extent during the war years - returned with a vengeance, and Turing never realised, or was never allowed to realise, his full potential.

Friday, October 24, 2014

User resistance

There was a nice confluence of reality and theory the other day. During the week, I had read a few papers on 'user resistance' which I want to include in my literature survey. UR is inversely related to 'perceived usefulness' (or 'perceived utility'), which, as the literature puts it, is "a surrogate for system success". In other words, if a user perceives a function to be useful, then the user will use this function and system success will be enhanced; if the user perceives that the function is not useful (or by adding overhead, even detrimental), then the user's resistance will grow and system success will not be enhanced.

In the real world, someone wanted a piece of customer metadata (the customer's business sector) to become compulsory: when a new customer card is opened, the operation cannot be completed without defining a business sector. As this field has not been compulsory until now (and in fact, very few values had been defined), no one had ever bothered filling it in. Entering a value into this field - especially when the customer is being defined and the information is not necessarily at hand - is going to be perceived as irrelevant overhead. Overhead = no perceived usefulness = increased user resistance.

The person who wants this field to be compulsory is going to have to write a letter to all the users who define customers, explaining what the informational value of this field will be (how it will be used in the future) and why it is important that 'real' values be entered (because one defence mechanism which people have developed in order to handle compulsory fields is to enter any value - but not necessarily the correct one). Hopefully, we will be able to raise the level of perceived usefulness and thus improve system success.

With regard to the literature survey: the completed document is 93 pages long and contains about 41K words. It turns out that the entire thesis is limited to 80K words, so some serious editing will need to be done on the survey. I know that I included far too many direct quotes from papers, so it should not be too difficult to elide the extraneous material. My supervisor is on the job.

Note new values for Stack Exchange and Musical Practice and Performance below!
[SE: 3628; 2, 15, 35
MPP: 574; 0, 1, 6]

Monday, October 20, 2014

The search for serendipity

During the final days of working on the first draft of my DBA literature review (that is, last Friday), I found two papers which initially seemed interesting but of doubtful relevance. One [1], a scholarly paper on the subject of serendipity, was downloaded just for the fun of reading it. The authors describe serendipity as "making discoveries by accident and sagacity of things which one is not on quest of" [is there not a redundant 'of' in that definition?]. This subject has absolutely nothing to do with my research, but it's interesting, as I've noticed that good ideas frequently seem to come from random events.

The second paper is entitled "Cognitive style factors affecting database query performance" [2], and of course was intended for the section on cognitive style. The paper discusses how cognitive style affects the accuracy of SQL statements; an example appears on page 260:
SELECT ITEMSB.ITEMNO, RECEIPTSB.RECREPNO
FROM ITEMSB, RECEIPTSB
WHERE ITEMSB.ITEMNO = RECEIPTSB.ITEMNO
AND RECEIPTSB.PAYDATE < (RECEIPTSB.RECDATE + (0.5 * RECEIPTSB.TERMDAYS));
This isn't quite the syntax which the authors use as they neglected to use table identifiers for certain fields. As written, one can see that this statement uses implicit joins (otherwise known as SQL-89 syntax) as opposed to the clearer explicit join (aka SQL-92 syntax); see here for further discussion of this topic. Looking at it now, there is no need whatsoever to include the ITEMSB table in the query. But I digress.
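For comparison, here is the same query written with an explicit (SQL-92) join; and since ITEMNO is also available in RECEIPTSB, the ITEMSB table could be dropped altogether:
SELECT ITEMSB.ITEMNO, RECEIPTSB.RECREPNO
FROM ITEMSB
INNER JOIN RECEIPTSB ON ITEMSB.ITEMNO = RECEIPTSB.ITEMNO
WHERE RECEIPTSB.PAYDATE < (RECEIPTSB.RECDATE + (0.5 * RECEIPTSB.TERMDAYS));
-- or, dropping the redundant table:
SELECT RECEIPTSB.ITEMNO, RECEIPTSB.RECREPNO
FROM RECEIPTSB
WHERE RECEIPTSB.PAYDATE < (RECEIPTSB.RECDATE + (0.5 * RECEIPTSB.TERMDAYS));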

At first, I thought that I would write a few lines about the paper, stating that Priority hides SQL from the end users ("I wouldn't know an SQL statement even if it bit me in the finger") and thus awarding the paper a very low significance, but it suddenly struck me that if I ignore the SQL part and treat the research described in the paper as an examination of how people solve database problems at work ("show all the clients who didn't purchase anything in 2014"), then the paper has high relevance. I wrote about this before, when someone needed data about products which could only be retrieved by accessing three different screens. Cognitive style will play an important part in how such data is retrieved and how accurate it is. Of course, I wrote a program/developed a report (using SQL) to do the work, so that users will be able to access accurate data whatever their cognitive style. In the end, I wrote nearly two pages about this paper, which suddenly became extremely relevant.
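As an aside, the example task above is a short query for someone who thinks in SQL - here sketched against an illustrative schema (CUSTOMERS and ORDERS are not Priority's real table layouts):
-- Clients with no customer orders dated in 2014.
SELECT C.CUSTNAME, C.CUSTDES
FROM CUSTOMERS C
WHERE NOT EXISTS (SELECT 1
                  FROM ORDERS O
                  WHERE O.CUSTNAME = C.CUSTNAME
                  AND O.CURDATE >= DATE '2014-01-01'
                  AND O.CURDATE < DATE '2015-01-01');
The point of the paper, of course, is that users with different cognitive styles will arrive at such a formulation (or fail to) with very different degrees of accuracy.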

I see that my writing style has been strongly influenced by the literature review; I even include references.



[1] Foster, A. E. and Ellis, D. (2014): "Serendipity and its study", Journal of Documentation, 70(6), 1015-1038.
[2] Bowen, P. L., Ferguson, C. B., Lehmann, T. H. and Rohde, F. H. (2003): "Cognitive style factors affecting database query performance", International Journal of Accounting Information Systems, 4(4), 251-273.

Sunday, October 19, 2014

Literature review: first draft completed

I wrote ten days ago that "I've been using this time to progress on the literature review, and indeed my aim is to complete a draft by the end of this holiday period which I can then send to my supervisor". Yesterday was the end of that holiday period, and at exactly 12:15pm, I finished reviewing the final paper which I wanted to include in the review.

The review opens with a general, explanatory section which covers topics such as ERP history, the future (mobile devices), production strategies, misfits and training. Some papers are quoted and some are reviewed critically; it's difficult to assign a count for the number of papers which were reviewed for this section, but at least 30 papers were referenced.

From here on, the picture is clearer. There are 7 previous literature surveys reviewed, 24 ERP implementation case studies, 6 papers on computer self-efficacy, 9 on end user computing satisfaction, 7 on cognitive fit, 3 on user ownership, 4 on perceived organisational support and 9 on cognitive style - a total of 69 papers. The whole review weighs in at 93 pages. Obviously, I wasn't trying to be brief.

Today I'll send it off to my supervisor then wait a week for him to wade through it and produce his feedback. I would have preferred feedback after finishing each section, so that I could correct any systematic errors, but he wanted to receive a completed object. It's clear to me that this is only a first draft; there are probably too many direct quotes and the supervisor may feel that there are sections which need bolstering, especially in what might be termed 'management impact'. There are references to this scattered around, but maybe they need to be collected in one place.

Now I need a week's holiday in order to recuperate. For fun, I printed a paper which I found on serendipity; I've only read the beginning of it but it seems interesting.

I note that several cited papers came from one journal, Computers in Human Behavior. I was thinking that maybe I should try reading this journal as each issue is published - just for fun!

Tuesday, October 14, 2014

Some days you're the pigeon and some days you're the statue

Amongst Dilbert's words of wisdom can be found the aphorism: Accept that some days you're the pigeon and some days you're the statue.

Two days ago, I was the pigeon. I wrote up seven or eight papers on 'user satisfaction' for my doctoral literature review, in the course of which I noted that there seem to be two different scales of 'user satisfaction', one of which is more pertinent than the other. These papers included one written by academics from Ben Gurion University, here in Israel, which would be extremely relevant if more data were presented. Although I found the paper via Google Scholar, no reference is given, which means that I don't know the paper's publication year (and the entry in my bibliography will not be accurate either). I sent an email to the professor who is one of the co-authors, who passed it on to the first author.

I received two replies, one from the first author and one from the professor. The first author explained that the paper was pre-publication, which explains the lack of a reference and means that the publication year will be 2014 at the earliest. She is currently abroad but will contact me (or I will contact her) in another few weeks, and she has promised to share some of her data. The professor too is abroad; she asked whether I was "looking for a position in Israel". This question can be understood in several ways; at the moment, I don't think that I am interested in an academic post at a university, but who knows how I will feel in another two years.

I also wrote up the canonical paper about cognitive fit, which is described thus "Since humans are limited information processors, more effective problem solving will result when the complexity in the task environment is reduced. In this paper, the notion is developed that complexity in the task environment will be effectively reduced when the problem-solving aids (tools, techniques and/or problem representations) support the task strategies (methods or processes) required to perform that task. This notion is termed cognitive fit. Problem solving with cognitive fit results in increased problem-solving efficiency and effectiveness" (Vessey, 1991).

I have already found a few more papers on cognitive fit, but one seemed to be about cognitive fit in software development, which isn't very relevant. The second paper is about the effect of cognitive fit on decision making, which is very relevant; I started reading it, but I was too tired to finish. I did note that important quotes came from another paper, which I found and printed. Reviewing this paper ("Visual representation: implications for decision making" - Lurie and Mason, 2007) is the major job for today and I await it with anticipation.

Yesterday, though, I was definitely the statue. Although theoretically I am on holiday for a week and a half, I had been asked early last week if I would come to Karmiel on Monday for a meeting about a wood cutting optimisation program that we are trying to implement. No problem, I replied. After travelling for three hours (two hours by train and then another hour by car), I arrived for a two and a half hour meeting which can only be described as acrimonious, advancing our understanding of the program by perhaps one millimetre. Almost every participant in the meeting (there were five of us) became indignant or angry with at least one other participant. I developed a headache on the drive back from Karmiel to the railway station, which only intensified during the train rides home. As it is holiday time, the trains were packed, and some people do not know how to speak quietly on their mobile phones. I should have worn my ear protectors - there is a reason why they are always in my pocket! Even after I came home, rehydrated and showered, I couldn't get rid of the headache and so went to bed at about 8:30pm, thankful that the day was over but wishing that it had never been.

The satellite television company will be broadcasting the film "Ender's Game" on Friday, so I read the book (and some of its parallel version, "Ender's Shadow") during the train rides. I first read the book many years ago, so of course I was familiar with it. Even without having read anything about the film, one wonders how such a book - which is strongly dependent on Ender's thoughts as well as on zero gravity - could be translated into a film. The reviews are mixed, so I don't have any expectations. The above-mentioned paper (Lurie and Mason) puts this as "differences between expectations and the delivered product are likely to be lower, thus increasing post-purchase satisfaction".

Enough blogging - it's time to read about visual representation of data.

Friday, October 10, 2014

Continuing to watch the weight

Friday morning is weighing in time. As opposed to last week, when I was surprised that my weight had dropped to 78.4kg, this morning I was pleased to see that my weight had dropped even more, down to 77.7kg. As far as I can see, there are two contributing factors:
  1. Last Friday night/Saturday was the Yom Kippur fast. We did have a meal - spaghetti and cheese - at 5pm on Friday afternoon, but this might well provide fewer calories than my normal Friday evening meal. I chose spaghetti several years ago for this meal as it breaks down slowly and provides energy for a long time. This is a tip which I picked up from basketball players. We also had something to eat after the fast - innumerable cups of tea, yoghurt and a slice of bread with tuna - but again, this is comparable to a normal weekday evening meal. In other words, I skipped breakfast and lunch that day, thus saving a large number of calories. No snacking, either.
  2. Since Wednesday, I've been at home on holiday. This means that I can get up whenever I like (and most mornings that's still 5:30am) and I don't have to be at work at 6:50am. I've been using this time to take the dog for a long walk - this morning I walked 2.63 km in just under half an hour, burning 180 kCal. When combined with my evening power walk (about 3.8km in 35 mins, around 300 kCal), this extra exercise makes a huge contribution.
I can keep up the long morning walks for another week; it will be interesting to see what happens when I return to work.



Over the past few weeks, I've been watching another excellent British TV series, 'Last Tango in Halifax', whose basic premise (a pair of pensioners meeting via Facebook after having been separated for sixty years) is based on the real-life experience of the show's writer. Unfortunately, I missed the opening episode; there were some scenes 'from previous episodes' which filled me in on what happened, but of course these are no replacement for the real thing. Presumably I'll catch the opening episode when the series is inevitably shown again.

Yesterday I saw the final episode of season one; I am hoping that season two will be screened without delay. As far as I have been able to ascertain, a third season has been commissioned but has yet to be filmed.

Thursday, October 09, 2014

Literature review: getting down to business

I see that I last wrote on this topic just over two weeks ago. There was a four day holiday for the Jewish New Year and yesterday I started a ten day holiday for the 'Festival of Booths' (as it is quaintly called in English). I've been using this time to progress on the literature review, and indeed my aim is to complete a draft by the end of this holiday period which I can then send to my supervisor.

I was fairly bogged down at the start with the first section, ERP history and future, which has developed into a grab-bag of subjects. This has been written more in the style of the research proposal with many direct quotes from papers, but there is also a certain amount of discussion and criticism.

The second section deals with previous literature surveys: the research proposal mentioned two such surveys but ignored a third which I had printed but decided not to use. For the literature review, I greatly extended my coverage of the first two surveys (adding critical comments), included the third then found two more surveys which I read, included and reviewed.

The meat of the literature review is the third section, devoted to case studies of ERP implementations. This started very slowly, but once I got going, I was completing one or two reviews a day, as well as finding more material for future review. This section was completed (at least, for the time being) yesterday with several reviews added: it now comprises reviews of 23 different papers along with a long conclusions section. There was actually a 24th paper, but I decided not to include it as it wasn't very recent and held virtually no relevant information.

I have now embarked on the fourth section of the review, which will discuss psychological factors. This is going to be problematic for me as I imagine that the papers will generally be more theoretical than practical. To get started, I reviewed two PhD theses about self-efficacy which I had already mentioned in the research proposal.

When I wrote the research proposal, I generally read only the introductions to all the papers and ignored the methodology; this time around, I am ignoring the introductions and reading the methodology sections closely. Thus one of the above theses was fairly easy to review, as it discussed an experiment held in a university, whereas the second was very hard, as it was based on interviews held with employees and was structured in a manner not amenable to extracting facts about self-efficacy.

I have printed an article on self-efficacy which was published earlier this year; I'm going to read this shortly and hopefully will review it today. I have just found another recently published paper on self-efficacy, which I will print tomorrow and then review. I find it very difficult to read papers when they are displayed on the screen: it's much easier to read a printed copy, which allows one to go back and forth within the paper. On the other hand, I also save everything, as it's easy to find a passage, copy it from the original then paste it into my work (always acknowledging the source!).

I received an intriguing letter from the university a week ago: someone in Israel wants to enroll in the DBA programme and asked whether there were other students here. I contacted him and tried to explain as much as possible, trying also to gauge what his motives are. The doctorate is a long and lonely path which requires time and commitment, and I wanted him to make sure that he had the necessary time and commitment. I know that he sent off his application form the other day but I doubt that I'll be hearing from him for a few months until the new semester starts. It will be nice to mentor someone.

[MPP: 524; 0, 1, 6]

Friday, October 03, 2014

Watching the weight (once again)

I weigh myself every Friday morning after I get up. Over the past few weeks, my weight seems to be constantly fluctuating: one week, I've lost 400 grams, the next week I've added a kilo, and so on. This has been mildly depressing as I seem to be generally gaining weight, despite the generous amount of exercise that I've done over the past few weeks and the little that I've eaten.

This morning I was very happy when I saw that my weight had dropped to 78.4 kg, which is the lowest it's been in years. But my happiness was tempered by the perplexing discovery that apparently I had lost over a kilogram in the past week - perplexing since I have stopped swimming and I didn't walk three nights in the past week. While walking the dog, I considered the possible reasons for these fluctuations. My first thought was that I was misreading the digital scale - possible with the way my eyesight is at the moment; I could be confusing a 9 with an 8.

But when I got home, I decided to try an experiment: I weighed myself in one room then I weighed myself in another room. The weight should be the same, no? No! There was a 600 gram difference between the two measurements! How can this be? The room with the lower measurement is part of the original building, whereas the higher measurement was made in the room which was added as an extension ten years ago; this extension is not actually connected to the original building (there are subsidence problems which are due to the house being built on a hill). I suspect that all the lower measurements were made with the scale in the original building, and the higher measurements in the extension.

Now that I am aware of this, I am always going to weigh myself in the original building. From my work as an analytical chemist, I know that scales should not be moved - but those were laboratory scales with a different order of sensitivity: they can measure milligrams, but their capacity is probably no more than 10 grams, certainly not kilograms. The general idea still holds, though: always weigh oneself in the same place, at the same time. As far as my weight is concerned, the absolute measurement is not too important; it's the change from week to week that counts. In other words, it doesn't matter too much whether I weigh 78.4 kg or 80.0 kg at the time of measurement; it's more important to know that during the week I lost 600 grams (or whatever).

The digital scale in the clinic is supposed to be the most accurate, but the only way to get a true reading would be after a blood test (prior to which I wouldn't have eaten or drunk anything for twelve hours) and naked. The first condition is easy to meet, but the second is becoming more difficult as the days grow colder and I wear more clothes.