Tuesday, January 13, 2015

Split questionnaire design

I wrote yesterday about finding a paper that describes the split questionnaire design; there were several papers I could have chosen, but the one I chose seemed fairly representative and understandable. Yesterday I read only the opening page; my eyes glazed over when the paper started discussing heavy statistical analysis.

Today I continued reading the paper: I skipped the theoretical introduction and moved on to the actual experiment which the authors performed. Serendipity strikes: they used a questionnaire consisting of 65 questions grouped into 9 blocks; I had 68 questions grouped into 9 blocks (or sections, as I called them). There are two possible approaches to splitting the questionnaire: 'between block' designs and 'within block' designs.

As I understand the above, I have chosen a 'between block' design: each questionnaire contains complete blocks. A 'within block' design means that each questionnaire contains some questions from each block. As it happens, I was discussing this with the occupational psychologist this morning (prior to reading the paper): she suggested that I use what the authors call the 'within block' design. 
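The difference between the two designs can be sketched in a few lines of code. This is only an illustration of my understanding, not anything from the paper: the block names and question counts below are made up for the example.

```python
import random

# Hypothetical blocks: nine blocks of seven questions each, labelled A1..I7
# (invented for illustration; my real questionnaire has uneven block sizes).
blocks = {b: [f"{b}{i}" for i in range(1, 8)] for b in "ABCDEFGHI"}

def between_block_split(blocks, n_forms=2):
    """Between-block design: each form receives complete blocks."""
    names = list(blocks)
    random.shuffle(names)
    forms = [[] for _ in range(n_forms)]
    for i, name in enumerate(names):
        forms[i % n_forms].extend(blocks[name])  # whole block goes to one form
    return forms

def within_block_split(blocks, n_forms=2):
    """Within-block design: each form receives some questions from every block."""
    forms = [[] for _ in range(n_forms)]
    for questions in blocks.values():
        shuffled = questions[:]
        random.shuffle(shuffled)
        for i, q in enumerate(shuffled):
            forms[i % n_forms].append(q)  # block is spread across the forms
    return forms
```

With the between-block split, every block stays intact on exactly one form; with the within-block split, every form samples every block.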

I prefer the 'between block' design as I achieve a higher degree of accuracy for each block: if there are eight questions in a block and each question has five answers, then I can expect a total ranging between 8 and 40. If there are only four questions, then the total can range between 4 and 20, which halves the range of possible totals and so halves the resolution of the measurement. If there is a block with only three questions (and I have such a block), then reducing the number of questions will seriously damage the accuracy.
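The range arithmetic above is easy to check mechanically; here is a trivial helper (my own, purely illustrative) for Likert-style blocks:

```python
def score_range(n_questions, scale_min=1, scale_max=5):
    """Minimum and maximum possible total for a block of n_questions
    questions, each answered on a scale_min..scale_max scale."""
    return n_questions * scale_min, n_questions * scale_max

print(score_range(8))  # (8, 40)
print(score_range(4))  # (4, 20)
print(score_range(3))  # (3, 15)
```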

On the other hand, the authors write: "feelings of boredom [may] primarily [be] caused by repetition of the relatively similar questions within blocks that measure similar constructs. Boredom occurs less in the within-block SQD because there are [fewer] of these similar questions within each block."

Again, prior to reading the paper, I had become aware of something similar to this problem. I had assigned to one questionnaire the blocks about spreadsheet efficacy and learning style, and to the other, the blocks about training, ownership and satisfaction. This was done very much ad hoc late last night, but afterwards I began to have misgivings: one questionnaire had blocks which are strongly connected with Priority whereas the other had blocks which are totally unconnected. Thus I have decided to revise the questionnaires so that one has the training, ownership and learning style blocks whereas the other has efficacy and satisfaction. I am also going to standardise the order of the blocks within the questionnaires, for my own benefit.

I did not see any mention in the paper of 'compulsory' (common to all) and 'optional' blocks within the questionnaire so this seems to be my personal contribution. 

My understanding has deepened after having read a few more papers on SQD. What I was calling 'compulsory blocks' are normally named 'core components' whereas the 'optional blocks' are non-core components. Apparently, a more normal design is that there exists one questionnaire which is composed of the core components and all the non-core components; each respondent is directed to answer the core components along with randomly assigned non-core components. My twist is to create two questionnaires, each consisting of core and non-core components, where the respondent has to answer all the questions. 
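The contrast between the usual scheme and my twist can be sketched as follows. This is a minimal sketch with invented block names (they roughly follow my own blocks, but the names and counts are assumptions for the example):

```python
import random

# Hypothetical block names for illustration only.
CORE = ["demographics", "spreadsheet_use"]
NON_CORE = ["efficacy", "learning_style", "training", "ownership", "satisfaction"]

def assign_standard(n_non_core=2, rng=random):
    """Usual SQD: every respondent answers the core components plus
    randomly assigned non-core components."""
    return CORE + rng.sample(NON_CORE, n_non_core)

# My variant: two fixed questionnaires, each consisting of the core
# components plus a fixed set of non-core components; every respondent
# answers all the questions on whichever form they receive.
FORM_A = CORE + ["training", "ownership", "learning_style"]
FORM_B = CORE + ["efficacy", "satisfaction"]
```

In the standard scheme the non-core mix varies per respondent; in my variant the only randomness is which of the two fixed forms a respondent gets.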

One might say that this will lead to problems when storing the data and then analysing them, but as I will be writing a dedicated program for storing the results and analysing them (or at least, outputting data which will be analysed by a statistical package), I'm not worried about this. I have a high level of database self-efficacy!
