Earlier this year, in June, I turned in my PhD dissertation at Copenhagen Business School (available here). Some months later, the assessment committee sent a five-page review letting me know I could defend the dissertation.
I agree with most of the arguments in the assessment. But I have a hard time judging how critical it is compared to other PhD assessments. In case anyone else is lacking yardsticks to compare against, I am publishing the assessment in its entirety here.
Assessment of PhD thesis handed in by Thomas Høgenhaven
In August of 2013, Thomas Høgenhaven (TH) handed in his thesis at the Department of IT Management with the title
Open Government Communities. Does Design Affect Participation?
‘The aim of the thesis is to contribute to our knowledge of how to build sustainable on-line communities in the public/government sector. The purpose of this is to make governments and public sectors more collaborative, participatory, transparent, and technology driven. If successfully implemented, such open government initiatives can improve democracy, efficiency, and innovation’. The committee finds that this is a good characterization of the aim of the thesis.
TH applies social psychology theories to formulate a series of hypotheses. These are tested through four experiments on K10, a Danish open government community for people involved with one of two public benefit programs, “early retirement pension” and “flexjob”.
The effects of the experiments are rather mixed, but mostly negative, and it is not easy to make such an on-line community successful by altering the design. TH develops a framework which could be useful for understanding why participation in on-line communities succeeds or fails. The framework could also be useful for further academic work in the area.
Evaluation of thesis structure and main lines of arguments
The thesis is well structured. The first chapter is an introduction outlining the field of enquiry, and it identifies the four main research questions and hypotheses for the four experiments. The second chapter is the literature study regarding participation in open government, looking at criteria like efficiency, transparency, and government culture. Chapter three is the case description of K10, which is not government but privately owned. As such it does not represent the ideal ‘open government on-line forum’, but given that it is owned by a single person, it allows TH to carry out experiments that would not have been possible at a ‘genuine’ government on-line community.
The fourth chapter discusses the theoretical foundation, where TH has chosen to use social psychology. These are not the most commonly applied theories in IS research, but social psychology is one of the reference disciplines of IS. We will discuss this point below.
Chapter five has an excellent account of the ontology and epistemology of the thesis. It discusses in detail the nature of the specific types of experiments conducted by TH, which are characterized by taking place not in a laboratory but in real life.
Chapters 6–9 are detailed descriptions of the four experiments based on the pre-formulated hypotheses and an analysis of the results.
Finally, in chapter 10, TH presents a cross-analysis of the four experiments and discusses the implications for open government practices and for the field of Information Systems. Here he also introduces his theoretical contribution, called ‘Lean Experimentation’. Chapter 11 concludes the thesis.
Overall, the committee finds the thesis well structured, and it follows a natural line of argument with some minor exceptions that will be discussed below.
Thesis results and contributions, strengths and weaknesses
1. The positioning of the thesis in the field of research
TH himself writes in the preface that ‘Especially in terms of research design and methodology ….. this research is closer to the quantitative American Human Computer Interaction (HCI) tradition than it is to the qualitative Scandinavian Information Systems (IS) tradition’. The committee agrees with that.
But the research tradition employed is one thing; which community one is addressing is another. And here it is not entirely clear which research community TH is targeting: sometimes it is stated that the contribution is to eGovernment, sometimes to IS, sometimes (maybe indirectly) to HCI or even social psychology communities. The committee finds that the positioning of the main contribution is not defined clearly enough.
Furthermore, TH’s approach is problematic in that he uses HCI literature to ground the argument in the IS field. Following the arguments of Jonathan Grudin, there are in fact three different schools of HCI: human computer interaction (with origins in cognitive psychology), computer-human interaction (with origins in social psychology and sociology), and HCI in IS (with origins in the management sciences). In the dissertation, the communities are mixed, and some very strong statements (e.g. that HCI is mainly qualitative or quantitative) are made. This makes it very difficult to build convincing arguments and make contributions, especially when it comes to who would benefit from the findings.
Another issue is the relationship between IS and HCI. At CBS, HCI is seen as a sub-discipline within IS, but it seems that TH sees the two disciplines as being on the same level (pp. 34–37). This can be and is extensively debated in many situations, but we believe it is fair to say that IS has many more theories and insights.
2. The organization of the thesis
The thesis is well structured and the different parts are positioned well in relation to each other. The arguments follow a natural progression, building up to the research results. The insights are well presented.
3. The use of extant literature
TH has conducted a very extensive literature review of more than 400 referenced publications. This certainly meets expectations. Furthermore, the literature is in general well treated, and he draws on many different bodies of literature.
But it can be argued that some of the literature review is slightly off topic, and that some of the findings from the experiments are not well connected to the literature. For example, it would have been beneficial if the literature had been used to understand and argue for how and where the experiments contribute to the Lean experimentation process. This is only superficially stated in the text (chapter 10.4), without much reference to the literature.
Causal relationships between the literature, the experiments, and the Lean experimentation process are not always evident. In general, the committee finds that the link between theory and hypotheses could have been made more explicit; it is sometimes up to the reader to make the connection. But by and large, the committee is not challenging the connections.
4. The choice of research questions
The choice of research questions follows naturally from the description of the problem situation in chapter one. The four research questions deal with the following issues:
- Whether receiving social comparison information has an effect on subsequent participation in K10
- How goal setting affects participation in K10
- How knowledge of other users’ gratitude for previous contributions affects future contributions to K10
- How the benefit of contribution affects subsequent behavior on K10
These research questions are certainly relevant and justified based on the literature discussed in the theory chapter.
5. The justification of research methods
In spite of extensive efforts to define and relate constructs, several key concepts remain unclear. What is a government community? What is participation under the ‘open government’ definition (p. 52)? What is the ‘open government partnership’? Etc. Although key concepts are defined, some of the wording of the key concepts raises further questions. In particular, what constitutes participation could have been better elaborated.
6. The validity of data
TH has discussed different validity constructs at length, and after careful examination he defines and indeed uses the three constructs of internal, construct, and external validity. This section is good.
The committee also finds that the data collected meet acceptable levels of validity by addressing the full community of K10. One of the samples is really too small for the statistical treatment, but we do not consider that a major flaw.
7. The execution of the analyses
One critical issue about the analysis concerns the experiments, as all four experiments are carried out in the same on-line community, K10. TH uses all types of experiments (see Donald T. Campbell), but the committee is not totally convinced that there has been ‘total’ control in all these live experiments, or whether there might have been some interfering variables. The most important problem is that all experiments more or less failed (except the self-efficacy experiment, whose result is obvious to anyone who has ever looked at webshops). Failed experiments are not necessarily a show-stopper if the failure is caused by the respondents/context. But the committee is not convinced that the failures are caused only by the respondents/context and not by some poor research design. Some examples of these could be: (1) the effect of the holiday period in experiment one; (2) the self-selection bias or goal-setting problems (the wording could have been: how many more posts will you write) in experiment two; or (3) why the categorization described on p. 317 was not utilized. Fortunately, one could argue, these drawbacks are discussed critically in the appropriate chapters (6-9). Unfortunately, they are utilized quite uncritically in the Discussion and Conclusion sections. This means that the value of the Discussion and Conclusion is reduced, as the earlier findings are used as a foundation without regard for their limitations.
8. The robustness of the conclusions
In general, TH shows a high level of competence in the conduct of the experiments, following the research tradition of the Cornell eRule initiative. The committee finds that the experiments are well carried out, with the reservations mentioned above.
The robustness of the results as regards an open government on-line community like K10 is high, but K10 is a very special community, owned by a single individual and not a government entity. Accordingly, the results are not directly applicable to other government on-line communities.
When looking at the specific results, unfortunately, the specific hypotheses about the (1) impact of social comparison information, (2) implications of goal setting for future participation, and (3) effect of knowledge (via e-mails) about other users’ gratitude for contributions were not confirmed. So in a way one could say that the hypotheses were badly conceived or the experiments badly carried out. However, the committee will not go that far. There is still substantial value in the thesis. TH has conducted an extensive literature search, developed a relevant set of hypotheses based on theories from social psychology, and tested them in live experiments. This is a major undertaking. Based on this, he proposes a framework for what he calls ‘Lean experimentation’. The committee finds that this is a good contribution.
9. The clarity of presentation
The thesis is very long: 411 pages plus references and appendices. One reason is that the text is occasionally repetitive (e.g. validity/reliability issues are discussed three times, with basically the same argumentation). Also, chapter 10.3 is not about implications for IS research but a discussion of validity, so the same discussion comes back one more time.
It may also be argued that the writing tone is very positive towards eGovernment. In some places a more neutral tone would have been better, as there is no need to convert the committee or the reader in general.
In spite of these objections, it is a very well-written thesis. It is easy to read and easy to follow the arguments.
Thomas Høgenhaven has submitted a substantial piece of research for his PhD thesis. He has adequately positioned his research in relation to the current state of the art within the eGovernment community, and he draws upon reference disciplines of IS, namely HCI and social psychology. In spite of the objections mentioned above, and given his area of interest, this seems like an appropriate choice.
He develops a relevant research framework and conducts four experiments on a live on-line community. Unfortunately, the vast majority of his hypotheses are not confirmed. His contributions are thus primarily limited to the extensive literature section and the framework of what he calls ‘Lean experimentation’. It is the opinion of the committee that he contributes original insight in these areas.
Although the research results are not ground-breaking and contribute only to a very modest extent to our knowledge about increasing participation in open government communities, it is the opinion of the committee that Thomas Høgenhaven has clearly demonstrated the skills of a researcher. For the most part, the failure of his research to deliver the hypothesized results is not due to a faulty research design.
On this basis, the committee has decided to accept the thesis for an oral defense.