It might not be the fanciest template in the world – but in case you need a Google Drive template for Michael Porter’s Value Chain, you can use this one:
Google provides examples of good and bad websites to its quality raters. There are many different examples. Here is a coded version of the traits of highly rated content:
Credibility, reputation & expertise
Professionalism & quality
Earlier this year, in June, I turned in my PhD dissertation at Copenhagen Business School (available here). After some months, the assessment committee sent a five-page review letting me know I could defend the dissertation.
I agree with most of the arguments in the assessment. But I have a hard time judging how critical it is compared to other PhD assessments. In case anyone else is lacking yardsticks to compare against, I am publishing the assessment in its entirety here.
Assessment of PhD thesis handed in by Thomas Høgenhaven
In August of 2013, Thomas Høgenhaven (TH) handed in his thesis at the Department of IT Management with the title
Open Government Communities. Does Design Affect Participation?
‘The aim of the thesis is to contribute to our knowledge of how to build sustainable on-line communities in the public/government sector. The purpose of this is to make governments and public sectors more collaborative, participatory, transparent, and technology driven. If successfully implemented, such open government initiatives can improve democracy, efficiency, and innovation’. The committee finds that this is a good characterization of the aim of the thesis.
TH applies social psychology theories to formulate a series of hypotheses. These are tested through four experiments on K10, a Danish open government community for people involved with one of two public benefit programs called “early retirement pension” and “flexjob”.
The effects of the experiments are rather mixed, but mostly negative, and it is not easy to make such an on-line community succeed by altering the design. TH develops a framework that could be useful for understanding why participation in on-line communities succeeds or fails. The framework could also be useful for further academic work in the area.
The thesis is well structured. The first chapter is an introduction outlining the field of enquiry, and it identifies the four main research questions and hypotheses for the four experiments. The second chapter is the literature study regarding participation in open government, looking at criteria like efficiency, transparency, and government culture. Chapter three is the case description of K10, which is not government-owned but privately owned. As such it does not represent the ideal ‘open government on-line forum’, but given that it is owned by a single person, it allows TH to carry out experiments that would not have been possible at a ‘genuine’ government on-line community.
The fourth chapter discusses the theoretical foundation, where TH has chosen to use social psychology. These are not the most widely applied theories in IS research, but social psychology is one of the reference disciplines of IS. We will discuss this point below.
Chapter five has an excellent account of the ontology and epistemology for his thesis. It discusses in detail the nature of the specific types of experiments conducted by TH, which are characterized by taking place not in a laboratory but in real life.
Chapters 6–9 are detailed descriptions of the four experiments, based on the pre-formulated hypotheses, and an analysis of the results.
Finally, in chapter 10, TH presents a cross-analysis of the four experiments and discusses the implications for open government practices and for the field of Information Systems. Here he also introduces his theoretical contribution, called ‘Lean Experimentation’. Chapter 11 concludes the thesis.
Overall, the committee finds the thesis well structured, and it follows a natural line of argument, with some minor exceptions that will be discussed below.
Thesis results and contributions, strengths and weaknesses
TH himself writes in the preface that ‘Especially in terms of research design and methodology ….. this research is closer to the quantitative American Human Computer Interaction (HCI) tradition than it is to the qualitative Scandinavian Information Systems (IS) tradition’. The committee agrees with that.
But one thing is the research tradition employed; another is which community one is addressing. And here it is not entirely clear which research community TH is targeting: sometimes it is stated that the contribution is for the eGovernment community, sometimes for IS, and sometimes (maybe indirectly) for the HCI or even social psychology communities. The committee finds that the positioning of the main contribution is not clearly enough defined.
Furthermore, TH’s approach is problematic in the way that he uses HCI literature to ground the argument in the IS field. Following the arguments of Jonathan Grudin, there are in fact three different schools of HCI: human computer interaction (with origins in cognitive psychology), computer-human interaction (with origins in social psychology and sociology), and HCI in IS (with origins in the management sciences). In the dissertation, the communities are mixed, and some very strong statements (e.g. that HCI is mainly qualitative or quantitative) are made. This makes it very difficult to build convincing arguments and make contributions, especially when it comes to who would benefit from the findings.
Another issue is the relationship between IS and HCI. At CBS, HCI is seen as a sub-discipline within IS, but it seems that TH sees the two disciplines as being on the same level (pp. 34–37). This can be, and is, extensively debated in many situations, but we believe it is fair to say that IS has many more theories and insights.
The thesis is well-structured, and the different parts are positioned well in relation to each other. The arguments follow a natural progression, and they build up to the research results. The insights are well presented.
TH has conducted a very extensive literature review of more than 400 referenced publications. This certainly meets expectations. Furthermore, the literature is in general well treated, and he draws on many different bodies of literature.
But it can be argued that some of the literature review is slightly off topic, and that some of the findings from the experiments are not well connected to the literature. For example, it would have been beneficial if the literature had been used to understand and argue for how and where the experiments contribute to the Lean experimentation process. This is only superficially stated in the text (chapter 10.4), without much reference to the literature.
Causal relationships between the literature, the experiments, and the Lean experimentation process are not always evident. In general, the committee finds that the link between theory and hypotheses could have been made more explicit. It is sometimes up to the reader to make the connection. But by and large, the committee is not challenging the connections.
The choice of research questions follows naturally from the description of the problem situation described in chapter one. The four research questions deal with the following issues:
These research questions are certainly relevant and justified based on the literature discussed in the theory chapter.
However, in spite of extensive efforts to define and relate constructs, several key concepts are unclear. What is a government community? What is participation in the ‘open government’ definition (p. 52)? What is an ‘open government partnership’, etc.? Although key concepts are defined, some of the wording of the key concepts raises further questions. In particular, what constitutes participation could have been better elaborated.
TH has discussed different validity constructs at length, and after careful examination he defines, and indeed uses, the three constructs of internal, construct, and external validity. This section is good.
The committee also finds that the data collected meets acceptable levels of validity by addressing the full community of K10. One of the samples is really too small for the statistical treatment, but we do not consider that a major flaw.
One critical issue about the analysis concerns the experiments, where all four experiments are carried out in the same on-line community, K10. TH uses all types of experiments (see Donald T. Campbell), but the committee is not totally convinced that there has been ‘total’ control in all these live experiments, or whether there might have been some interfering variables. The most important problem is that all experiments more or less failed (except self-efficacy, which is obvious to anyone who has ever looked at webshops). Failing experiments are not necessarily a show-stopper if the failure is caused by the respondents/context. But the committee is not convinced that the failures are only caused by the respondents/context and not by some poor research design. Some examples of these could be: (1) the effect of the holiday period in experiment one, (2) the self-selection bias or goal setting problems (the wording could have been: how many more posts will you write) in experiment two, or (3) why the categorization described on p. 317 was not utilized. Fortunately, one could argue, these drawbacks are discussed critically in the appropriate chapters (6–9). Unfortunately, they are utilized quite uncritically in the Discussion and Conclusion sections. This reduces the value of the Discussion and Conclusion, as the earlier findings are considered and used as a foundation without their limitations.
In general, TH shows a high level of competence in the conduct of the experiments, following the research tradition of the Cornell eRule initiative. The committee finds that the experiments are well carried out, with the reservations mentioned above.
The robustness of the results as regards an open government on-line community like K10 is high, but K10 is a very special community, owned by a single individual and not a government entity. Accordingly, the results are not directly applicable to other government on-line communities.
When looking at the specific results, unfortunately, the specific hypotheses about (1) the impact of social comparison information, (2) the implications of goal setting for future participation, and (3) the effect of e-mails conveying other users’ gratitude for contributions were not confirmed. So in a way one could say that the hypotheses were badly conceived or the experiments badly carried out. However, the committee will not go that far. There is still substantial value in the thesis. TH has conducted an extensive literature search, developed a relevant set of hypotheses based on theories of social psychology, and tested them in a live experiment. This is a major undertaking. Based on this, he proposes a framework for what he calls ‘Lean experimentation’. The committee finds that this is a good contribution.
The thesis is very long, with 411 pages plus references and appendices. One reason is that the text is occasionally repetitive (e.g. validity/reliability issues are discussed three times, basically with the same argumentation). Also, chapter 10.3 is not about implications for IS research but a discussion of validity, so the same discussion comes back one more time.
It may also be argued that the writing tone is very positive towards eGovernment. In some places it would have been better to see a more neutral tone, as there is no need to convert the committee or the reader in general.
In spite of these objections, it is a very well-written thesis. It is easy to read and easy to follow the arguments.
Thomas Høgenhaven has submitted a substantial piece of research for his PhD thesis. He has adequately positioned his research in relation to the current state of the art within the eGovernment community, and he draws upon reference disciplines of IS, namely HCI and social psychology. In spite of the objections mentioned above, and given his area of interest, this seems like an appropriate choice.
He develops a relevant research framework and conducts four experiments on a live on-line community. Unfortunately, the vast majority of his hypotheses are not confirmed. His contributions are thus primarily limited to the extensive literature section and the framework of what he calls ‘Lean experimentation’. It is the opinion of the committee that he contributes original insight in these areas.
Although the research results are not ground-breaking and contribute only to a very modest extent to our knowledge about increasing participation in open government communities, it is the opinion of the committee that Thomas Høgenhaven has clearly demonstrated that he has the skills of a researcher. For the most part, the failure of his research to deliver the hypothesized results is not due to a faulty research design.
On this basis, the committee has decided to accept the thesis for an oral defense.
I recently purchased Alistair Croll & Benjamin Yoskovitz’s book Lean Analytics. It’s a highly recommended read for basically anyone in the tech industry. One thing this book does very well is describe other models and how the lean analytics approach relates to them (this is required in research, but unfortunately a rare sight in more mainstream literature). The book focuses on various startup types and stages, and describes which metrics are relevant to whom at what time. This post comprises what I find to be the most useful insights.
The two authors note that business models and marketing tactics are often treated as interchangeable when they are not: “Freemium isn’t a business model – it’s a marketing tactic” (p. 67). It is thus important to make proper distinctions between acquisition channel, selling tactic, revenue source, product type, and delivery model. These five things can be combined in many different ways, so you should use them as a flipbook to build the right combination in your startup, see Figure 7.1 below:
Building on Dave McClure’s AARRR metrics (the so-called pirate metrics), users can add value in 5 ways:
You should think about getting the users to do as many of these things as possible.
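As a rough illustration, McClure’s AARRR stages can be tracked as a simple funnel with a conversion rate between each adjacent pair of stages. This is a minimal sketch; the stage counts are purely hypothetical and not from the book:

```python
# A minimal sketch of tracking an AARRR (pirate metrics) funnel.
# The stage counts below are hypothetical illustration data.

funnel = [
    ("Acquisition", 10000),  # visitors who arrive at the site
    ("Activation", 3000),    # visitors who have a good first experience
    ("Retention", 1200),     # users who come back
    ("Referral", 300),       # users who invite others
    ("Revenue", 150),        # users who end up paying
]

def conversion_rates(stages):
    """Return the conversion rate between each pair of adjacent stages."""
    rates = {}
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        rates[f"{prev_name} -> {name}"] = n / prev_n
    return rates

for step, rate in conversion_rates(funnel).items():
    print(f"{step}: {rate:.0%}")
```

Watching which of these step-wise rates is weakest is one way to decide where to focus your next experiment.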
As entrepreneurs, we all need small lies. They create reality distortion fields, necessary to pursue new ideas. But our lies need to be counterbalanced by data. More importantly, lies don’t help us learn. We need data and analytics to do that. But don’t get lost in the data. “Analytics is about tracking the metrics that are critical to your business.” (p. 9). This means there is no universal set of right analytics. As Avinash has been saying for years: they are highly context dependent. This makes them harder to write about, but this book does a pretty good job by distinguishing between different types of startups (although the list isn’t exhaustive): e-commerce, SaaS, mobile apps, media sites, UGC sites, and two-sided marketplaces.
In the early phases of a startup, you should define the One Metric That Matters (OMTM). This should be the metric most crucial to finding out whether you are on the right path toward stardom: e.g. number of signups, retention, etc. The OMTM will help guide all decisions and experiments in the startup, as the ultimate goal is to improve this one metric. The metric will change over time. Good metrics share these four traits:
In order to grow your brand, it is relevant to focus on one of Eric Ries’ three engines of growth:
Or seen another way: Coca-Cola CMO Sergio Zyman has defined marketing as “selling more stuff to more people more often for more money more efficiently” (p. 64). This means business growth can come from five different types of ‘more’:
Read more about lean analytics on the book website.
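Zyman’s “more stuff to more people more often for more money more efficiently” quote can be read as a simple multiplicative model: each ‘more’ is a factor, so improving any one of them lifts profit proportionally. Here is a minimal sketch of that reading, with purely hypothetical numbers:

```python
# Zyman's five kinds of "more" read as a multiplicative profit model.
# All input numbers are hypothetical illustration values.

def annual_profit(customers, purchases_per_customer, items_per_purchase,
                  price_per_item, cost_ratio):
    """Profit = (people * often * stuff * money) * efficiency."""
    revenue = (customers * purchases_per_customer
               * items_per_purchase * price_per_item)
    return revenue * (1 - cost_ratio)

baseline = annual_profit(1000, 4, 2, 25.0, 0.6)
# "More people": 10% more customers lifts profit by the same 10%,
# because each factor enters the model multiplicatively.
more_people = annual_profit(1100, 4, 2, 25.0, 0.6)
print(baseline, more_people)
```

The point of the sketch is only that the five levers compound: a small improvement on each factor multiplies into a much larger improvement overall.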
Growth companies usually face a significant problem: they need to scale operations while ensuring that output quality does not deteriorate. The standard way to do this is to have managers (or someone else) control the output and make sure it is good enough. But this solution is suboptimal in many ways: few people like to do this kind of work; few people like to be tediously controlled; and it is quite expensive.
A better way is to design the tasks better in the first place and make sure everyone knows the overall vision and values, as well as the intention of a given task. In his book Start With Why, Simon Sinek tells a great story about two different ways of building a company:
There is a wonderful story of a group of American car executives who went to Japan to see a Japanese assembly line. At the end of the line, the doors were put on the hinges, the same as in America. But something was missing. In the United States, a line worker would take a rubber mallet and tap the edges of the door to ensure that it fit perfectly. In Japan, that job didn’t seem to exist. Confused, the American auto executives asked at what point they made sure the door fit perfectly. Their Japanese guide looked at them and smiled sheepishly. “We make sure it fits when we design it.” In the Japanese auto plant, they didn’t examine the problem and accumulate data to figure out the best solution—they engineered the outcome they wanted from the beginning. If they didn’t achieve their desired outcome, they understood it was because of a decision they made at the start of the process.
At the end of the day, the doors on the American-made and Japanese-made cars appeared to fit when each rolled off the assembly line. Except the Japanese didn’t need to employ someone to hammer doors, nor did they need to buy any mallets. More importantly, the Japanese doors are likely to last longer and maybe even be more structurally sound in an accident. All this for no other reason than they ensured the pieces fit from the start.
What the American automakers did with their rubber mallets is a metaphor for how so many people and organizations lead. When faced with a result that doesn’t go according to plan, a series of perfectly effective short-term tactics are used until the desired outcome is achieved. But how structurally sound are those solutions? So many organizations function in a world of tangible goals and the mallets to achieve them. The ones that achieve more, the ones that get more out of fewer people and fewer resources, the ones with an outsized amount of influence, however, build products and companies and even recruit people that all fit based on the original intention. Even though the outcome may look the same, great leaders understand the value in the things we cannot see.
Every instruction we give, every course of action we set, every result we desire, starts with the same thing: a decision. There are those who decide to manipulate the door to fit to achieve the desired result and there are those who start from somewhere very different. Though both courses of action may yield similar short-term results, it is what we can’t see that makes long-term success more predictable for only one. The one that understood why the doors need to fit by design and not by default.
In 1986 Robert Cooper released his seminal work Winning At New Products. The (rather comprehensive) book’s most important contribution is seven principles to successfully build and launch new products. The research design is the same as Jim Collins uses in Good To Great: comparing successful companies with non-successful equivalent ones. Although this design is often critiqued, it’s arguably more valid than the high number of rather arbitrary case studies used in much contemporary innovation literature.
The 7 principles are as relevant today as they were in 1986. So they deserve more attention:
Slow content aims to slow down users, focus their attention, and help them act deliberately. Slow content isn’t right for every brand. But it is a great long-term strategy for thoughtful, engaging brands.
Amit Singhal is the Senior VP of Search at Google. In this session, Guy Kawasaki interviews him.
I often use hotels.com when booking hotels (I like hipmunk way better, but I can’t book directly through them – and I really don’t like Orbitz’ UX). Upon booking a hotel, I need an email confirmation with the booking, which makes it easy to retrieve the reservation when checking in. My reliance on these emails means I cannot systematically report hotels.com emails as spam. But no one would abuse this reliance to send other emails without unsubscribe buttons, right?
Think again! Hotels.com keeps sending two ‘write a review about your stay’ emails per stay. And here’s the catch: I cannot unsubscribe from these review emails. I totally get why hotels.com wants these reviews, but that does not justify the lack of an unsubscribe option.