
Thomas Høgenhaven

Traits of High Quality Content According to Google Quality Raters

Posted on July 14, 2014

Google provides examples of good and bad websites to its quality raters. There are many different examples. Here is a coded version of the traits of highly rated content (“MC” in the quotes refers to the page’s main content):


  • “There are Ads, but it does not interfere with the MC on the page.”
  • “Ads are clearly labeled as Ads.”
  • Transparency
  • “login functionality, as well as clear information about what the user is logging into”


  • “The calculator is functional and easy to use.”
  • “The page design allows users to easily find the recipe.”
  • “The tabs should be considered part of the MC.”

Credibility, reputation & expertise 

  • “an example of everyday expertise.”
  • “documented her extensive experimentation”
  • “Since the page is on the official… website, it is highly authoritative.”
  • “Its Wikipedia article tells us…”
  • “In addition to the article itself, there are many helpful references and citations to support the content.”
  • “This is a blog post on a newspaper which has won over 100 Pulitzer Prize awards.”
  • “It has a positive reputation”
  • “has a positive reputation, though it is not an acknowledged expert in…”


  • “The website is one of the most popular recipe websites.”
  • “with more than 6.5 million views (and counting)”
  • “some of them have user reviews”

Professionalism & Quality

  • “This is a high quality, professionally produced video”

Interaction & User generated content

  • “There is a lot of discussion and interaction between forum members”


  • “While the 2007 copyright date is outdated, most of the pages, including this one, have recent updates from 2014.”


  • “detailed Customer Service information on the site”


PhD Thesis Assessment

Earlier this year, in June, I turned in my PhD dissertation at Copenhagen Business School (available here). After some months, the assessment committee sent a five-page review letting me know I could defend the dissertation.

I agree with most of the arguments in the assessment. But I have a hard time judging how critical it is compared to other PhD assessments. In case anyone else is lacking yardsticks to compare against, I am publishing the assessment in its entirety here.


Assessment of PhD thesis handed in by Thomas Høgenhaven


In August of 2013, Thomas Høgenhaven (TH) handed in his thesis at the Department of IT Management with the title

Open Government Communities. Does Design Affect Participation?

‘The aim of the thesis is to contribute to our knowledge of how to build sustainable on-line communities in the public/government sector. The purpose of this is to make governments and public sectors more collaborative, participatory, transparent, and technology driven. If successfully implemented, such open government initiatives can improve democracy, efficiency, and innovation’. The committee finds that this is a good characterization of the aim of the thesis.

TH applies social psychology theories to formulate a series of hypotheses. These are tested through four experiments on K10, a Danish open government community for people involved with one of two public benefit programs, “early retirement pension” and “flexjob”.

The effects of the experiments are rather mixed, but mostly negative, and it is not easy to make such an on-line community succeed by altering the design. TH develops a framework which could be useful for understanding why participation in on-line communities succeeds or fails. The framework could also be useful for further academic work in the area.

Evaluation of thesis structure and main lines of arguments

The thesis is well structured. The first chapter is an introduction outlining the field of enquiry, and it identifies the four main research questions and hypotheses for the four experiments. The second chapter is the literature study regarding participation in open government, looking at criteria like efficiency, transparency, and government culture. Chapter three is the case description of K10, which is not government-owned but privately owned. As such it does not represent the ideal ‘open government on-line forum’, but given that it is owned by a single person, it allows TH to carry out experiments that would not have been possible at a ‘genuine’ government on-line community.

The fourth chapter discusses the theoretical foundation, where TH has chosen to use social psychology. These are not the most commonly applied theories in IS research, but social psychology is one of the reference disciplines of IS. We will discuss this point below.

Chapter five has an excellent account of the ontology and epistemology for his thesis. It discusses in detail the nature of the specific types of experiments conducted by TH, which are characterized by taking place not in a laboratory but in real life.

Chapters 6–9 are detailed descriptions of the four experiments based on the pre-formulated hypotheses and an analysis of the results.

Finally, in chapter 10, TH presents a cross-analysis of the four experiments and discusses the implications for open government practices and for the field of Information Systems. Here he also introduces his theoretical contribution, called ‘Lean Experimentation’. Chapter 11 concludes the thesis.

Overall the committee finds the thesis well-structured and it follows a natural line of arguments with some minor exceptions that will be discussed below.

Thesis results and contributions, strength and weaknesses

1. The positioning of the thesis in the field of research

TH himself writes in the preface that ‘Especially in terms of research design and methodology ….. this research is closer to the quantitative American Human Computer Interaction (HCI) tradition than it is to the qualitative Scandinavian Information Systems (IS) tradition’. The committee agrees with that.

But one thing is the research tradition employed; another is which community one is addressing. And here it is not entirely clear which research community TH is targeting: Sometimes it is stated this is for eGovernment, sometimes to IS, sometimes (maybe indirectly) to HCI or even social psychology communities. The committee finds that the positioning of the main contribution is not clearly enough defined.

Furthermore, TH’s approach is problematic in the way that he uses HCI literature to ground the argument in the IS field. Following the arguments of Jonathan Grudin, there are in fact three different schools of HCI: human computer interaction (with origins in cognitive psychology), computer-human interaction (with origins in social psychology and sociology), and HCI in IS (with origins in the management sciences). In the dissertation, the communities are mixed, and some very strong statements (e.g. that HCI is mainly qualitative or quantitative) are made. This makes it very difficult to build convincing arguments and make contributions, especially when it comes to who would benefit from the findings.

Another issue is the relationship between IS and HCI. At CBS, HCI is seen as a sub-discipline within IS, but it seems that TH sees the two disciplines on the same level (p. 34–37). This can be, and is, extensively debated in many situations, but we believe it is fair to say that IS has many more theories and insights.

2. The organization of the thesis

The thesis is well-structured and the different parts are positioned well in relationship to each other. The arguments follow a natural progression, and they are building up to reach the research results. The insights are well presented.

3. The use of extant literature

TH has conducted a very extensive literature review of more than 400 referenced publications. This certainly meets with expectations. Furthermore, the literature is in general well treated and he draws on many different bodies of literature.

But it can be argued that some of the literature review is slightly off topic, and that some of the findings from the experiments are not well connected to the literature. For example, it would have been beneficial if the literature had been used to understand and argue for how and where the experiments contribute to the Lean experimentation process. This is only superficially stated in the text (chapter 10.4), without much reference to the literature.

Causal relationships between the literature, the experiments, and the Lean experimentation process are not always evident. In general, the committee finds that the link between theory and hypotheses could have been made more explicit; it is sometimes up to the reader to make the connection. But by and large, the committee is not challenging the connections.

4. The choice of research questions

The choice of research questions follows naturally from the description of the problem situation in chapter one. The four research questions deal with the following issues:

  • Whether receiving social comparison information has an effect on subsequent participation in K10
  • How goal setting affects participation in K10
  • How knowledge of other users’ gratitude for previous contributions affects future contributions to K10
  • How the benefit of contributing affects subsequent behavior on K10

These research questions are certainly relevant and justified based on the literature discussed in the theory chapter.

5. The justification of research methods

In general, the choice of research methods is well argued and relevant for dealing with the research questions. TH has been strongly influenced by his stay at Cornell University in 2012. The committee finds that his so-called ‘true experiments’ are well justified.

However, in spite of extensive efforts to define and relate constructs, several key concepts remain unclear. What is a government community? What is participation under the ‘open government’ definition (p. 52)? What is ‘open government partnership’? Although key concepts are defined, some of the wording results in further questions. In particular, what constitutes participation could have been better elaborated.

6. The validity of data

TH has discussed different validity constructs at length, and after careful examination he defines and indeed uses the three constructs of internal, construct, and external validity. This section is good.

The committee also finds that the data collected meets acceptable levels of validity by addressing the full community of K10. One of the samples is really too small for the statistical treatment, but we do not find that a major flaw.

7. The execution of the analyses

One critical issue about the analysis concerns the experiments: all four are carried out in the same on-line community, K10. TH uses all types of experiments (see Donald T. Campbell), but the committee is not totally convinced that there has been ‘total’ control in all these live experiments, or whether there might have been some interfering variables. The most important problem is that all experiments more or less failed (except self-efficacy, which is obvious to anyone who has ever looked at webshops). Failed experiments are not necessarily a show-stopper if the failure is caused by the respondents or context. But the committee is not convinced that the failures are caused only by the respondents or context, and not by some poor research design. Some examples could be: (1) the effect of the holiday period in experiment one, (2) the self-selection bias or goal-setting problems (the wording could have been: how many more posts will you write) in experiment two, or (3) why the categorization described on p. 317 was not utilized. Fortunately, one could argue, these drawbacks are discussed critically in the appropriate chapters (6–9). Unfortunately, they are treated quite uncritically in the Discussion and Conclusion sections. This reduces the value of the Discussion and Conclusion, as the earlier findings are used as a foundation without their limitations.

8. The robustness of the conclusions

In general, TH is showing a high level of competence in the conduct of the experiments following the research tradition of the Cornell eRule initiative. The committee finds that the experiments are well carried out with the reservations mentioned above.

The robustness of the results as regards an open government on-line community like K10 is high, but K10 is a very special community owned by a single individual and not a government entity. Accordingly, the results are not directly applicable to other government on-line communities.

When looking at the specific results, unfortunately, the specific hypotheses about (1) the impact of social comparison information, (2) the implications of goal setting for future participation, and (3) the effect of knowledge about other users’ gratitude for contributions (conveyed via e-mails) were not confirmed. So in a way one could say that the hypotheses were badly conceived or the experiments badly carried out. However, the committee will not go that far. There is still substantial value in the thesis. TH has conducted an extensive literature search, developed a relevant set of hypotheses based on theories of social psychology, and tested them in a live experiment. This is a major undertaking. Based on this he proposes a framework for what he calls ‘Lean experimentation’. The committee finds that this is a good contribution.

9. The clarity of presentation

The thesis is very long: 411 pages plus references and appendices. One reason is that the text is occasionally repetitive (e.g. validity/reliability issues are discussed three times, basically with the same argumentation). Also, chapter 10.3 is not about implications for IS research but a discussion of validity, so the same discussion comes back one more time.

It may also be argued that the writing tone is very positive towards eGovernment. In some places a more neutral tone would have been better, as there is no need to convert the committee or the reader in general.

In spite of these objections, it is a very well-written thesis. It is easy to read and easy to follow the arguments.


Thomas Høgenhaven has submitted a substantial piece of research for his PhD thesis. He has adequately positioned his research in relation to the current state of the art within the eGovernment community, and he draws upon reference disciplines of IS, namely HCI and social psychology. In spite of the objections mentioned above, and given his area of interest, this seems like an appropriate choice.

He develops a relevant research framework and conducts four experiments on a live on-line community. Unfortunately, the vast majority of his hypotheses are not confirmed. His contributions are thus primarily limited to the extensive literature section and the framework of what he calls ‘Lean experimentation’. It is the opinion of the committee that he is contributing original insight in these areas.


Although the research results are not groundbreaking and contribute only to a very modest extent to our knowledge about increasing participation in open government communities, it is the opinion of the committee that Thomas Høgenhaven has clearly demonstrated the skills of a researcher. For the most part, the reason his research did not deliver the hypothesized results is not a faulty research design.

On this basis, the committee has decided to accept the thesis for an oral defense.

Lean Analytics book review: More more more more more

Posted on May 24, 2013

I recently purchased Alistair Croll and Benjamin Yoskovitz’s book Lean Analytics. It’s a highly recommended read for basically anyone in the tech industry. One thing this book does very well is describe other models and how the lean analytics approach relates to them (this is required in research but unfortunately a rare sight in more mainstream literature). The book focuses on various startup types and stages, and describes which metrics are relevant to whom at what time. This post comprises what I find to be the most useful insights.

Starting with the startup

The two authors note that business models and marketing models are often treated as substitutes when they are not: “Freemium isn’t a business model – it’s a marketing tactic” (p. 67). It is thus important to make proper distinctions between acquisition channel, selling tactic, revenue source, product type, and delivery model. These five things can be combined in many different ways, so you should use them as a flipbook to build the right combination in your startup; see Figure 7.1 below:

[Figure 7.1 from the book: the business model flipbook]

Building on Dave McClure’s AARRR metrics (the so-called pirate metrics), users can add value in five ways:

  • Acquisition
  • Activation
  • Retention
  • Revenue
  • Referral

You should think about getting the users to do as many of these things as possible.
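To make the funnel concrete, here is a minimal Python sketch that computes the stage-to-stage conversion rates of an AARRR funnel. The counts and numbers are invented for illustration; they are not figures from the book:

```python
# Hypothetical numbers: count of users reaching each AARRR stage.
stages = [
    ("Acquisition", 10000),
    ("Activation", 4000),
    ("Retention", 1500),
    ("Revenue", 300),
    ("Referral", 120),
]

def funnel_conversions(stages):
    """Return the conversion rate between each pair of consecutive stages."""
    rates = {}
    for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
        rates[f"{prev_name} -> {name}"] = n / prev_n
    return rates

for step, rate in funnel_conversions(stages).items():
    print(f"{step}: {rate:.0%}")
```

Looking at the funnel this way shows immediately where the biggest drop-off is, which is where it pays to focus first.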

A startup stage model

  1. Empathy: Talk to many potential and actual users in order to get feedback on the product and gain empathy for the users. At this early stage it’s critical to talk to them and get qualitative data. It’s recommended to talk to at least 15 different people. And as suggested by many researchers, it is recommended to measure actual behavior and not just rely on self-reported data. Instead of asking a potential customer if she would buy the product, ask for the money right away (this is actually what Kickstarter does). It’s also recommended to bring prototypes in order to get better feedback.
  2. Stickiness: After truly understanding the user, the next step is to build a sticky product – i.e. one that engages users. The product needs to be so good that users keep coming back. This will require many experiments and iterations. All development activities must revolve around the One Metric That Matters (see below). A sticky product requires a set of features (but not too many). Seven questions can help prioritize between features: (a) Why will it make things better? (b) Can you measure the effects of the feature? (c) How long will the feature take to build? (d) Will the feature overcomplicate things? (e) How risky is the new feature? (f) How innovative is it? (g) Do users say they want it?
  3. Virality: When the product is sticky, it’s time to acquire new users. This virality engine has three drivers: (a) inherent virality – virality is an automatic byproduct of product usage; (b) artificial virality – forced virality built into reward mechanisms; (c) word-of-mouth virality – existing users tell other users about it. It is crucial to understand that virality rarely happens by itself – it needs to be designed into the product.
  4. Revenue: Now it’s time to start examining whether the product can be sufficiently monetized to build a sustainable business. The focus expands from building a product to building a business. In this phase, focus is on metrics such as customer lifetime value and customer acquisition cost.
  5. Scale: When your company is part of a larger ecosystem, you are probably ready to scale. This is a very hard phase due to what Michael Porter calls the “hole in the middle” problem: when you are small you can compete in a differentiated niche; when you are big you can compete on cost and margin. But being mid-sized is hard because you can’t focus, and you can’t dominate the ecosystem. In this phase it is important to create barriers to entry for potential competitors and thus establish an unfair advantage for your company. Here more than one metric matters: a hierarchy of metrics is necessary in a full-scale company, but it is important to remain focused on as few metrics as possible.

Lying vs data

As entrepreneurs, we all need small lies. They create the reality distortion fields necessary to pursue new ideas. But our lies need to be counterbalanced by data. More importantly, lies don’t help us learn; we need data and analytics for that. But don’t get lost in the data. “Analytics is about tracking the metrics that are critical to your business.” (p. 9). This means there is no universal set of right metrics. As Avinash Kaushik has been saying for years, they are highly context dependent. This makes them harder to write about, but this book does a pretty good job by distinguishing between different types of startups (although the list isn’t exhaustive): e-commerce, SaaS, mobile apps, media sites, UGC sites, and two-sided marketplaces.

In the early phases of the startup, you should define the One Metric That Matters (OMTM). This should be the metric most crucial to finding out whether you are on the right track – e.g. number of signups, retention, etc. The OMTM will help guide all decisions and experiments in the startup, as the ultimate goal is to improve this one metric. The metric will change over time. Good metrics share these four traits:

  • Comparative
  • Understandable
  • Ratios or rates
  • Change the way we behave
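As a toy illustration of why ratios beat raw counts, here is a sketch with made-up weekly numbers (none of them come from the book):

```python
# Hypothetical weekly data: raw signups grew, but the ratio tells
# a different story about whether things actually improved.
weeks = {
    "week 1": {"visits": 5000, "signups": 150},
    "week 2": {"visits": 8000, "signups": 200},
}

def conversion_rate(week):
    """Signups per visit: a comparative, understandable ratio metric."""
    return week["signups"] / week["visits"]

for name, data in weeks.items():
    print(f"{name}: {conversion_rate(data):.1%} signup rate")
```

Raw signups went up (150 to 200), but the signup rate fell from 3.0% to 2.5%. The ratio, not the count, is what should change how you behave.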

Growth Engines: More more more more more

In order to grow your brand, it is relevant to focus on one of Eric Ries’ three engines of growth:

  • Sticky engine (retention). In order to find out when it’s time to start driving revenue for the startup, it is crucial to measure and understand how sticky your product is. Engagement, number of visits, and churn rate serve as good metrics for this. Stickiness is measured differently for different types of products/sites; for media and UGC sites, 17 minutes of daily usage serves as the threshold for stickiness.
  • Virality engine. Getting a virality coefficient > 1.01 is the holy grail of viral growth. Such a coefficient will in theory create perpetual growth, as each new user recruits more than one other user.
  • Paid engine. Inbound and outbound acquisition channels that require direct or indirect paid involvement.
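A quick sketch of how the virality engine compounds, assuming hypothetical invite numbers (none of these figures come from the book):

```python
# Viral coefficient k = invites sent per user * conversion rate of invites.
# With k > 1 each cohort recruits a larger one; with k < 1 growth stalls.

def viral_coefficient(invites_per_user, invite_conversion):
    return invites_per_user * invite_conversion

def project_users(initial_users, k, cycles):
    """Total users after `cycles` viral cycles, ignoring churn."""
    total = cohort = initial_users
    for _ in range(cycles):
        cohort = cohort * k  # new users recruited by the previous cohort
        total += cohort
    return total

k = viral_coefficient(invites_per_user=4, invite_conversion=0.3)  # k = 1.2
print(round(project_users(1000, k, 5)))  # roughly 10x the initial cohort
```

Run the same projection with k = 0.9 and total users flatten out instead, which is why designed-in virality matters so much.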

Or seen another way: Coca-Cola CMO Sergio Zyman has defined marketing as “selling more stuff to more people more often for more money more efficiently” (p. 64). This means business growth can come from five different types of more:

  • More stuff
  • More people
  • More often
  • More money (from each customer by upselling)
  • More efficiently

Read more about lean analytics on the book website.

How Values And Intentions Can Help Companies Scale

Posted on May 9, 2013

Growth companies usually face a significant problem: they need to scale operations while ensuring that output quality does not deteriorate. The standard way to do this is to have managers (or somebody else) control the output and make sure it is good enough. But this solution is suboptimal in many ways: few people like to do this kind of work; few people like to be tediously controlled; and it is quite expensive.

A better way is to design the tasks better in the first place and make sure everyone knows the overall vision and values as well as the intention of a given task. In his book Start with Why, Simon Sinek tells a great story about two different ways of building a company:

There is a wonderful story of a group of American car executives who went to Japan to see a Japanese assembly line. At the end of the line, the doors were put on the hinges, the same as in America. But something was missing. In the United States, a line worker would take a rubber mallet and tap the edges of the door to ensure that it fit perfectly. In Japan, that job didn’t seem to exist. Confused, the American auto executives asked at what point they made sure the door fit perfectly. Their Japanese guide looked at them and smiled sheepishly. “We make sure it fits when we design it.” In the Japanese auto plant, they didn’t examine the problem and accumulate data to figure out the best solution—they engineered the outcome they wanted from the beginning. If they didn’t achieve their desired outcome, they understood it was because of a decision they made at the start of the process.

At the end of the day, the doors on the American-made and Japanese-made cars appeared to fit when each rolled off the assembly line. Except the Japanese didn’t need to employ someone to hammer doors, nor did they need to buy any mallets. More importantly, the Japanese doors are likely to last longer and maybe even be more structurally sound in an accident. All this for no other reason than they ensured the pieces fit from the start.

What the American automakers did with their rubber mallets is a metaphor for how so many people and organizations lead. When faced with a result that doesn’t go according to plan, a series of perfectly effective short-term tactics are used until the desired outcome is achieved. But how structurally sound are those solutions? So many organizations function in a world of tangible goals and the mallets to achieve them. The ones that achieve more, the ones that get more out of fewer people and fewer resources, the ones with an outsized amount of influence, however, build products and companies and even recruit people that all fit based on the original intention. Even though the outcome may look the same, great leaders understand the value in the things we cannot see.

Every instruction we give, every course of action we set, every result we desire, starts with the same thing: a decision. There are those who decide to manipulate the door to fit to achieve the desired result and there are those who start from somewhere very different. Though both courses of action may yield similar short-term results, it is what we can’t see that makes long-term success more predictable for only one. The one that understood why the doors need to fit by design and not by default.

What Makes A Product Succeed? Robert Cooper’s 7 Principles

Posted on March 25, 2013

In 1986 Robert Cooper released his seminal work Winning At New Products. The (rather comprehensive) book’s most important contribution is seven principles for successfully building and launching new products. The research design is the same as Jim Collins uses in Good To Great: comparing successful companies with unsuccessful equivalents. Although this design is often critiqued, it’s arguably more valid than the large number of rather arbitrary case studies used in much contemporary innovation literature.

The 7 principles are as relevant today as they were in 1986. So they deserve more attention:


This guy pays attention – so should you (Yes I know – it’s the wrong Cooper.)

Continue Reading

SXSW: The Future of Google Search in a Mobile World With Amit Singhal And Guy Kawasaki

Posted on March 10, 2013

Amit Singhal is the Senior VP of Search at Google. In this session, Guy Kawasaki interviews him.

  • Google has over 30 trillion URLs from 250 million domains in their index
  • Future of search = understanding knowledge, not just indexing and retrieving it.
    Google is moving from data to knowledge.
  • Right now, Google is the largest knowledge repository in the world. Amit wants to turn Google into a Star Trek-like computer. As pointed out by Mike King, Amit talks about Star Trek a lot. But what does that analogy mean? I think it’s having a computer that can answer any question.
  • Google translate is important because it gives everyone access to the entire web despite language barriers.

    Continue Reading

SXSW: How to Rank Better in Google & Bing

Posted on March 9, 2013

SXSW Session: How to Rank Better in Google & Bing

Here are my tweets from the live coverage of Matt Cutts and Duane Forrester’s SXSW session with Danny Sullivan. Tweets are categorized by topic.

Storified by Thomas Høgenhaven· Fri, Mar 08 2013 18:46:10

On Spam And Penalties
This is pretty cool – see real-time examples of pages being penalized by the Google Webspam team here: #bingle #sxsw
.@Mattcutts confirms penalizing 98,000 sites this week by hitting a link network #sxsw #bingle
.@Mattcutts: Google tries to tie a merchant’s reputation to poor rankings despite links from bad press. Update in 2013 #bingle #sxsw
.@Mattcutts: Google doesn’t trust press releases and hasn’t since 2006. But they are communicating it more now #bingle #sxsw
.@Mattcutts: press releases might be good for getting honest citations from newspapers. But they don’t pass any value per se #bingle #sxsw
.@mattcutts: People spamming systematically over an extended period of time tend to draw quite some attention #bingle #sxsw
.@Mattcutts: if you are buying a domain with a penalty, send a reconsideration request or disavow ALL existing links #bingle #sxsw
.@Dannysullivan: White hat SEO wins in the long run. Each Google/Bing update tries to help good SEO #HopeItsTrue #sxsw #bingle
On Crawling And Rankings
.@Mattcutts: Google can execute and parse basic javascript used in navigation. Pure AJAX sites are still challenging. #sxsw #bingle
.@Mattcutts: It doesn’t matter if a post is written by in-house writers or freelancers. What matters is the quality. #sxsw #bingle
.@DuaneForrester: Assign value to all URLs, then try to get that URL to rank. #protip #bingle #sxsw
On Schema Markup
.@Mattcutts: Authorship markup tends to increase CTR in SERPs (when the photo is showing, that is) #bingle #sxsw
Google and Bing are still testing how users respond to schema markup in SERPs. That’s why it’s only showing sometimes #bingle #sxsw
.@DuaneForrester: Schema data does not affect rankings (directly). It helps the search engines understand the site / content #bingle #sxsw
On Facebook Graph Search
Is @Mattcutts worried about Facebook graph search? Not right now, but he can see a potential threat over time #bingle #sxsw

How Uses Spam Tactics To Get Customer Reviews

Posted on March 4, 2013

I often use when booking hotels (I like Hipmunk way better, but I can’t book directly through them – and I really don’t like Orbitz’ UX). Upon booking a hotel, I need an email confirmation with the booking, which makes the reservation easy to retrieve later. My reliance on these emails means I cannot systematically report emails as spam. But no one would abuse this reliance to send other emails without unsubscribe buttons, right?

Why Would I Report For Spam If I Could?

Don't Forget! Tell us about your recent stay at [insert name] Hotel

Think again! keeps sending two “write a review about your stay” emails per stay. And here’s the catch: I cannot unsubscribe from these review emails. I totally get why they want these reviews, but that does not justify the lack of an unsubscribe option.

Continue Reading