Depth vs novelty in research: differences between disciplines and across time.

I think that, with some degree of approximation, we can summarize the quality of a piece of research by two variables. The first is the novelty of the research question asked. I call this variable n. The second is how exhaustive the answer to this question is. I call this variable d for depth.

We can think of the importance of a given piece of research (call it V for value) as determined by both n and d:

V = αn + d,

where α determines the relative importance of novelty vs. depth. V in turn determines the standing of a specific piece of research: how well it is published, how widely it is read, its influence on subsequent work, and so on.

I think that α is discipline specific. For example, papers in marketing, strategy, and organizational behavior usually ask super interesting research questions. To my eyes, however, the answers to these questions are often highly incomplete. My interpretation is that these disciplines have a high α. Similarly for psychology: a super interesting research question followed by an experiment with 10 subjects. At the other end of the spectrum I would put mathematics. Most ground-breaking, super influential math papers provide very detailed answers to well-known puzzles. Moreover, mathematicians have the habit of throwing math puzzles at each other (sometimes via blogs), as if the novelty of a research question were not particularly important to them, while providing the answer is. Using the above framework, therefore, I can say that math has an α close to zero. Economics (my discipline) is somewhere in between: both the novelty of the research question and the depth of the answer matter in how a piece of research is evaluated. As a consequence, if a researcher thinks that he/she has stumbled upon an extremely novel research question, he/she will probably not blast it to the world without first also having produced a research paper (of course, exceptions to this rule exist!). At the same time, research papers often have endless appendices that are supposed to prove that the results are actually robust.

Before I say anything else, it is important to clarify one thing: in every discipline there are research papers that are both extremely novel and extremely deep (maybe yours!). Those are the top papers: they have very high V and are extremely influential. But to think about α, you need to think about the papers that are just below a given threshold (for example, a threshold for publication). Then you have to ask: is it more likely that this paper crosses the threshold if it improves on the n dimension or on the d dimension? The point I'm making is that the answer to this question depends on the discipline we are considering; a rough numerical illustration is sketched below.
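To make the threshold logic concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the α values, the publication threshold, and the paper's novelty and depth scores are made-up numbers, chosen only to show how the same marginal improvement pays off differently depending on a discipline's α.

```python
# A minimal sketch of the linear value model V = alpha * n + d.
# All numbers (alphas, threshold, paper scores) are made up for illustration.

def value(n, d, alpha):
    """Value of a paper with novelty n and depth d in a discipline with weight alpha."""
    return alpha * n + d

threshold = 10.0   # hypothetical publication threshold
n, d = 6.0, 5.0    # a hypothetical paper's novelty and depth scores

for discipline, alpha in [("math-like", 0.1), ("econ-like", 1.0), ("marketing-like", 3.0)]:
    v = value(n, d, alpha)
    # At the margin, one extra unit of n is worth alpha and one extra unit of d is
    # worth 1, so which dimension pushes a borderline paper over the threshold
    # depends on alpha.
    pays_more = "novelty" if alpha > 1 else "depth"
    print(f"{discipline:15s} V = {v:5.1f} (threshold {threshold}); "
          f"improving {pays_more} pays more per unit")
```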

To some extent, the specific tools employed by each discipline are actually a function of α. Taking this logic to its extreme, we can say that mathematicians are a group of people with a very low α, and as a consequence they employ math. Economists have, on average, an intermediate α. As a consequence, economists use math and statistics in a somewhat rigorous way, but are willing to cut some corners (relative to pure mathematicians) in order to provide an answer to a question they think is interesting. Other disciplines have an even higher α and are therefore happy to use case studies or work with very few observations to answer their questions, provided that those questions have a high n.

Finally, I think that α is also time specific; that is, there are subtle shifts in α over time. These shifts determine subtle changes in the type of research that is read/published in a given discipline, and in the methods used. If I had to take a wild guess about where we are heading with α, I would say that it is increasing over time: novelty will become more important. I say this because we live in an era in which information (including scientific research) is almost completely freely available. Hence, the limiting factor in the consumption of information is not the availability of information itself, but rather the availability of complementary inputs such as attention and time. Obviously, attention has more to do with n than with d: I'm more likely to read past the title of a paper if I think that the research question is interesting.

Does all this matter? Well, it matters if you are a researcher, especially a young one. You should know what the α of your discipline (or your subdiscipline) is and where it is heading, and write your papers accordingly. Second, it matters for the general direction of research. If α is indeed increasing, then we may be heading toward a world in which a lot of interesting questions are being asked, but not very many deep answers are given. What does such a world look like? Well, that is definitely a very interesting research question!

P.S. Of course, the assumption that we can describe all research in all fields with just two variables is quite heroic. In particular, depth may mean different things in different disciplines (number of equations, number of observations, length of the questionnaire, ...). So not only does α change with discipline and time, but so does how we measure d. But, hey, this is a blog post and therefore more about n than d!

 

The vast majority of ICOs are seriously flawed: here is why, and what to do about it.

Initial Coin Offerings (ICOs) are becoming the main way in which blockchain-based projects are financed. In short (and with a few simplifications): a group of developers comes up with a new blockchain-based protocol. Together with the protocol, the developers create a token (that is, a new cryptocurrency) that will be used together with the protocol. Some of these tokens are sold to investors, who buy them in the expectation that the protocol will be successful and hence that the token will have a use and a value. The remaining tokens are allocated to the developers working on the project.

For some examples of such projects, see Sia, Storj, and Golem.

The consensus is that ICOs are revolutionary because they allow groups of developers to raise funds even if they are not organized as a company. Similarly to open-source projects, several developers can work collaboratively, contribute code, squash bugs, add features, ... all outside the usual corporate structure. But unlike traditional open-source projects, by holding the token related to the project they contribute to, developers can also reap an economic payoff. We therefore have the best of both worlds: openness and collaboration outside the straitjacket of the traditional corporation, plus strong financial incentives to deliver a product that works.

Or at least, this is what most commentators think. Personally, I have some doubts.

Standard ICOs are not effective at generating effort from developers (warning, some “econ language” below)

The price of a coin (and of any other asset) is a function of the present discounted value of the stream of dividends (or, more broadly, future benefits) that the coin holders expect to earn. Hence, if all investors are identical, in every period the price of a coin must be such that an investor is indifferent between holding the coin (and enjoying its future benefits) or selling it.

[NOTE: My argument can easily be extended to the case in which investors are differentially patient --- and therefore solve this trade-off between future and present rewards differently --- or have heterogeneous beliefs regarding the stream of dividends. But it is easier to explain with identical investors.]

This implies that the price of a coin should depend on the effort that investors expect the developers to put into the project. If investors expect the developers to work hard and the product to be good, they should also expect that holding the token will generate high future benefits. It follows that the price today must be high, so as to make investors indifferent between holding the token and selling it. Similarly, the expectation of low effort by the developers should translate into a low price today.
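To illustrate this pricing logic, here is a minimal sketch in Python. The discount rate and the two benefit streams are purely hypothetical numbers; the point is only that the same asset commands a different price today depending on the effort investors expect.

```python
# A minimal sketch: the coin's price equals the present discounted value of the
# stream of future benefits investors expect. All numbers are hypothetical.

def price(expected_benefits, discount_rate=0.10):
    """Present discounted value of a stream of expected per-period benefits."""
    return sum(b / (1 + discount_rate) ** t
               for t, b in enumerate(expected_benefits, start=1))

high_effort_benefits = [10, 12, 14, 16, 18]  # investors expect the project to improve
low_effort_benefits  = [10,  8,  6,  4,  2]  # investors expect the project to decay

print("price if high effort is expected:", round(price(high_effort_benefits), 2))
print("price if low effort is expected: ", round(price(low_effort_benefits), 2))
```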

What I want to argue is that, if developers are allowed to sell their tokens on the market, then we should expect the effort put in by the developers to be zero (or, more generally, at its minimum). I'm going to argue this by contradiction; that is, I'm going to show that any other possibility leads to an inconsistency. Suppose that developers are expected to put in some positive level of effort. Given this effort, investors estimate the stream of future benefits and, therefore, the equilibrium price is determined. As argued before, this price is such that the investor is indifferent between holding the coin and selling it. But note that, if exerting effort is costly, then at the price at which the investor is indifferent, the developers will strictly prefer to sell their coins. Intuitively, by selling, a developer gets the reward generated by the effort he is expected to put into the project, without actually putting in any effort. And after he sells, he has no reason to put in any effort. The only logical possibility is that there is no effort---ICOs are not effective at creating incentives for developers.
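The contradiction can be seen with a few lines of arithmetic. The numbers below are hypothetical; what matters is only that the equilibrium price already capitalizes the expected effort, while exerting the effort is costly.

```python
# A minimal sketch of the argument by contradiction. All numbers are hypothetical.

p = 100.0           # equilibrium price if investors expect positive effort
effort_cost = 20.0  # the developer's cost of actually exerting that effort

# If the developer holds and works, he enjoys future benefits worth p
# (by the indifference condition) but pays the cost of effort.
hold_and_work = p - effort_cost   # 80.0

# If the developer sells today at p and shirks, he pockets p with no effort cost.
sell_and_shirk = p                # 100.0

# Selling strictly dominates whenever effort_cost > 0. But if every developer
# sells and shirks, investors' expectation of positive effort is inconsistent:
# the only consistent expectation is zero (minimal) effort.
print(sell_and_shirk > hold_and_work)  # True
```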


Nice story, but too simplistic.

Of course, the above reasoning may break down if we introduce additional elements. I consider some of these possible additional elements here. In my opinion, the takeaway is largely unchanged: developers' effort will be small and short-lived at best.

(1) Developers like to code, and will put in effort even if there are no financial rewards. Point well taken: after all, very successful open-source projects rely almost exclusively on free work by skilled developers. But this simply qualifies my argument to: ICOs cannot generate effort beyond what developers would do for free anyway.

(2) Plenty of developers became extremely rich via ICOs. First of all, making someone rich and generating effort are two different things: giving a contractor 100 bucks before he begins to work will for sure make him richer, but probably won't make him work harder. Also, what I'm saying is that developers won't put in any effort after the ICO. They surely have incentives to work hard before the ICO, so as to ship a product that has some value even if post-ICO effort is low.

(3) By monitoring the developers' wallets, we can check whether the developers sell their tokens. Knowing this, the price will drop if the developers try to sell, which means that they are unable to walk away with the big reward without earning it. That works only if it is infeasible for the developers to short the token (or to short some other token that is sufficiently correlated with the first one). If shorting is possible, again, the developer can easily cash in before doing any actual work.

(4) Put the developers' tokens in a smart contract that disburses tokens slowly over time. See the above point: taking an appropriate short position allows the developers to cash in, and then be indifferent to the movement of the price of the token (see the numerical sketch after this list).

(5) There are talented developers out there who can produce something valuable even at zero effort. Investors do not know whether the developers behind the project are talented. By working hard, the developers can prove to investors that they are talented. Okay, maybe. But this simply implies that effort won't stop at the ICO, but a bit later, as soon as the developers have convinced investors that they are talented.

(6) Developers and investors disagree on the future benefits generated by the project. If developers are more optimistic than investors, they may want to hold on to their tokens (and work hard) rather than sell at the prevailing market price. Okay, maybe. But, similarly to the above point, this logic only implies that developers will work hard for some time. The reason is that this difference in beliefs will eventually shrink as the project matures and its value becomes clearer.
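To see the hedge behind points (3) and (4) in numbers, here is a minimal sketch (all quantities hypothetical). A developer whose tokens vest later shorts the same number of tokens today; his total payoff then no longer depends on the token's future price, and hence not on his own effort.

```python
# A minimal sketch of hedging a vesting schedule with a short position.
# All numbers are hypothetical.

tokens_vesting = 1000   # tokens the developer will receive from the vesting contract
price_today = 10.0      # current market price at which the short is opened

for price_later in [2.0, 10.0, 25.0]:             # possible future prices
    vested_value = tokens_vesting * price_later   # value of the tokens once vested
    short_pnl = tokens_vesting * (price_today - price_later)  # gain/loss on the short
    total = vested_value + short_pnl
    print(f"future price {price_later:5.1f}: total payoff = {total:8.1f}")

# The total is always 10,000 (= tokens_vesting * price_today): the developer has
# effectively cashed in at today's price and is indifferent to the project's success.
```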

Side note: does this criticism apply to traditional ways of raising money?

No, because stocks in a company that is not publicly traded are difficult and costly to sell (and even harder to short), especially if the company is at an early stage. This lack of liquidity is why founders and early employees are all well motivated to work hard.

In the case of publicly traded companies, it is illegal for executives to short their company's stock (unless they report this publicly). As a consequence, stock options are considered an effective way to generate effort.

Finally, shareholders can, in theory, fire the management of a company if they are unhappy with its performance, which gives management an incentive to work hard. This is not possible for token holders.

What can we do about it?

If the group of developers acts in a coordinated way (maybe because they are all working for the same company), this problem can be avoided by allocating a large fraction of the total supply of tokens to the developers. The reason is that a pile of coins that is large enough becomes somewhat illiquid, in the sense that you cannot sell it all at once without destroying the market. You also cannot short your position if it is too large. You are forced to sell slowly over time, effectively keeping your skin in the success of the project.

If instead the developers do NOT act in a coordinated way, each individual developer won't think of himself as able to influence the market price. We are therefore back to the logic laid out earlier. The only difference is that, if developers collectively hold a large share of the market and they all sell in an uncoordinated way, they will effectively destroy the market.

The relevant question is therefore: can we create a mechanism by which a group of developers acts in a coordinated way (so as to internalize the effect of their decision to sell their tokens on the price of the token), but outside a traditional company structure?

I think this is possible. For example, a large fraction of tokens (say 40%) is set aside to reward developers. All these tokens are put into a fund. A second token is created, representing ownership of the fund. These second tokens are distributed to developers and cannot be traded. Once a year, the developers vote on what fraction of the fund to liquidate and send to its owners.

The key aspect of the above mechanism is voting: each person participating in the vote should anticipate that whatever is decided may end up affecting the market price, and should therefore realize that they can't liquidate everything at once, but rather slowly over time.
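Here is a minimal sketch of the mechanism in Python. Everything is hypothetical: the class name, the share allocations, and in particular the voting rule (I use the median of the proposed liquidation fractions, which is only one possible choice; nothing above commits to a specific rule).

```python
# A minimal sketch of the proposed developer fund. All names, numbers, and the
# median voting rule are hypothetical illustrations, not a concrete design.

class DeveloperFund:
    def __init__(self, tokens_in_fund, shares):
        self.tokens = tokens_in_fund   # project tokens locked in the fund
        self.shares = shares           # developer -> non-tradable ownership share

    def annual_vote(self, proposed_fractions):
        """Each developer proposes a liquidation fraction; take the median as the outcome."""
        votes = sorted(proposed_fractions.values())
        return votes[len(votes) // 2]

    def liquidate(self, fraction):
        """Sell `fraction` of the fund's tokens and pay each developer pro rata."""
        sold = self.tokens * fraction
        self.tokens -= sold
        return {dev: sold * share for dev, share in self.shares.items()}

fund = DeveloperFund(tokens_in_fund=400_000,
                     shares={"alice": 0.5, "bob": 0.3, "carol": 0.2})
fraction = fund.annual_vote({"alice": 0.10, "bob": 0.25, "carol": 0.15})
print("voted fraction:", fraction)            # 0.15 (the median proposal)
print("payouts:", fund.liquidate(fraction))   # tokens sent to each developer
print("tokens left in fund:", fund.tokens)    # the rest stays locked until next year

# Because the vote determines one common liquidation fraction, each developer must
# reason about the aggregate sale's impact on the market price, not just his own.
```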

This mechanism also makes it difficult to short. The token representing ownership in the fund is not tradable and therefore cannot be shorted. You could short the underlying token. But what is tricky here is that your exposure to the price of this token depends on the outcomes of all future votes. If you knew what these outcomes were, you could anticipate how many tokens you would receive each year and build an appropriate shorting strategy. But you don't know them, so I think it is going to be extremely hard to be perfectly hedged.


Conclusion

I think most of the ICOs we have seen so far will turn out to be ineffective at creating incentives for developers working on non-traditional, open-source-style projects. However, some changes in the way ICOs are conducted may make them truly effective; I propose one such change above.

The Hungarian power grab

I'm really saddened by the news coming out of Hungary: the Hungarian government is shutting down CEU, a world-class university that does not cost a dime to either the Hungarian government or the majority of its students. The reason: it is one of the last remaining independent institutions in Hungary. With this, the power grab is complete.

 

Some background:

https://www.nytimes.com/2017/04/04/world/europe/hungary-george-soros-university.html?_r=0

https://www.washingtonpost.com/news/global-opinions/wp/2017/04/04/hungarys-xenophobic-attack-on-central-european-university-is-a-threat-to-freedom-everywhere/?utm_term=.fbb9a7fc10ae


An endless source of amusement - seriously, some of these threads are so funny you should not read them at work

A Reddit thread about the most notable Reddit threads in history

Where I learned that:

r/NFL is the subreddit for Super Bowl info instead of its own subreddit, because r/superbowl is about superb owls.

/r/JohnCena is about potato salad and /r/potatosalad is about John Cena

The subreddit /r/marijuanaenthusiasts is about trees because /r/trees was taken by marijuana enthusiasts.

Post-it notes left in apartment, about a guy who kept forgetting that he had left himself post-its because of CO poisoning

The guy who pretended he didn't know what potatoes were (this is so funny that it is not safe for work)

A rant about grilled cheese

Trolling in Spanish (when you do not like "taco shows" you get trolled in Spanish)

And much more!

Should you jump on the #chatbots bandwagon? Maybe not quite yet!

Facebook is betting big on chatbots. Should you do it too? Are chatbots the future? The debate is currently raging, and here at Team Up Start Up we decided to summarize it for you.
 
AI is the future
The premise of the debate is that artificial intelligence (AI) is the future of computing. There is little controversy around this, and all the biggest players are currently investing heavily in this space.
 
How will AI be delivered?
What is not clear is how AI will be delivered to consumers. For example, some of the earliest applications of AI are smart scheduling assistants. In this case, AI resides inside your inbox – in the sense that you interact with it via email. Another likely way we will interact with AI-powered services is through our phones. In this respect, Google Now already scans your inbox/calendar/feed/searches and suggests itineraries, interesting stories, and so on before you ask. Chats/messages are another potential delivery channel.

...


"Not all practice makes perfect"

"I have devoted my career to understanding exactly how practice works to create new and expanded capabilities, with a particular focus on those people who have used practice to become among the best in the world at what they do. And after several decades of studying these best of the best—these “expert performers,” to use the technical term—I have found that no matter what field you study, music or sports or chess or something else, the most effective types of practice all follow the same set of general principles.

...

[people] assume that someone who has been driving for 20 years must be a better driver than someone who has been driving for five, that a doctor who has been practicing medicine for 20 years must be a better doctor than one who has been practicing for five, that a teacher who has been teaching for 20 years must be better than one who has been teaching for five.

But no. Research has shown that, generally speaking, once a person reaches that level of “acceptable” performance and automaticity, the additional years of “practice” don’t lead to improvement. If anything, the doctor or the teacher or the driver who’s been at it for 20 years is likely to be a bit worse than the one who’s been doing it for only five, and the reason is that these automated abilities gradually deteriorate in the absence of deliberate efforts to improve."

 

http://nautil.us/issue/35/boundaries/not-all-practice-makes-perfect