My research with Thomas Gall featured on Insead Knowledge

On blockchain, regulation and pornography

I'm most familiar with the academic literature on Blockchain, especially within Econ and Finance. I'm also somewhat familiar with the discussion about Blockchain in the private sector, especially with respect to startups. But I know very little about the policy debate regarding blockchain. More precisely, I knew almost nothing until last week, when I attended the OECD Global Blockchain Policy Forum, a two-day event fully dedicated to discussing policy and blockchain.

Reflecting on these two days, I think two common themes ran through all the presentations / panel discussions / side chats.

The first is that regulators and policymakers are quite prepared, including on the technical aspects, but their reference point is the technology as of (more or less) 4 years ago. To some extent, I think this is normal, even healthy. You do not want to start regulating / writing policy based on the latest, most frontier technology. The frontier is by definition a moving target: it constantly changes, and many things that are super cool now will eventually fail and be abandoned. Trying to regulate it would be a waste of time for regulators, and probably also a serious constraint on technological development.

However, I thought that many speakers / panelists were a bit too far from the technological frontier. This came across quite clearly during a panel on the new FATF regulation. In short, this new regulation tries to impose on the crypto world (for example, crypto exchanges) the same anti-money-laundering / anti-terrorist-financing rules that apply to "regular" financial institutions. Someone in the room raised the issue of decentralized exchanges: the fact that we already have technology that allows people to sidestep traditional exchanges and trade crypto tokens essentially peer-to-peer. Those on stage replied that they would worry about decentralized exchanges when they become mainstream, somewhat implying that the existing regulation can be adapted to cover those exchanges as well.

This brings me to the second common theme that emerged during the conference: the assumption that it is always possible to adapt the current regulatory framework to every possible new technology. Or, in the words of Bruno Le Maire (the French Minister of the Economy and Finance): "as a society, we have our values. Technology will not change those values. Rather, as a government, we will find a way to fit any new technology with our values" (I'm paraphrasing here, I don't remember the exact words). But I think this is not true, which brings me (finally!) to pornography.

Before the internet, the flow of information was quite regulated. The prime example was pornography: you could access it only if you were 18 or above; otherwise, no porn for you. This regulation was quite effective. But then the Internet came, and now the only thing that stands between anybody and porn is a checkbox. Similar rules existed for health and financial advice: before the internet, only certain categories of people were allowed to communicate with the public on certain subjects; now everybody can. But porn is more salient for me, because the Internet showed up in my home town in Italy when I was 14. As a boy of that age, for me the association between the Internet and porn was quite strong: no more bribing a friend or a friend's above-age cousin, everything was now at my fingertips.

So what happened to those regulations and values? Well, the regulation became extremely difficult to enforce. As a consequence, regulators and law enforcers now focus exclusively on the very nasty stuff (child pornography, revenge porn, and so on), leaving consumers of regular, legal pornography alone, regardless of their age. With respect to our values, I think it is fair to say that we now consider access to pornography a matter to be regulated at the family level, rather than at the state level. So, yes, both regulation and our values have changed as a consequence of technological developments.

This historical precedent teaches an important lesson: some technological developments make existing regulation simply impossible to enforce. I think ignoring this possibility is dangerous, because it could lead to years wasted trying to enforce rules that are just not enforceable, at great cost to all those people who need to follow these rules.

Just to clarify: I’m not saying this is what will happen with Blockchain. What I’m saying is simply that I wish regulators and policymakers would keep this in mind as a possibility.

Libra, the strange beast

At the announcement, I read the white paper and the accompanying documents. I then waited for people smarter than me to make sense of it (with some progress here and here). Then last week I attended the second OECD Blockchain Policy Forum, where Bertrand Perez, COO and Deputy Managing Director of the Libra Association, presented (btw, his was just one of many super interesting presentations that I will discuss in other posts). Despite all this, I still don't get it.

The first thing to know is that Libra is both a blockchain and a cryptocurrency. As a blockchain, it is similar to Ethereum, in the sense that it has a scripting language that can be used to create smart contracts. It could be used, for example, to manage people's identities and data. I suspect that, following recent regulation and recent events, someone at Facebook reached the conclusion that owning a mountain of personal data can also be a liability. Moving some of these data (for example, everything that has to do with authentication) "to the blockchain" could be a way to reduce this liability. In any case, if kept open, it could become the central infrastructure around which several other services are built.

From a purely engineering viewpoint, once you have a blockchain, the easiest thing you can do with it is to build a cryptocurrency, which may explain why they started one. But what they produced is, from an economic viewpoint, something that makes no sense, at least to me.

To start, Libra is supposed to be a stable currency backed by a basket of currencies plus safe short-term assets. But this is a contradiction: the value of a basket of currencies is by definition NOT stable. For example, take a basket composed of 50% US Dollars and 50% Euros. Because the EUR/USD exchange rate fluctuates, the value of this basket will be stable neither with respect to the dollar nor with respect to the euro.
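To make the point concrete, here is a minimal back-of-the-envelope sketch in Python. The basket weights and exchange rates are purely illustrative numbers, not Libra's actual composition:

```python
# Value of a basket that is 50% US Dollars and 50% Euros, measured both
# in USD and in EUR, for two hypothetical EUR/USD exchange rates.
basket_usd = 0.50   # USD component (in dollars)
basket_eur = 0.50   # EUR component (in euros)

for eurusd in (1.10, 1.20):  # hypothetical EUR/USD rates
    value_in_usd = basket_usd + basket_eur * eurusd
    value_in_eur = basket_usd / eurusd + basket_eur
    print(f"EUR/USD = {eurusd:.2f}: "
          f"basket = {value_in_usd:.3f} USD = {value_in_eur:.3f} EUR")
```

Running this gives 1.050 USD (0.955 EUR) at one rate and 1.100 USD (0.917 EUR) at the other: the same basket, stable in neither currency.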

Whoever wrote the white paper is probably aware of this, which is why the volatility of Libra is compared to that of other cryptocurrencies. Indeed, Libra will most likely be less volatile than most cryptocurrencies and hence should be preferred to them for everyday transactions (which is not very informative: the only thing more volatile than some cryptocurrencies is ice cream in the summer). But by this same logic, people in the US should prefer the US Dollar (in one of its electronic forms, such as Venmo) to Libra, because the US Dollar is the most stable thing there is relative to the US Dollar. Similarly, people in Kenya (or in most other African countries) should prefer electronic Shillings (via MPesa) to Libra. This is to say: there is absolutely no reason people should use Libra for everyday payments, outside maybe people living in extremely dysfunctional places such as Venezuela. And even there, these people will probably want to hold Libra rather than use it as a currency, which is precisely what Libra's promoters believe should not happen.

And then there is the most absurd claim of all: that Libra is a tool to foster financial inclusion. Now, a good chunk of my research is about financial inclusion. I have been to several places in Africa and tried out various mobile money systems. And I can tell you: they are everywhere, in a way that is difficult to grasp for people living in the "developed" world. For example, paying for a bus ticket with your phone is just normal in many African countries, while it sounds like science fiction in most "developed" countries. It even works with old "dumb" phones. Of course, financial inclusion remains a huge problem, but not so much with respect to bringing electronic money to poor people around the world: that is rapidly being solved via mobile money. The problem is providing these same people with some form of savings account (i.e., something that generates interest and can be used for long-term planning) and access to credit.

My takeaway is that whoever wrote the white paper has no clue about what "financial inclusion" means, and yet emphatically claims that Libra will solve it. Furthermore, the whole thing is just plain illogical. Mobile money systems do not yet reach 100% of the population: for the poorest of the poor, even a dumb phone may be too expensive. But then, how can a fancy, cutting-edge, blockchain-based cryptocurrency reach those who are left behind by a system that works even on dumb phones?

So what is Libra (the currency)? Is it a severely flawed product? Or is it a perfectly fine product given its goals, which are, however, different from those officially stated? I have no idea, but I think regulators are right to be worried.

What the regulator should know about Blockchain: no-coin, old-coins, legit-coins, shit-coins

Last week I spent a day at the Joint Research Centre of the European Commission discussing Fintech and, more specifically, blockchain and crypto-assets. Despite the fact that my presentation was very academic, most of the discussion that followed had a clear "policy" angle. This got me thinking about the regulatory issues related to blockchain. This piece is the outcome of those ruminations. My attempt is to classify what is going on in the blockchain world in relation to what the regulator should know/do, starting from "low priority" and moving to "high priority" stuff. Comments are welcome!

No-coin

Several companies (IBM, Walmart, ...) are working on private or semi-private blockchains. This is basically "blockchain as a shared database" among different actors, either part of the same consortium or part of the same supply chain.

I have to admit that this is the application of blockchain that I understand the least. The reason is that, as a shared database, a blockchain is quite bad: traditional solutions are much faster and more efficient. The only advantage of a blockchain, which may be relevant in some contexts, is that the data maintained by a blockchain do not belong to anyone in particular (equivalently, they belong to the entire network), whereas traditional solutions require an organization that maintains the data and therefore has control over them. In some applications, this control may be problematic.

From the regulatory standpoint, the only issue I see is that if data do not belong to anyone and are instead "on the blockchain," it is not clear who is responsible for making sure that these data comply with whatever regulation exists (for example, regulation about how long data should be kept, when data should be erased, ...). In most cases, it will be a matter of making sure that the blockchain is designed so that the resulting data comply with existing regulations. In other cases, it will be a matter of designating an "authority" that can edit the data maintained on the blockchain.

Old-coins

A second avenue being explored is the so-called "tokenization" of existing assets. In this case, an existing class of assets (shares in a company, ownership titles, futures contracts, ...) is exchanged "on the blockchain" rather than via traditional methods. For example, the company Overstock is apparently planning to launch a blockchain-based stock exchange. From the regulatory standpoint, we are facing a well-understood asset class, with a well-defined regulation that should be followed whether the asset is traded on the blockchain or not.

Despite this, I think "tokenization" opens up significant regulatory challenges. The reason is that a large fraction of current regulation assumes that retail investors can access financial products only via financial intermediaries. Hence, to make sure that your average pensioner stays clear of complex financial products, current regulation forbids financial intermediaries from offering such products to this category of investors. When such products are "tokenized" and sold on the blockchain, there is no intermediary anymore. Current regulation will need to adapt to a world in which finance is more and more disintermediated.

Legit-coins vs shit-coins

All other projects can be placed into one of two bins. The first bin contains what I call legit-coins. These are crypto-assets that have potential value because they are necessary in order to use a specific piece of software (in this case, a blockchain-based protocol). They are a novel asset class; they should exist and thrive, but, of course, they also require sensible regulation (some ideas on this later).

The second bin contains what I call shit-coins. These are crypto-assets that derive their value from an action that someone will perform in the future, but that are not old-coins. To put it another way, these are assets that are sold together with a "promise to do something" (either implicit or explicit) without being a contract. They are what the regulator should worry about most, because they are sold to investors on the basis of a false premise: that the seller is under an obligation to deliver something (or do something).

Unfortunately, shit-coins abound. For example, any token that has value because "the holder can redeem it for USD/EUR/..." falls into this category, because its value depends on a given organization complying with this promise, which it may not do (probably not too surprising for those of you who followed the Tether saga). Tokens that are supposed to have value because "we will distribute profits among token holders" also fall into this category.

An interesting corollary is that any token that is necessary in order to use a not-yet-available or closed-source piece of software should be considered a shit-coin. The reason is that the organization controlling the software is making an implicit promise: that only a specific token will work with their software. But absent a contract, this is an empty promise: the software can be changed so as to accept other tokens, greatly reducing the value of the initial token. Quite different is the case of a token that is necessary to operate an existing, open-source piece of software. Of course, the fact that software is open-source does not guarantee that the developers will not change it later in a way that hurts investors. But in this case, at least, anybody can fork the software, which makes such changes less likely.

An interesting side note is that the difference between legit- and shit-coins often comes down to whether a particular action depends on software or on humans. For example, the DAO was a smart contract that, among other things, would have redistributed profits among token holders, and hence was, according to my classification, "legit" (it turns out there was a bug and it did not go as intended, but that is a different story). Another example is the use of smart contracts to create "stable coins", that is, tokens that maintain a stable value because they are backed by assets that are accessible by a smart contract and not by humans.
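To make the software-versus-humans distinction concrete, here is a minimal, purely hypothetical sketch written in Python rather than in an actual smart-contract language. Every name and number is invented for illustration; the point is only that redemption is enforced by code, with no human discretion involved:

```python
class CollateralBackedToken:
    """Toy model of a token whose collateral is controlled by code.

    If the contract holds enough collateral, redemption is automatic.
    A shit-coin, by contrast, relies on an organization *choosing*
    to honor the same promise at some point in the future.
    """

    def __init__(self):
        self.collateral = 0.0   # collateral locked in the contract
        self.supply = 0.0       # tokens in circulation

    def mint(self, collateral_in, tokens_per_unit):
        """Lock collateral and issue the corresponding amount of tokens."""
        self.collateral += collateral_in
        minted = collateral_in * tokens_per_unit
        self.supply += minted
        return minted

    def redeem(self, tokens, tokens_per_unit):
        """Burn tokens and release the corresponding collateral, if available."""
        needed = tokens / tokens_per_unit
        if needed > self.collateral:
            raise ValueError("insufficient collateral")
        self.collateral -= needed
        self.supply -= tokens
        return needed


# Usage: lock 1 unit of collateral at 100 tokens per unit, then redeem half.
c = CollateralBackedToken()
print(c.mint(1.0, 100))    # 100.0 tokens issued
print(c.redeem(50, 100))   # 0.5 units of collateral released, guaranteed by code
```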

Is it all good with legit-coins?

I think legit-coins are a legitimate asset class, but, like all other asset classes, they require sensible regulation. On this topic I refer you to a recent working paper of mine. From the regulatory standpoint, the paper has two takeaways. The first is that the startups (or, more broadly, the developers) behind a blockchain project should maintain skin in the game: they should always hold a large share of the total tokens on their "balance sheet." This seems quite obvious until you realize that most startups sell 90% of their tokens at the ICO and distribute some more to advisers and early investors, so that in the end very little is left with the people who are supposed to work hard and improve the software. The second is that the tokens held by the developers behind a project should vest for a non-trivial period (say 5 years), while currently most ICOs have no vesting at all, or a vesting period of only 1/2 year, after which everybody is free to sell their tokens and retire.
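As a purely illustrative sketch of what the vesting recommendation implies, here is a simple linear vesting calculation in Python. The token allocation and the schedules are hypothetical numbers, not taken from any actual ICO:

```python
def vested_fraction(months_elapsed, vesting_months):
    """Fraction of the team's tokens that can be sold after `months_elapsed`,
    assuming simple linear vesting over `vesting_months`."""
    return min(1.0, months_elapsed / vesting_months)

dev_tokens = 10_000_000          # hypothetical allocation retained by the team

for schedule in (6, 60):         # 6-month vesting vs 5-year (60-month) vesting
    sellable = dev_tokens * vested_fraction(12, schedule)
    print(f"{schedule}-month vesting, one year in: {sellable:,.0f} tokens sellable")
```

With a 6-month schedule the team can dump its entire allocation after a year; with a 5-year schedule only 20% is sellable, so skin in the game is preserved for much longer.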


"Financial incentives for open source development: the case of Blockchain"

Finally, a research paper on a "hot" topic! If you are interested in the financial side of blockchain (ICOs, cryptocurrencies, and so on), don't forget to check it out:

https://www.dropbox.com/s/z477ya37fjsf4w7/Canidio-blockchain-software-development.pdf?dl=1

The point of the paper is to show that the way blockchain projects are financed has an effect on the developers' incentives to work hard. The exercise is to ignore everything else and see how these incentives determine the value of the protocol and the price of the token (see page 16, where I talk about the law of motion of the price). Of course, nobody should take this literally, because there are so many other things that matter which are not in the model. Still, the paper has interesting results regarding when a team should hold an ICO (as late as possible), the stock of tokens that should stay with the dev team (as high as possible, contrary to the common "no central bank" creed), and the potential value of different forms of vesting. I think it is also relevant to investors, who should be aware that nothing prevents developers from offloading all their tokens and stopping their work. In fact, this is supposed to happen "in the equilibrium of the model", that is, it is not a remote possibility at all!

Depth vs novelty in research: differences between disciplines and across time.

I think that, with some degree of approximation, we can summarize the quality of a piece of research by two variables. The first is the novelty of the research question asked. I call this variable n. The second is how exhaustive the answer to this question is. I call this variable d for depth.

We can think of the importance of a given piece of research (call it V for value) as determined by both n and d:

V = α n + d

where α determines the relative importance of novelty vs depth. V in turn determines the standing of a specific piece of research: how well it is published, how widely it is read, its influence on subsequent work, and so on.
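As a toy illustration of how α can flip the ranking of two papers, consider the following Python snippet. The papers and numbers are invented:

```python
def value(n, d, alpha):
    """Importance V of a paper with novelty n and depth d, given weight alpha."""
    return alpha * n + d

# Two hypothetical papers: one very novel but shallow, one deep but incremental.
novel_paper = dict(n=0.9, d=0.3)
deep_paper = dict(n=0.2, d=0.8)

for alpha in (0.2, 2.0):  # a low-alpha field vs a high-alpha field
    v_novel = value(alpha=alpha, **novel_paper)
    v_deep = value(alpha=alpha, **deep_paper)
    winner = "novel" if v_novel > v_deep else "deep"
    print(f"alpha = {alpha}: V(novel) = {v_novel:.2f}, "
          f"V(deep) = {v_deep:.2f} -> the {winner} paper ranks higher")
```

With α = 0.2 the deep paper wins (0.84 vs 0.48); with α = 2.0 the novel one does (2.10 vs 1.20). Same two papers, different discipline, different ranking.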

I think that α is discipline-specific. For example, papers in marketing, strategy, and organizational behavior usually ask super interesting research questions. To my eyes, however, the answers to these questions are often highly incomplete. My interpretation is that these disciplines have a high α. Similarly for psychology: a super interesting research question followed by an experiment with 10 subjects. At the other end of the spectrum I would put mathematics. Most ground-breaking, super influential math papers provide very detailed answers to well-known puzzles. Not only that, but mathematicians have the habit of throwing math puzzles at each other (sometimes via blogs), as if the novelty of a research question is not particularly important to them, but providing the answer is. Using the above framework, therefore, I can say that math has an α close to zero. Economics (my discipline) is somewhere in between: both the novelty of the research question and the depth of the answer matter in how a piece of research is evaluated. As a consequence, if a researcher thinks that he/she has stumbled upon an extremely novel research question, he/she will probably not blast it to the world without first also having produced a research paper (of course, exceptions to this rule exist!). At the same time, research papers often have endless appendices that are supposed to prove that the results are actually robust.

Before I say anything else, it is important to clarify one thing: in every discipline there are research papers that are both extremely novel and extremely deep (maybe yours!). Those are the top papers: they have a very high V and are extremely influential. But to think about α, you need to think about the papers that are just below a given threshold (for example, a threshold for publication). Then you have to ask: is it more likely that this paper crosses the threshold if it improves on the n dimension or on the d dimension? The point I'm making is that the answer to this question depends on the discipline we are considering.

To some extent, the specific tools employed by each discipline are actually a function of α. Taking this logic to its extreme, we can say that mathematicians are a group of people with a very low α, and as a consequence they employ math. Economists have, on average, an intermediate α. As a consequence, economists use math and statistics in a somewhat rigorous way, but are willing to cut some corners (relative to pure mathematicians) in order to provide an answer to a question they think is interesting. Other disciplines have an even higher α and are therefore happy to use case studies or work with very few observations to answer their questions, provided that those questions have a high n.

Finally, I think that α is also time-specific, that is, there are subtle shifts in α over time. These shifts determine subtle changes in the type of research that is read/published in a given discipline, and in the methods used. If I had to take a wild guess on where α is heading, I would say that it is increasing over time: novelty will become more important. I say this because we live in an era in which information (including scientific research) is almost completely freely available. Hence, the limiting factor in the consumption of information is not the availability of information itself, but rather the availability of complementary inputs such as attention and time. Obviously, attention has more to do with n than with d: I'm more likely to read past the title of a paper if I think that the research question is interesting.

Does all this matter? Well, it matters if you are a researcher, especially a young one. You should know what the α of your discipline (or your subdiscipline) is and where it is heading, and write your papers accordingly. Second, it matters for the general direction of research. If α is indeed increasing, then we may be heading toward a world in which a lot of interesting questions are asked, but not very many deep answers are given. What does such a world look like? Well, this is definitely a very interesting research question!

p.s. Of course, the assumption that we can describe all research in all fields by just two variables is quite heroic. In particular, depth may mean different things in different disciplines (number of equations, number of observations, length of the questionnaire, ...). So not only does α change with discipline/time, but so does how we measure d. But, hey, this is a blog post and therefore more about n than d!