I spent the summer reading everything I could about air pollution: here is what I learned

In case you are in a hurry, here is the short version.

I discovered that our health and cognitive ability immediately worsen when we are exposed to even low levels of air pollution (i.e., below or at WHO-established thresholds). These effects are so large that we can quantify them in percentage points of GDP lost. They are also very local and short-run. Hence, a city/region investing in reducing air pollution will immediately benefit economically (in terms of higher GDP). 

If we sum up its short-run and long-run negative effects, we see that the cost of air pollution is gigantic. It should be a central concern in Europe, where the race to find alternatives to Russian gas is becoming frantic. But unfortunately, nobody in policy circles seems to care. For example, subsidizing the burning of wood and other biomass is completely idiotic: it is sold as a “green option” on the rather disputable grounds that it is carbon neutral, while it generates loads of air pollution (not to mention the destruction of forests).

 

In case you have time, here is the longer version.

This summer, I went down a bit of a rabbit hole: I read two books about air pollution (“Choked” by Beth Gardiner and “Clearing the Air” by Tim Smedley) and several research papers (mostly, but not only, by economists). I was motivated by the sudden realization that my views on air pollution were deeply wrong.

Mine was essentially the mainstream view, based on the scientific evidence produced up to about ten years ago. In short, I thought air pollution was a problem only:

(i) in places with extremely high levels of air pollution---for example, in cities in India, China, and other parts of the developing world.

(ii) in the long run. That is: non-extreme pollution levels (like the ones commonly experienced in cities across the developed world) may nonetheless have a detrimental effect after 20 or 30 years of exposure. 

But about ten years ago, several researchers (including economists studying health, labor, and education) started wondering whether low pollution levels can have short-run negative effects. Rather quickly, various studies showed that these negative effects exist and are very large. The recent pandemic provided even more evidence: the various lockdowns led to a decrease in air pollution and an immediate improvement in several health outcomes. Air pollution is not only a problem when it is extreme or in the long run (as I thought). It is also a big problem at low levels and in the short run.

The academic literature

Before discussing some of these recent studies, two comments. First, they mainly consider one specific pollutant: particulate matter with a diameter smaller than 2.5 microns (PM2.5), measured in μg/m3 (micrograms per cubic meter of air). To give you a sense of the numbers, before the pandemic, the average annual concentration of PM2.5 in some European and North American cities was: 23 μg/m3 in Milan; 15 μg/m3 in Paris; 13 μg/m3 in London, Los Angeles, and Chicago; 7 μg/m3 in New York (compare this with 100 μg/m3 in Delhi; 110 μg/m3 in Hotan, China; 90 μg/m3 in Lahore; 80 μg/m3 in Dhaka: source here).

This is not to say that PM2.5 is the most dangerous pollutant. In fact, several recent studies show that ultrafine particles (those with a diameter smaller than 0.1 microns) can pass directly into the bloodstream, reaching various organs, including the brain, and are probably the most dangerous pollutant. Nonetheless, the concentrations of PM2.5 and ultrafine particles are strongly correlated, as both are produced via combustion.

Second, the vast majority of these studies look at relatively small variations in pollution levels. For example, the difference in pollution near a highway toll plaza before and after the installation of E-ZPass (an automatic electronic toll collection system). Or normal variations in pollution levels due to weather changes. Or those due to the installation of a cheap indoor air filter. The message is, therefore, that a small reduction in air pollution leads to measurable improvements in several outcomes---we could extrapolate what happens when we fully eliminate air pollution, but that is not what those studies do.

Alright, what do those studies show? I put a detailed list at the end of the post, but to summarize: they consider pollution levels that are “normal” in the US and Europe and find that:

  • air pollution immediately worsens our cognitive ability: on more polluted days, stock traders earn less (lose more), chess players make more mistakes, judges take longer to decide on a verdict, baseball umpires make more mistakes, and there are more traffic accidents.

  • air pollution negatively affects learning and school performance: kids’ test scores are lower on more polluted days; kids’ learning outcomes are worse in more polluted years. Also, the effects are very large! For example, Gilraine (2020) finds that installing air purifiers in schools increased student performance almost as much as reducing class sizes by a third.

  • air pollution has measurable instantaneous and short-run effects on our health. For example, on more polluted days, people have more heart attacks. Also, introducing an automatic toll system (E-ZPass) in some American cities reduced pollution in the surrounding areas, decreasing premature births and low birth weight. Finally, exposure to higher levels of air pollution over 4-6 years increases the risk of developing dementia, Alzheimer's disease, ASD, and Parkinson's disease, of having a stroke, as well as overall mortality.

Is eliminating air pollution cost-effective?

Given the above evidence, you may wonder whether we can quantify the cost of air pollution in terms of GDP; after all, people make more mistakes, are less attentive, and are more likely to be sick on more polluted days/years. This is exactly what a team from the OECD found. Focusing on air pollution variations due to weather events, they establish that  “a 1 μg/m3 increase in PM2.5 concentration [...] causes a 0.8% reduction in real GDP that same year. Ninety-five percent of this impact is due to reductions in output per worker, which can occur through greater absenteeism at work or reduced labor productivity.” A similar study in China found similar effects: a “1% decrease in PM2.5 nationwide increases gross domestic product by 0.039%” (Fu et al., 2021).

Taken at face value, these effects are gigantic. To give a sense, suppose that a city like Paris (where I live) were to eliminate air pollution. Also, take the effect found in the OECD study and assume that it is linear, in the sense that the effect of reducing pollution from, say, 15 µg/m3 to 14 µg/m3 is equal to the effect of reducing pollution from, say, 7 µg/m3 to 6 µg/m3 (which is probably not true but is nonetheless the best guess we can make). Paris’ GDP would jump by approximately 12% (!!). Or take Milan, a city where I used to live. If Milan were to eliminate air pollution, its GDP would jump by 18% (!!!!). Similarly, London's GDP would jump by approximately 10%, and New York's GDP would jump by approximately 6%. These numbers are huge, and they only consider short-term (i.e., year-on-year) benefits without including economic benefits accruing over a longer period, such as better overall health and educational outcomes. The overall effect on GDP is likely to be much larger.
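
For transparency, here is a minimal sketch of the arithmetic behind these numbers, done in Python. It simply multiplies each city’s pre-pandemic PM2.5 level (quoted above) by the OECD elasticity of 0.8% of GDP per µg/m3; the linearity is, again, an assumption.

```python
# Back-of-the-envelope: GDP gain from fully eliminating PM2.5, assuming the
# OECD elasticity (about 0.8% of GDP per 1 µg/m3) applies linearly.
ELASTICITY = 0.8  # % of GDP per µg/m3 (assumed constant across pollution levels)

pm25 = {"Milan": 23, "Paris": 15, "London": 13, "New York": 7}  # µg/m3, pre-pandemic annual averages

for city, level in pm25.items():
    print(f"{city}: ~+{ELASTICITY * level:.0f}% of GDP")
# Milan ~+18%, Paris ~+12%, London ~+10%, New York ~+6%
```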

All right, this is the benefit side. How about the cost side? This is a tough question for which I could not find a precise answer. But I will try to figure out some numbers to get a sense of the ballpark. First, note that the policies that eliminate air pollution are, more or less, the same policies that also eliminate greenhouse gas emissions: electrification of the means of transportation, clean energy production, and so on (more below). To get a sense of the cost of eliminating air pollution, we can therefore look at the cost estimate of the green transition, which a recent McKinsey report puts at 6.8 to 8.8 percent of global GDP between now and 2030 (of which approximately 2.8 percent of global GDP in new investment, and the rest investments already planned).

Can we compare the McKinsey number with the benefit of eliminating air pollution derived earlier? Well, we have to be careful with this comparison because:

  • The McKinsey number is an investment. Some of these investments will generate economic activity and should not be considered a cost. The overall cost is likely to be much, much smaller.

  • The McKinsey number is the investment required to achieve the green transition (and eliminate air pollution) over time, while the back-of-the-envelope calculations I made earlier are supposed to capture the benefit of instantaneously eliminating air pollution.

Nonetheless, I find it striking that the McKinsey number is in line with the benefit of eliminating air pollution I derived earlier. To me, it suggests a rather amazing possibility: that the green transition may pay for itself once we include in our cost-benefit analysis the short-run benefit of reducing air pollution!

Air pollution and climate change

Another fascinating angle is the relationship between air pollution and climate change. At a macro level, the two issues almost completely coincide because the primary cause of both problems is the combustion of fossil fuels.

(As an aside, I initially thought intensive animal farming was a problem for climate change because of methane emissions, but not for air pollution. It turns out that intensive animal farming is also an important source of air pollution, causing significant problems in the surrounding communities. Similarly, I thought that the shipping industry might be an issue for climate change but not for air pollution. I was wrong on this one, too: commercial ships burn one of the worst types of fuel available, and by doing so, they cause very high levels of pollution around big ports. Finally, airplanes emit greenhouse gases, but I could not figure out their impact on air pollution. End of the long aside.)

In my opinion, it is politically easier to push for the green transition by arguing that it will reduce air pollution rather than arguing that it will mitigate climate change. The point is that the net benefit of mitigating climate change is very uncertain and probably negative for a single country (of course, it is positive and large if we look at the world as a whole, but very few decisions are taken at this level). For example, the US may invest in reducing its GHG emissions. But if China continues building coal power plants, the benefit of these investments (for the US) may be zero. If positive, it may not accrue in the US but in some other country --- say, because droughts in Iran are less severe than if the US had not invested. On top of that, these benefits accrue very far in the future. Of course, some people may be willing to make the required sacrifices out of an ethical principle, but the fraction of the population that only looks at costs and benefits will not be convinced.


Instead, when the same policies are justified in terms of reducing air pollution, the benefits of these investments are local (i.e., nobody cares anymore about what China does) and accrue in the short term. You can expect cold-blooded business people, bankers, and economists (!) to join the protests on Friday!

But when we move away from the macro level to look at specific cases, we discover instances where there may be tension between fighting climate change and reducing air pollution. The most relevant at the moment is the burning of biomass (wood and pellets). The reason is that policymakers worldwide consider the burning of wood and pellets a “climate friendly” source of energy, on the grounds that it is carbon neutral if we re-plant another tree in place of the one that was burned. But this is not as clear as it looks because (i) the promise of re-planting could be hollow, (ii) the tree's age determines its ability to absorb CO2, with older trees absorbing more CO2, and (iii) the soil’s ability to absorb CO2 is also reduced (see the discussion here). Overall, carbon neutrality depends on the age of the tree that was cut relative to the age at which the replacing tree will be cut, and it is very difficult to calculate, let alone verify. On the other hand, it is quite clear that wood burning is a leading cause of air pollution. Furthermore, because of its touted green credentials, wood burning is a rapidly increasing source of pollution in developed countries. To add to the problem, European policymakers decided to subsidize the burning of wood and other biomass to compensate for the shortfall from the lack of Russian gas. Currently, the most visible outcome of this policy is the large-scale destruction of forests. This winter, it may cause unbreathable air in Europe.


What to do about it

The most effective way to reduce air pollution is via political activism: communities of various sizes have a large impact on street-level air pollution. In this respect, the book “Clearing the Air” by Tim Smedley nicely summarizes how various cities worldwide were able to reduce air pollution and proposes a point-by-point blueprint that any city can follow. BTW, the central recommendation is to ban diesel cars, because even the most modern ones pollute 20 to 50 times more than an equivalent gasoline car.

(Another aside: the history of how European policymakers came to subsidize diesel cars is one of those gut-wrenching stories mixing bureaucratic ineptitude, corporate greed, cheating, and the death of millions due to air pollution, second only to the story of how lead ended up in gasoline and then in our bodies).

But this is not to say that individual behavior cannot make a difference; on the contrary! Remember that small variations in air pollution exposure can have a measurable effect on cognitive ability and health. At the same time, the most dangerous type of pollution (ultrafine particles) decays rather rapidly as we move away from its source. The two facts together imply that our behavior matters tremendously. Walking along a polluted street or a side street; walking close to the cars or far away from them (if the sidewalk is wide); installing air filters; choosing which window to open (preferably one not facing a busy street); planting trees and other plants that can filter ultrafine particles: all these choices make a difference to the level of pollution you (and your loved ones) are exposed to, and ultimately to your (and their) health. Again, I think Tim Smedley does a wonderful job explaining this, so I recommend reading his book.

Epilogue

At the end of this journey, I was left wondering: why are these facts not better known? Why don’t they feature more prominently in the policy debate? Part of the explanation is that air pollution is a rather complex topic. For example, despite my best efforts, it is still quite unclear to me how different pollutants transform into other pollutants once they enter the atmosphere (for example, a source may emit no PM2.5 at all, and nonetheless PM2.5 is produced in the atmosphere from the interaction of various other gases that are emitted).

But I think complexity is, at best, only part of the story. I believe the root cause is our innate reluctance to consider the “normal state of affairs” as extremely dangerous. To say it differently, you may see a newspaper article claiming that “pollution levels are three times the average and this is causing serious health issues”. You will never see a newspaper article claiming that “pollution levels are close to the average and this is causing serious health issues”. But this is exactly the point: what we have considered “normal” up until now is causing severe damage to our cognitive ability and health. How do we make this message worth a newspaper headline? I think answering this question is key to improving the air we breathe.

Appendix: some notable studies mentioned in the post

  • Major-league baseball umpires: Archsmith et al. (2016) find that the number of incorrect calls increases by 2.6% when PM2.5 increases by 10 μg/m3.

  • NYSE returns: Heyes et al. (2016) find that a 7 μg/m3 increase in PM2.5 in New York causes a same-day fall of 12% in NYSE returns.

  • Individual investor's performance:  Huang et al. (2020) “find a negative relation between air pollution and trade performance.”

  • Chess players: Künn et al. (2019) find that ``an increase of 10 µg/m³ raises the probability of making an error by 1.5 percentage points and increases the magnitude of the errors by 9.4%.’’

  • Trial judges: Kahn & Li (2020) studied court cases in China. They find that a 1% increase in PM2.5 leads to a 0.182% increase in case duration (i.e., judges take longer to decide)

  • Traffic accidents: Sager (2019) looks at data from the UK and finds “an increase of 0.3–0.6% in the number of vehicles involved in accidents per day for each additional 1 μg/m3 of PM2.5.”

  • Students’ test scores are lower on more polluted days (Ebenstein et al. 2016, Roth, 2021, Zhang et al. 2018, Gilraine and Zheng 2022);  

  • Students’ school achievements are lower in more polluted years (Ebenstein et al., 2016; Persico and Venator, 2021; Gilraine, 2020; Duque and Gilraine, 2020; Heissel et al., 2020; Mullen et al., 2020; Marcotte, 2017); 

  • The lockdown decreased air pollution in most US cities, which caused an instantaneous reduction in heart attacks  (Aung et al. 2022)

  • Dominici et al. (2022) studied the impact of pollution exposure well below WHO thresholds over four years. They find  “a 6% to 8% increased risk of mortality per 10 micrograms per cubic meter (μg/m3 ) increase in PM 2.5 exposure across the different analyses, with stronger associations at exposure levels below the current annual national standard of 12 μg/m3.”

  • Fu et al. 2019: “Short- and long-term PM2.5 exposure was associated with increased risks of stroke [...] and mortality [...] of stroke. Long-term PM2.5 exposure was associated with increased risks of dementia [...], Alzheimer's disease [...], ASD [...], and Parkinson's disease [...]”

  • The relationship between pollution and premature births and low birth weight is so strong that the introduction of an automatic toll system (E-ZPass) in American cities reduced both problems in areas close to toll plazas (by 10.8 % and 11.8 %, respectively, Currie and Walker, 2011)

NFTs for art and collectibles

[Image: two identical all-white paintings side by side; the one on the right is captioned “Lame copy of the image on the left”.]

The totally-white painting on the left is worth millions of dollars. The identical totally-white painting on the right is worth nothing. Why is that? Because despite looking identical, the two paintings differ in their history: one was ``made’’ by a famous artist, who touched it, held it, and signed it. The second, instead, was made by an ordinary person.


This example illustrates that the value of a piece of art lies, to a large extent, in its ability to connect us to a famous artist. This principle is most evident in modern art, but it applies more broadly. There are countless examples of anonymous old paintings that jumped 100x in value after it was discovered that they were painted by some old master. This connection is such an essential aspect of the value of a piece of art that a large part of the art world (galleries, auction houses, curators) is really in the business of establishing and verifying the link between the piece of art and its creator. By the way, you may wonder if this principle implies that every piece of crap an artist produces is, by definition, art. Well, kind of.


The same logic also applies to collectibles: why does copy #1 of a famous book fetch a higher price than copy #5321 of the same book? Or why is a pen that a famous person used to sign an important document more valuable than an identical pen (same brand, model, year)? In both cases, the answer is that the two objects may be identical, and yet they have a different history.

Enter blockchain, the public, tamper-proof infrastructure allowing anyone to send and receive various digital objects called tokens. Some of these tokens are fungible, because two tokens of the same type are completely interchangeable---for example, a Bitcoin is identical to another Bitcoin. But tokens can also be non-fungible, that is, each token is ``one-of-a-kind’’ and different from all other tokens. Non-fungible tokens (NFTs from now on) can represent specific digital files, like a digital signature that applies to a specific file but not to an identical copy. Because the blockchain is public, the history of ownership of a specific file can easily be traced back to its creator.
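
To make this concrete, here is a toy sketch in Python (a plain in-memory registry, not an actual blockchain or any real NFT standard) of what such a record keeps track of: each token binds the hash of a specific file to the address that minted it and to the full chain of owners.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Token:
    token_id: int
    file_hash: str              # fingerprint of one specific digital file
    minter: str                 # address that created (minted) the token
    owners: list[str] = field(default_factory=list)  # full ownership history

class ToyRegistry:
    """A centralized stand-in for what an NFT contract records on-chain."""
    def __init__(self):
        self.tokens: dict[int, Token] = {}
        self.next_id = 0

    def mint(self, file_bytes: bytes, minter: str) -> int:
        token = Token(self.next_id, hashlib.sha256(file_bytes).hexdigest(),
                      minter, [minter])
        self.tokens[self.next_id] = token
        self.next_id += 1
        return token.token_id

    def transfer(self, token_id: int, new_owner: str) -> None:
        self.tokens[token_id].owners.append(new_owner)

    def provenance(self, token_id: int) -> tuple[str, list[str]]:
        t = self.tokens[token_id]
        return t.minter, t.owners   # traceable back to the minting address

# Two bit-by-bit identical files minted by different addresses produce two
# distinct tokens: what differs is the recorded history, not the bits.
```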

Thanks to NFTs, therefore, artists can produce digital art and, for the first time, distinguish an “original” version of this art from all other, bit-by-bit-identical copies. An NFT connects the owner of a digital work of art with its creator. Even better, because this connection is digital, public, and tamper-proof, it can be verified without the need for curators, galleries, and the usual intermediaries of the art world. Finally, digital communities that have existed for decades and have already created their own culture and memes can also have their own art and collectibles. Based on this premise, billions of dollars have been invested in NFTs.


But is this premise correct? To a large extent, yes, but there are also important differences between the world of NFTs and the world of physical art/collectibles.


To start, the fact that an NFT can be easily traced back to its creator means that it can be traced back to the address (for example, an Ethereum address) that created (or minted) the NFT. But who is the person or organization that controls this address? Are we sure it is the person/organization we expect? For example, someone could take an NFT, copy the underlying file (say, an image), use it to mint a new NFT, and then sell it as an original (this practice is called “copymint”). Anyone can verify that different addresses minted the original NFT and its copy, but who knows which address belongs to the artist who created the piece of art in the first place? This is where ``traditional’’ art intermediation can play an important role: art galleries, curation services, and marketplaces can help establish the connection between the address that minted an NFT and the person/organization that created the art.


Second, an NFT allows us to connect a digital file to an address and, hopefully, to the person who created it. If the person who created it is (or may become) a well-established artist, then the NFT may have financial value. If this person is just a random guy creating his first JPEG, then, most likely, the value of this NFT is zero. This is an obvious observation, which was, however, often ignored in the NFT frenzy of 2021 / early 2022. To say it differently, it is quite difficult to invest in contemporary art, especially for an outsider: which young artist will become famous? Which famous artist will become super famous or will be forgotten? These are very tough questions, and they also apply to the world of NFTs.


Third, the premise that a specific file is the “original” and an identical copy is not “original” is difficult to justify when you look closely. Even on a single machine, files are constantly copied, for example, from disk to RAM. Also, “uploading” a file means making a copy on some server (or on the blockchain). How can the original be the file uploaded on some server when a different copy is on the artist’s computer? Does an “original file” (to be distinguished from its copies) even exist? This points to an important issue: we identify a given file as the “original” because the artist says so, but there is nothing in the process that identifies that version as the original one.


Fourth, on the more positive side, NFTs allow something new: creating an ongoing relationship between the artist and the art owner. For example, you usually receive updates when you purchase a piece of software. Similarly, artists could send out updates to those who purchased their art, creating different versions of the same piece of art. Or send them gifts or additional items. Or create special events only for NFT owners. This is one of the most active areas of experimentation, with new and exciting ideas popping up constantly. However, this new and exciting opportunity also raises several questions. If the point of an NFT is to become part of some sort of club, in what sense is it art? Also, the JPEG may differ for each NFT, but if the point of all of them is to create this special connection with the author, in what sense are they non-fungible?


To conclude, NFTs hold the promise of making digital art and digital collectibles economically valuable by connecting specific files to their creators. However, it is important to keep in mind that several open issues still need to be resolved before this promise reaches its full potential.

A new token distribution mechanism - Repeated auctions with incremental vesting

For most blockchain startups, the initial distribution of their tokens matters tremendously. The reason is that selling tokens is both a way to raise funds and also a way to build an initial community of users, developers, and contributors to the project. Because there are strong network effects (i.e., people will use and contribute to the project only if other people also do so), getting these people on board at the beginning is crucial to a project's long-term success. However, despite several attempts (ICO, IEO, IDO, various types of airdrops, …), I would argue that we have not yet found a suitable token distribution mechanism, at least one that can be applied in general.

This article has two goals. The first is to introduce several properties that a token distribution mechanism should satisfy. The second is to introduce a new mechanism that meets these properties. This mechanism boils down to a sequence of auctions. Tokens sold in the first auction are immediately liquid; tokens sold in the subsequent auction vest over a given period (or better, in this context, are released slowly over a given period); tokens sold in the next auction have an even longer vesting period; and so on.

What do we want from a token distribution mechanism?

A token distribution mechanism has three main objectives:

(1) Raise funds to be used in the project's development.

(2) Make sure that the project’s core team has the right incentives: it should maintain a sufficient stake in the project and be committed for the long run.

(3) Engage the broader community of potential users/developers. This has to do with how the token is initially sold and what happens afterward.

There is also an important secondary objective:

(4) Get professional investors/speculators to participate.

The reason is that professional investors and speculators have the resources/skills to do due diligence on the project and, by their activity, help discover the initial price of the token. I think this is an important element for achieving objective #3: if you have no idea what the correct price of a token is, then it is very hard to incentivize and motivate users and potential contributors properly.

I think most of the issues with existing token distribution mechanisms arise from the fact that, although (4) is necessary for (3) because it provides price discovery, in a very practical sense, there is a tension between (4) and (3) because professional investors and speculators may crowd out (or front run) other buyers. A somewhat naive solution is to airdrop (i.e., give away for free) the token to people who may contribute. But that, in practice, often means rewarding past behavior rather than incentivizing future contribution (airdropped tokens can be sold immediately or as soon as market conditions turn unfavorable).

A better token allocation mechanism

It seems to me that a natural way to meet all the above objectives is to use different vesting periods: users and contributors may be willing to hold tokens for a few years or even longer while doing so would be very costly for speculators. My idea is, therefore, to screen the different types of buyers who may be interested in buying tokens (i.e., investors/speculators vs. contributors) by imposing different vesting periods.

Imagine creating different "vintages" of tokens, each corresponding to a different lockup period. Hence, vintage 0 is immediately liquid, vintage 1 can be traded only after 6 months, vintage 2 can be traded only after 1 year, and so on. You then run a series of auctions, one per vintage, starting with the most liquid one. The first one should attract speculators and therefore help with price discovery. All subsequent auctions are less valuable for speculators and more valuable for users. The price should progressively go down (due to the inconvenience of holding the token), allowing users and contributors to purchase the token cheaply. Finally, in general, auctions are an excellent mechanism for raising revenues. I expect this to be the case here as well.
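
As an illustration only (the vintage schedule, the supplies, and the uniform-price format are my own assumptions, not part of a specification), here is a minimal Python sketch of how the sequence of vintages and auctions could be parameterized:

```python
from dataclasses import dataclass

@dataclass
class Vintage:
    index: int
    lockup_months: int   # how long tokens from this auction remain non-tradable
    supply: int          # number of tokens sold in this auction

# Hypothetical schedule: later vintages lock up for longer, so they are less
# attractive to speculators and relatively more attractive to long-run users.
schedule = [
    Vintage(0, lockup_months=0,  supply=1_000_000),   # liquid: price discovery
    Vintage(1, lockup_months=6,  supply=1_000_000),
    Vintage(2, lockup_months=12, supply=1_000_000),
    Vintage(3, lockup_months=24, supply=1_000_000),
]

def clearing_price(vintage: Vintage, bids: list[tuple[str, float, int]]) -> float:
    """Toy uniform-price auction. Bids are (bidder, price, quantity);
    the clearing price is the lowest accepted bid. Allocation details omitted."""
    sold = 0
    for _, price, qty in sorted(bids, key=lambda b: -b[1]):
        sold += qty
        if sold >= vintage.supply:
            return price
    return 0.0  # undersubscribed: everything sells at the reserve (here, zero)

# Run one auction per vintage, in order; the expectation is that clearing
# prices decline with the length of the lockup.
```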

This mechanism is also helpful in achieving objective #2. The tokens allocated to the core team should vest, and the length of the vesting should be significantly longer than the longest-vesting tokens sold at the auction. I don’t know what the right numbers are—perhaps the longest-vesting tokens sold at the auction should vest over 3 or 4 years, and the core team ones should vest over 5 or 6. In any case, the point is that this mechanism also helps to put some structure on how the core team is incentivized.

One can, of course, embellish the mechanism in various ways. For example, the token could start trading in a DEX after the first auction (the liquid one). Any new info related to the project (or even aggregate movements in the crypto market) should affect the price of the token on the DEX. Users who purchase tokens with a longer vesting period get a higher discount relative to the DEX price. This does not necessarily mean that the sequence of auction prices is decreasing, but I believe it nonetheless captures the idea that those who hold the token for longer get a larger reward.

Concluding thoughts

When considering token allocation mechanisms, most people worry about volatility in the price of tokens after the initial allocation. The reason is that a random drop in the token price may cause token holders to panic and sell their tokens, depressing the price and inducing further sales. The resulting spiral can kill a project precisely because all those who are supposed to contribute to the project (per objective #3 above), end up not holding tokens anymore and therefore not having any stake in the project.

The mechanism described earlier does not directly address this issue but makes it irrelevant: project contributors are supposed to purchase tokens that vest over a period of time. Short-run variations in the price of tokens should not affect their commitment to the project. If anything, they may want to contribute more when the price is low, as a way to increase the token’s future price (when they expect to sell).

Any comments? Please leave them below.

Recent article on Insead Knowledge on privacy, blockchain and the current pandemic

In a nutshell: we know how to manage third-party generated personal data (such as driving licenses, health care records, passports, ...) in a private but verifiable way using blockchain. This technology should be expanded as much as possible.

However, we do not yet have the technology to handle user-generated personal data (for example, location data) in the same way. This is key to doing automated, digital contact tracing in a private but verifiable way. I propose some ideas on how this could be done, again using blockchain.

https://knowledge.insead.edu/blog/insead-blog/safeguarding-privacy-in-a-pandemic-13956

Private contact tracing via blockchain and secure multi-party computation

Premise (1): To make it easier to comment/share, I posted this article also on Linkedin (here). Please, comment/share there if you have anything to contribute.

Premise (2): I'm an economist working on blockchain, not a computer scientist. I will probably butcher some of the technical terminology --- apologies for that.

Aggressive contact tracing (for example, using cell phone data) seems to have worked quite well in South Korea. In short, each person in a country downloads an app, which records all of this person's movements. If someone tests positive for COVID-19, it is then possible to alert everyone who has been in close proximity to this person (see this article for more details). The issue is that recording all your movements and storing this information on some third-party server is less than ideal from the privacy viewpoint (for a more detailed discussion, see this great article by Yuval Noah Harari).

My idea: maybe there is a better way to manage the trade-off between public health and privacy by using a combination of blockchain and secure multi-party computation.

In a nutshell: the data recorded by the app on the phone are encrypted locally and uploaded somewhere. Using secure multi-party computation, it should be possible (but here I need more input from experts) to figure out whether any two traces overlap at any time---that is, whether two people were ever in the same place at the same time---without ever decrypting the data. This way, you can flag those who need to be tested/quarantined without ever knowing their movements.
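
To illustrate only the computation being asked for (not the cryptography), here is a toy Python sketch on unencrypted data: traces are discretized into (grid cell, time slot) buckets, and two people are flagged when their bucket sets intersect. In the actual proposal, this set intersection would be carried out on encrypted buckets via secure multi-party computation, so the raw traces are never revealed; the cell size and time slot below are arbitrary choices for the example.

```python
from datetime import datetime

def discretize(trace, cell_size=0.001, slot_minutes=15):
    """Turn a list of (lat, lon, timestamp) points into a set of
    (grid_cell, time_slot) buckets."""
    buckets = set()
    for lat, lon, ts in trace:
        cell = (round(lat / cell_size), round(lon / cell_size))
        slot = int(ts.timestamp() // (slot_minutes * 60))
        buckets.add((cell, slot))
    return buckets

def traces_overlap(trace_a, trace_b) -> bool:
    """True if the two people were ever in the same cell during the same slot.
    In the proposed system this intersection would be computed on encrypted
    data, so neither party (nor any server) ever sees the other's trace."""
    return bool(discretize(trace_a) & discretize(trace_b))

# Usage: two nearby points at the same time are flagged as an overlap.
t = datetime(2020, 3, 1, 10, 0)
alice = [(48.8566, 2.3522, t)]
bob = [(48.8566, 2.3523, t)]
print(traces_overlap(alice, bob))  # True: same cell, same 15-minute slot
```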

Such a system can probably work without blockchain if implemented at a national level. But, as we all very well know, no country is an island: such a system works best if everybody on the planet participates. Blockchain is precisely the technology that allows multiple countries/regions/people to jointly maintain a large and complex database.

The above description is, clearly, just a first step (which may not even be technically feasible). Other elements that will be required are:

  • some form of authentication, to be able to link the encrypted trace to a person (or at least to a set of unique biometric attributes)

  • a dedicated blockchain on which to store this identity information together with a pointer to the encrypted data (which would most likely not be stored directly on the blockchain)

  • a way in which public health authorities can send out warnings to people who may have been infected.

  • many more things...

In any case, I'm interested in knowing your opinion.

My research with Thomas Gall featured on Insead Knowledge

On blockchain, regulation and pornography

I’m most familiar with the academic literature on blockchain, especially within Econ and Finance. I’m also somewhat familiar with the discussion about blockchain in the private sector, especially with respect to startups. But I know very little about the policy debate regarding blockchain. More precisely, I knew almost nothing until last week, when I attended the OECD Global Blockchain Policy Forum, a two-day event fully dedicated to discussing policy and blockchain.

Reflecting on these two days, I think there are two common themes that were present in all presentations, panel discussions, and side chats.

The first one is that regulators and policymakers are quite prepared, including on the technical aspects, but their reference point is the technology as of (more or less) four years ago. To some extent, I think this is normal, even healthy. You do not want to start regulating or writing policy based on the latest, most frontier technology. The frontier is by definition a moving target: it constantly changes, and many things that are super cool now will eventually fail and be abandoned. Trying to regulate it would be a waste of time for the regulators, and probably also a serious constraint on technological development.

However, I thought that many speakers and panelists were a bit too far from the technological frontier. This came across quite clearly during a panel on the new FATF regulation. In short, this new regulation tries to impose on the crypto world---for example, crypto exchanges---the same anti-money-laundering and anti-terrorist-financing rules that apply to “regular” financial institutions. Someone in the room raised the issue of decentralized exchanges: the fact that we already have technology that allows people to sidestep traditional exchanges and trade crypto tokens essentially in a peer-to-peer way. Those on stage replied that they will worry about decentralized exchanges when they become mainstream, somewhat implying that the existing regulation can be adapted to cover those exchanges as well.

This brings me to the second common theme that emerged during the conference: the assumption that it is always possible to adapt the current regulatory framework to every possible new technology. Or, to say it with the words of Bruno Le Maire (the French Minister of the Economy and Finance): “as a society, we have our values. Technology will not change those values. Rather, as a government, we will find a way to fit any new technology with our values” (I’m paraphrasing here; I don’t remember the exact words). But I think this is not true, which brings me (finally!) to pornography.

Before the internet, the flow of information was quite regulated. The prime example was pornography: you could access it only if you were 18 or above; otherwise, no porn for you. This regulation was quite effective. But then the internet came, and now the only thing that stands between anybody and porn is a checkbox. Similar rules existed for health and financial advice: before the internet, only some categories of people were allowed to communicate with the public on certain subjects; now everybody can. But porn is more salient for me, because the internet showed up in my hometown in Italy when I was 14. As a boy of that age, the association between the internet and porn was quite strong: no more bribing a friend, or a friend’s of-age cousin; everything was now at my fingertips.

So what has happened to those regulations and values? Well, the regulation became extremely difficult to enforce. As a consequence, regulators and law enforcers now focus exclusively on the very nasty stuff (child pornography, revenge porn, and so on), leaving consumers of regular, legal pornography alone, regardless of their age. With respect to our values, I think it is fair to say that we now consider access to pornography a matter to be regulated at the family level rather than at the state level. So, yes, both regulation and our values have changed as a consequence of technological developments.

This historical precedent teaches an important lesson: some technological developments make existing regulation simply impossible to enforce. I think ignoring this possibility is dangerous, because it could lead to years wasted trying to enforce rules that are just not enforceable, at great cost for all those people who need to follow these rules.

Just to clarify: I’m not saying this is what will happen with Blockchain. What I’m saying is simply that I wish regulators and policymakers would keep this in mind as a possibility.

Libra, the strange beast

When Libra was announced, I read the white paper and the accompanying documents. I then waited for people smarter than me to make sense of it (with some progress here and here). Then, last week, I attended the second OECD Blockchain Policy Forum, where Bertrand Perez, COO and Deputy Managing Director of the Libra Association, presented (by the way, his was just one of many super interesting presentations that I will discuss in other posts). Despite all this, I still don’t get it.

The first thing to know is that Libra is both a blockchain and a cryptocurrency. As a blockchain, it is similar to Ethereum, in the sense that it has a scripting language that can be used to create smart contracts. It could be used, for example, to manage people’s identity and data. I suspect that, following recent regulation and recent events, someone at Facebook reached the conclusion that owning a mountain of personal data can also be a liability. Moving some of these data (for example, everything that has to do with authentication) “to the blockchain” could be a way to reduce this liability. In any case, if kept open, it could become the central infrastructure around which several other services are built.

From a purely engineering viewpoint, once you have a blockchain, the easiest thing you can do with it is build a cryptocurrency, which may explain why they started with one. But what they produced is, from the economic viewpoint, something that makes no sense, at least to me.

To start, Libra is supposed to be a stable currency backed by a basket of currencies plus safe short-term assets. But this is a contradiction: the value of a basket of currencies is by definition NOT stable. For example, take a basket composed of 50% US dollars and 50% euros. Because the EUR/USD exchange rate fluctuates, the value of this basket will be constant neither with respect to the dollar nor with respect to the euro.
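
A two-line numerical example makes the point (the 50/50 basket and the exchange rates are made up for illustration):

```python
# Hypothetical basket: each Libra backed by 0.5 USD + 0.5 EUR.
def libra_in_usd(eur_usd: float) -> float:
    return 0.5 * 1.0 + 0.5 * eur_usd

print(libra_in_usd(1.10))  # 1.05 USD per Libra
print(libra_in_usd(1.20))  # 1.10 USD per Libra
# The dollar value of one Libra moves with the EUR/USD rate (and so does its
# euro value): the basket cannot be stable against both currencies at once.
```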

Whoever wrote the white paper is probably aware of this, which is why the volatility of Libra is compared to that of other cryptocurrencies. Indeed, Libra will most likely be less volatile than most cryptocurrencies and hence should be preferred to them for everyday transactions (which is not very informative: the only thing more volatile than some cryptocurrencies is ice cream in the summer). But by this same logic, people in the US should prefer the US dollar (in one of its electronic forms, such as Venmo) to Libra, because the US dollar is the most stable thing there is relative to the US dollar. Similarly, people in Kenya (or in most other African countries) should prefer electronic shillings (via MPesa) to Libra. This is to say: there is absolutely no reason people should use Libra for everyday payments, outside maybe people living in extremely dysfunctional places such as Venezuela. And even there, these people will probably want to hold Libra rather than use it as a currency, which is what Libra's promoters believe should not happen.

And then there is the most absurd claim of all: that Libra is a tool to foster financial inclusion. Now, a good chunk of my research is about financial inclusion. I have been to several places in Africa and tried out various mobile money systems. And I can tell you: they are everywhere, in a way that is difficult to understand for people living in the “developed” world. For example, paying for a bus ticket with your phone is just normal in many African countries, while it sounds like science fiction in most “developed” countries. It even works with old “dumb” phones. Of course, financial inclusion remains a huge problem, but not so much with respect to bringing electronic money to poor people around the world---this is rapidly being solved via mobile money. The problem is providing these same people with some form of savings account (i.e., something that generates interest and can be used for long-term planning) and access to credit.

My takeaway is that whoever wrote the white paper has no clue what “financial inclusion” is, and yet emphatically claims that Libra will solve it. Furthermore, the whole thing is just plain illogical. Mobile money systems do not yet reach 100% of the population: for the poorest of the poor, even a dumb phone may be too expensive. But then, how can a fancy, cutting-edge, blockchain-based cryptocurrency reach those who are left behind by a system that works even on dumb phones?

So what is Libra (the currency)? Is it a severely flawed product? Or is it a perfectly fine product given its goals, which are, however, different from the officially stated ones? I have no idea, but I think regulators are right to be worried.

What the regulator should know about Blockchain: no-coin, old-coins, legit-coins, shit-coins

Last week I spent a day at the Joint Research Centre of the European Commission discussing fintech and, more specifically, blockchain and cryptoassets. Despite the fact that my presentation was very academic, most of the ensuing discussion had a clear "policy" angle. This got me thinking about the regulatory issues related to blockchain. This piece is the outcome of these ruminations. My attempt is to classify what is going on in the blockchain world in relation to what the regulator should know/do, starting from "low priority" and moving to "high priority" stuff. Comments are welcome!

No-coin

Several companies (IBM, Walmart, ...) are working on private or semi-private blockchains. This is basically "blockchain as a shared database" among different actors, either part of the same consortium or part of the same supply chain.

I have to admit that this is the application of blockchain that I understand the least. The reason is that, as a shared database, blockchain is quite bad---traditional solutions are much faster and more efficient. The only advantage of a blockchain, which may be relevant in some contexts, is that data maintained by a blockchain do not belong to anyone in particular (equivalently, they belong to the entire network), whereas traditional solutions require an organization that maintains the data and therefore has control over them. In some applications, this control may be problematic.

From the regulatory standpoint, the only issue I see is that if data do not belong to anyone and are instead "on the blockchain," it is not clear who is responsible for making sure that these data comply with whatever regulation exists (for example, regulation about how long data should be kept, when data should be erased, ...). In most cases, it will be a matter of making sure that the blockchain is designed so that the resulting data comply with existing regulations. In other cases it will be about designating an "authority" that can edit the data maintained on the blockchain. 

Old-coins

A second avenue that is being explored is the so-called "tokenization" of existing assets. In this case, an existing class of assets (shares in a company, ownership titles, futures contracts, ...) is exchanged "on the blockchain" rather than via traditional methods. For example, the company Overstock is apparently planning to launch a blockchain-based stock exchange. From the regulatory standpoint, we are facing a well-understood asset class, with a well-defined regulation that should be followed whether the asset is traded on the blockchain or not.

Despite this, I think "tokenization" opens significant regulatory challenges. The reason is that a large fraction of current regulation assumes that retail investors can access financial products only via financial intermediaries. Hence, to make sure that your average pensioner stays clear of complex financial products, current regulation forbids financial intermediaries from offering such products to this category of investors. When such products are "tokenized" and sold on the blockchain, there is no intermediary anymore. Current regulation will need to adapt to a world in which finance is more and more disintermediated. 

Legit-coins vs shit-coins

All other projects can be placed into one of two bins. The first bin contains what I call legit-coins. These are crypto-assets that have potential value because they are necessary in order to use a specific piece of software (in this case, a blockchain-based protocol). They are a novel asset class; they should exist and thrive, but, of course, they also require sensible regulation (some ideas on this later).

The second bin contains what I call shit-coins. These are crypto-assets that derive their value from an action that someone will perform in the future, but are not old-coins. To say it another way, these are assets that are sold together with a "promise to do something" (either implicit or explicit) without being a contract. They are what the regulator should worry about most, because they are sold to investors on the basis of a false premise: that the seller is under an obligation to deliver something (or do something).

Unfortunately, shit-coins abound. For example, any token that has value because "the holder can redeem it for USD/EUR/..." falls into this category, because its value depends on a given organization complying with this promise, which it may not do (probably not too surprising for those of you who followed the Tether saga). Tokens that are supposed to have value because "we will distribute profits among token holders" also fall into this category.

An interesting corollary is that any token that is necessary in order to use a not-yet-available or closed-source piece of software should be considered a shit-coin. The reason is that the organization controlling the software is making an implicit promise: that only a specific token will be usable with their software. But absent a contract, this is an empty promise: the software can be changed so as to accept other tokens, greatly reducing the value of the initial token. Quite different is the case in which a token is necessary to operate an existing, open-source piece of software. Of course, the fact that a piece of software is open source does not guarantee that the developers will not change it later in a way that hurts investors. But in this case, at least, anybody can fork the software, making such changes less likely.

An interesting side note is that the difference between legit- and shit-coins often comes down to whether a particular action depends on software or on humans. For example, the DAO was a smart contract that, among other things, would have redistributed profits among the token holders, and hence was, according to my classification, "legit" (it turns out there was a bug and things did not go as intended, but that is a different story). Another example is the use of smart contracts to create "stable coins", that is, tokens that maintain a stable value because they are backed by assets that are accessible by a smart contract and not by humans.

Is it all good with legit-coins?

I think legit-coins are a legitimate asset class, but, like all other asset classes, they also require sensible regulation. On this topic, I refer you to a recent working paper of mine. From the regulatory standpoint, there are two takeaways from that paper. The first is that the startups (or, more broadly, the developers) behind a blockchain project should maintain skin in the game: they should always keep a large share of total tokens on their "balance sheet." This seems quite obvious until you realize that most startups sell 90% of their tokens at ICO and distribute some more to advisers and early investors, so that in the end very little is left with the people who are supposed to work hard and improve the software. The second is that the tokens held by the developers behind a project should vest for a non-trivial period (say 5 years), while currently most ICOs have no vesting at all, or a vesting period of only 1/2 year, after which everybody is free to sell their tokens and retire.

 

 

 

"Financial incentives for open source development: the case of Blockchain"

Finally a research paper on a "hot" topic!  If you are interested in the financial side of blockchain (ICOs, cryptocurrencies, and so on), don't forget to check it out:

https://www.dropbox.com/s/z477ya37fjsf4w7/Canidio-blockchain-software-development.pdf?dl=1

The point of the paper is to show that the way blockchain projects are financed has an effect on the developers' incentives to work hard. The exercise is to ignore everything else and see how these incentives determine the value of the protocol and the price of the token (see page 16, where I talk about the law of motion of the price). Of course, nobody should take this literally, because there are so many other things that matter which are not in the model. Still, the paper has interesting results regarding when a team should hold an ICO (as late as possible), the stock of tokens that should stay with the dev team (as high as possible, contrary to the common "no central bank" creed), and the potential value of different forms of vesting. I think it is also relevant to investors, as they should be aware that there is nothing that prevents developers from offloading all their tokens and stopping their work. In fact, this is supposed to happen "in the equilibrium of the model", that is, it is not a remote possibility at all!

Depth vs novelty in research: differences between disciplines and across time.

I think that, with some degree of approximation, we can summarize the quality of a piece of research by two variables. The first is the novelty of the research question asked. I call this variable n. The second is how exhaustive the answer to this question is. I call this variable d for depth.

We can think of the importance of a given piece of research (call it V for value) as determined by both n and d:

V = α n + d
where α determines the relative importance of novelty vs. depth. V in turn determines the standing of a specific piece of research: how well it is published, how widely it is read, its influence on subsequent work, and so on.

I think that α is discipline-specific. For example, papers in marketing, strategy, and organizational behavior usually ask super interesting research questions. To my eyes, however, the answers to these questions are often highly incomplete. My interpretation is that these disciplines have a high α. Similarly for psychology: a super interesting research question followed by an experiment with 10 subjects. At the other end of the spectrum I would put mathematics. Most ground-breaking, super influential math papers provide very detailed answers to well-known puzzles. Not only that, but mathematicians have the habit of throwing math puzzles at each other (sometimes via blogs), as if the novelty of a research question is not particularly important to them, but providing the answer is. Using the above framework, therefore, I can say that math has an α close to zero. Economics (my discipline) is somewhere in between: both the novelty of the research question and the depth of the answer matter in how a piece of research is evaluated. As a consequence, if a researcher thinks that he/she has stumbled upon an extremely novel research question, he/she will probably not blast it to the world without first also having produced a research paper (of course, exceptions to this rule exist!). At the same time, research papers often have endless appendices that are supposed to show that the results are actually robust.

Before I say anything else, it is important to clarify one thing: in every discipline there are research papers that are both extremely novel and extremely deep (maybe yours!). Those are the top papers: they have very high V and are extremely influential. But to think about α, you need to think about the papers that are just below a given threshold (for example, a threshold for publication). Then you have to ask: is it more likely that this paper crosses the threshold if it improves on the n dimension or on the d dimension? The point I'm making is that the answer to this question depends on the discipline we are considering.
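
A toy comparison (the numbers are invented purely for illustration) makes the threshold point concrete: the same two papers rank differently depending on the discipline's α.

```python
def value(n: float, d: float, alpha: float) -> float:
    """V = alpha * n + d, as defined above."""
    return alpha * n + d

deep_paper = {"n": 2, "d": 8}    # well-known question, exhaustive answer
novel_paper = {"n": 8, "d": 2}   # brand-new question, preliminary answer

for alpha in (0.2, 1.0, 3.0):
    print(alpha, value(**deep_paper, alpha=alpha), value(**novel_paper, alpha=alpha))
# alpha = 0.2 (math-like): the deep paper wins (8.4 vs 3.6)
# alpha = 1.0 (economics-like): they tie (10 vs 10)
# alpha = 3.0 (high-alpha disciplines): the novel paper wins (14 vs 26)
```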

To some extent, the specific tools employed by each discipline are actually a function of α. Taking this logic to its extreme, we can say that mathematicians are a group of people with a very low α, and as a consequence they employ math. Economists have, on average, an intermediate α. As a consequence, economists use math and statistics in a somewhat rigorous way, but are willing to cut some corners (relative to pure mathematicians) in order to provide an answer to a question they think is interesting. Other disciplines have an even higher α and are therefore happy to use case studies or work with very few observations to answer their questions, provided that those questions have a high n.

Finally, I think that α is also time-specific; that is, there are subtle shifts in α over time. These shifts determine subtle changes in the type of research that is read/published in a given discipline, and in the methods used. If I had to take a wild guess on where we are heading with α, I would say that it is increasing over time: novelty will become more important. I say this because we live in an era in which information (including scientific research) is almost completely freely available. Hence, the limiting factor in the consumption of information is not the availability of information itself, but rather the availability of complementary inputs such as attention and time. Obviously, attention has more to do with n than with d: I'm more likely to read past the title of a paper if I think that the research question is interesting.

Does all this matter? First, it matters if you are a researcher, especially a young one: you should know what the α of your discipline (or subdiscipline) is and where it is heading, and write your papers accordingly. Second, it matters for the general direction of research. If α is indeed increasing, then we may be heading towards a world in which a lot of interesting questions are asked, but not very many deep answers are given. What does such a world look like? Well, this is definitely a very interesting research question!

p.s. Of course, the assumption that we can describe all research in all fields with just two variables is quite heroic. In particular, depth may mean different things in different disciplines (number of equations, number of observations, length of the questionnaire, ...). So not only does α change with discipline and time, but so does the way we measure d. But, hey, this is a blog post and therefore more about n than d!

 

The vast majority of ICOs are seriously flawed: here is why and what to do about it.

Initial Coin Offerings (ICOs) are becoming the main way in which blockchain-based projects are financed. In short (and with a few simplifications): a group of developers comes up with a new blockchain-based protocol. Together with the protocol, the developers create a token (that is, a new cryptocurrency) that will be used together with the protocol. Some of these tokens are sold to investors, who buy them in the expectation that the protocol will be successful and hence that the token will have a use and a value. The remaining tokens are allocated to the developers working on the project.

For some examples of such projects, see Sia, Storj, and Golem.

The consensus is that ICOs are revolutionary because they allow groups of developers to raise funds even if they are not organized as a company. Similarly to open-source projects, several developers can work collaboratively, contribute code, squash bugs, add features, and so on, all outside the usual corporate structure. But unlike traditional open-source projects, by holding the token related to the project they contribute to, developers can also reap an economic payoff. We therefore get the best of both worlds: openness and collaboration outside the straitjacket of the traditional corporation, and strong financial incentives to deliver a product that works.

Or at least, this is what most commentators think. Personally, I have some doubts.

Standard ICOs are not effective at generating effort from developers (warning: some “econ language” below)

The price of a coin (and of any other asset) is a function of the present discounted value of the stream of dividends (or, more broadly, future benefits) that coin holders expect to earn. Hence, if all investors are identical, in every period the price of the coin must be such that an investor is indifferent between holding the coin (and enjoying its future benefits) or selling it.
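To fix ideas, this is the standard asset-pricing identity I have in mind (a textbook formula, nothing specific to ICOs); δ is the common discount factor and b_{t+s} the per-coin benefit expected in period t+s.

```latex
% Price today = present discounted value of expected future per-coin benefits:
p_t = \sum_{s=1}^{\infty} \delta^{s} \, \mathbb{E}_t\!\left[ b_{t+s} \right]

% Equivalently, in recursive form, the indifference condition between
% selling today (getting p_t) and holding for one more period:
p_t = \delta \, \mathbb{E}_t\!\left[ b_{t+1} + p_{t+1} \right]
```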

[NOTE: My argument can easily be extended to the case in which investors are differentially patient --- and therefore resolve the trade-off between present and future rewards differently --- or have heterogeneous beliefs about the stream of dividends. But it is easier to explain with identical investors.]

This implies that the price of a coin should depend on the effort that investors expect the developers to put into the project. If investors expect the developers to work hard and the product to be good, they should also expect that holding the token will generate high future benefits. It follows that the price today must be high, so as to make investors indifferent between holding the token and selling it. Similarly, the expectation of low effort by the developers should translate into a low price today.

What I want to argue is that, if developers are allowed to sell their tokens on the market, then we should expect the effort put in by the developers to be zero (or, more generally, at its minimum). I'm going to argue this by contradiction, that is, I'm going to show that any other possibility leads to an inconsistency. Suppose that developers are expected to put in some positive level of effort. Given this effort, investors estimate the stream of future benefits, which pins down the equilibrium price. As argued before, this price is such that an investor is indifferent between holding the coin and selling it. But note that, if exerting effort is costly, then at the price at which investors are indifferent the developers strictly prefer to sell their coins. Intuitively, by selling, a developer pockets the reward generated by the expectation that he will put effort into the project, without actually putting in any effort; and once he has sold, he has no reason to put in any effort at all. The only logically consistent possibility is that there is no effort---ICOs are not effective at creating incentives for developers.
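In slightly more formal terms (my own stripped-down notation, not a full model): let e* be the effort investors expect, c(e) the developer's cost of effort, and p(e*) the equilibrium price that capitalizes that expectation.

```latex
% The price already embeds the expected effort e*, so the developer compares:
\underbrace{p(e^{*}) - c(e^{*})}_{\text{hold and actually work}}
\;<\;
\underbrace{p(e^{*})}_{\text{sell at once and shirk}}
\qquad \text{whenever } c(e^{*}) > 0

% Hence any conjectured effort above the minimum is inconsistent with
% equilibrium: the only expectation that survives is minimal effort.
```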


Nice story, but too simplistic.

Of course, the above reasoning may break down if we introduce additional elements. I consider some of these possible additions here. In my opinion, the takeaway is largely unchanged: developers' effort will be small and short-lived at best.

(1) Developers like to code, and will put in effort even if there are no financial rewards. Point well taken: after all, very successful open-source projects rely almost exclusively on free work by skilled developers. But this simply qualifies my argument to: ICOs cannot generate effort beyond what developers would do anyway for free.

(2) Plenty of developers became extremely rich via ICOs. First of all, making someone rich and generating effort are two different things: giving a contractor 100 bucks before he begins to work will for sure make him richer, but it probably won't make him work harder. Also, what I'm saying is that developers won't put in any effort after the ICO. They certainly have an incentive to work hard before the ICO, so as to ship a product that has some value even if post-ICO effort is low.

(3) By monitoring the developers' wallets, we can check whether the developers sell their tokens. Knowing this, the price will drop as soon as the developers try to sell, which means that they are unable to walk away with the big reward without earning it. That works only if it is infeasible for the developers to short the token (or to short some other token that is sufficiently correlated with the first one). If shorting is possible, the developer can again easily cash in before doing any actual work (see the numerical sketch after this list).

(4) Put the developers' tokens in a smart contract that disburses them slowly over time. See the previous point: taking an appropriate short position allows the developers to cash in now and then be indifferent to the movement of the token's price.

(5) There are talented developers out there who can produce something valuable even at zero effort. Investors do not know whether the developers behind the project are talented. By working hard, the developers can prove to investors that they are talented. Okay, maybe. But this simply implies that effort won't stop at the ICO but a bit later, as soon as the developers have convinced investors that they are talented.

(6) Developers and investors disagree on the future benefits generated by the project. If developers are more optimistic than investors, they may want to hold on to their tokens (and work hard) rather than sell at the prevailing market price. Okay, maybe. But, similarly to the above point, this logic only implies that developers will work hard for some time: the difference in beliefs will eventually shrink as the project matures and its value becomes clearer.
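Here is a minimal numerical sketch of the point made in (3) and (4). It is illustrative only: the function and the numbers are made up, and a real hedge would also have to deal with margin, fees, and borrowing costs.

```python
# A developer who holds (or will eventually receive) `vested` tokens and opens
# a short position of the same size at today's price ends up with the same
# payoff no matter where the price goes -- so the token grant no longer
# rewards effort. All names and numbers are invented for the example.

def payoff(vested: float, short_size: float, p_today: float, p_future: float) -> float:
    """Value of the vested tokens at the future price, plus the
    profit/loss on a short position opened at today's price."""
    long_leg = vested * p_future
    short_leg = short_size * (p_today - p_future)
    return long_leg + short_leg

p_today = 10.0
for p_future in (2.0, 10.0, 25.0):   # project flops, stagnates, or succeeds
    print(p_future, payoff(vested=1_000, short_size=1_000,
                           p_today=p_today, p_future=p_future))
# Prints 10000.0 in all three scenarios: once hedged, the developer is paid
# the same whether or not the project succeeds, so costly effort is pointless.
```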

Side note: does this criticism apply to traditional ways of raising money?

No, because stocks in a company that is not publicly traded are difficult and costly to sell (and even harder to short), especially if the company is at an early stage. This lack of liquidity is why founders and early employees are all well motivated to work hard.

In the case of publicly traded companies, it is illegal for executives to short their company's stock (unless they report this publicly). As a consequence, stock options are considered an effective way to generate effort.

Finally, shareholders can, in theory, fire the management of a company if they are unhappy with its performance, which gives managers an incentive to work hard. This is not possible for token holders.

What can we do about it?

If the group of developers acts in a coordinated way (maybe because they all work for the same company), this problem can be avoided by allocating a large fraction of the total supply of tokens to the developers. The reason is that a pile of coins that is large enough becomes somewhat illiquid, in the sense that you cannot sell it all at once without destroying the market. You also cannot short a position that large. You are forced to sell slowly over time, effectively keeping your skin in the project's success.

If instead developers do NOT act in a coordinated way, each individual developer won't think of himself as able to influence the market price. We are therefore back to the logic laid out earlier. The only difference is that, if developers collectively hold a large share of the market and they all sell in an uncoordinated way, they will effectively destroy the market.

The relevant question is therefore: can we create a mechanism by which a group of developers acts in a coordinated way (so as to internalize the effect of their decision to sell on the token's price) but outside a traditional company structure?

I think this is possible. For example, a large fraction of the tokens (say 40%) is set aside to reward developers. All these tokens are put into a fund. A second token is created, representing ownership of the fund. These second tokens are distributed to developers and cannot be traded. Once a year, the developers vote on what fraction of the fund to liquidate and send to its owners.

The key aspect of the above mechanism is voting: each person participating in the vote should anticipate that whatever is decided may end up affecting the market price, and should therefore realize that they cannot liquidate everything at once, but only slowly over time.
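To make the mechanism concrete, here is a toy sketch in code. It is not a real smart contract: the class, the share-weighted-average voting rule, and all the numbers are invented purely for illustration.

```python
# Toy model of the proposed scheme: developer-reward tokens sit in a common
# fund; holders of a non-tradable "fund share" vote once a year on what
# fraction of the fund to liquidate, and the proceeds are split pro rata.

class DeveloperFund:
    def __init__(self, tokens_in_fund: float, shares: dict[str, float]):
        self.tokens = tokens_in_fund   # project tokens locked in the fund
        self.shares = shares           # developer -> fraction of fund shares (sums to 1)

    def annual_vote(self, proposals: dict[str, float]) -> float:
        """Share-weighted average of the proposed liquidation fractions
        (just one of many possible voting rules)."""
        return sum(self.shares[dev] * frac for dev, frac in proposals.items())

    def liquidate(self, fraction: float) -> dict[str, float]:
        """Release `fraction` of the fund and distribute it pro rata."""
        released = self.tokens * fraction
        self.tokens -= released
        return {dev: released * share for dev, share in self.shares.items()}

fund = DeveloperFund(tokens_in_fund=40_000_000,
                     shares={"alice": 0.5, "bob": 0.3, "carol": 0.2})
fraction = fund.annual_vote({"alice": 0.10, "bob": 0.20, "carol": 0.10})
print(fraction)                   # 0.13 -> the fraction unlocked this year
print(fund.liquidate(fraction))   # tokens each developer can sell now
print(fund.tokens)                # the rest stays locked until future votes
```

Because everyone's future payout depends on the whole sequence of votes, no single developer can unilaterally cash out, which is exactly the coordination the mechanism is meant to create.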

This mechanism also makes shorting difficult. The token representing ownership in the fund is not tradable and therefore cannot be shorted. You could short the underlying token, but the tricky part is that your exposure to its price depends on the outcomes of all future votes. If you knew those outcomes, you could anticipate how many tokens you will receive each year and build an appropriate shorting strategy. But you don't know them, so I think it is going to be extremely hard to be perfectly hedged.


Conclusion

I think most of the ICOs we have seen so far will turn out to be ineffective at creating incentives for developers working on non-traditional, open-source-style projects. However, some changes in the way ICOs are conducted may make them truly effective. I have proposed one such change above.