‘What’s wrong Donald? You seem stressed today.’


London

“I’ve been back a number of times since I left, but it was during two visits in the past few months that I encountered something different: fear for the future and a questioning by many non-Britons of whether they even belong here any more.”

“Even for those who haven’t talked about leaving, there’s something fundamentally ruptured in their relationship with the country,” said Ian Dunt, editor of the website Politics.co.uk. “When people say they’re very anti-immigration, no one thinks that’s directed at German architects or French lawyers. But even those people are beginning to feel that the country is becoming cold and mean-spirited and indifferent to their presence, if not openly hostile toward them.”


Pondering productivity

Today I stumbled upon an interesting piece on the productivity paradox. In this short article, Ryan Avent, a senior editor at The Economist, argues that the simultaneous productivity slowdown and technological progress in the US are not necessarily a paradox.

“This is a critical point. People ask: if robots are stealing all the jobs then why is employment at record highs? But imagine what would happen if someone unveiled a robot tomorrow which could do the work of 30% of the workforce. Employment wouldn’t fall 30%, because while some of the displaced workers might give up on work and drop out of the labour force, most couldn’t: they need the money. They would seek out other work, glutting HR offices and employment centres and placing downward pressure on the wage companies need to offer to fill a job: until wages fall to such a low level that people do give up on work entirely, drop out of the labour force, and live on whatever family resources they have available, or until it becomes economical to hire people to do very low productivity work.”

Both phenomena do appear to be occurring in the US, as some quick Google searches showed me: people are dropping out of the labour force, and wage growth hasn’t been particularly impressive in recent years. It is worth pointing out, however, that the same is not necessarily the case in the EU or its major economies.

In any case, Avent’s main argument is that we should not necessarily be looking for the effects of technological progress in employment statistics, but in inequality statistics.

Another explanation for the paradox is suggested by Matthew Yglesias of Vox, who claims that technological progress hasn’t in fact impacted our work (and hence productivity) as much as our leisure.

“Data from the American Time Use Survey, for example, suggests that on average Americans spend about 23 percent of their waking hours watching television, reading, or gaming. With Netflix, HDTV, Kindles, iPads, and all the rest, these are certainly activities that look drastically different in 2015 than they did in 1995 and can easily create the impression that life has been revolutionized by digital technology.

[…]

… It becomes clearer and clearer over time that smartphones and the internet simply aren’t economic game changers on the same scale as air conditioning, jet planes, container ships, and televisions…”

The sector most affected by technological change is ICT itself, along with the media, which further reinforces the impression of ever-present progress. Meanwhile, real humans still have to make the pizza we order via our smartphones.

“These days people are perhaps more likely to book a reservation or order a takeout meal with an app rather than a phone call, but the core work of serving and preparing food has seen very little progress.

At the higher end of the salary spectrum, we still don’t have robot doctors who can treat patients in lieu of costly and inconvenient human ones. Indeed, we can’t even get medical records digitized properly.”

The third explanation has to do with the nature of the output produced by technology companies. Much of it is unquantifiable and not captured by output statistics, as suggested by Google’s Hal Varian. This more conceptual argument is also worth considering, but – like Yglesias – I find this explanation the least helpful.

Varian’s example is a free app that allows people to track each other’s location to facilitate meeting, instead of searching for each other in crowded areas. As far as I understand, his argument is that, much like housework, the service provided by the app company is not reflected in output statistics. Even if the app were used by companies to – for example – locate goods or people in warehouses, the service provided by the app company would not be reflected anywhere.

However, following up on this peculiar example, one can also argue that if the app really were revolutionising the speed at which goods can be located, one would expect an increase in measured productivity anyway, as more goods could be located in a given amount of time. And, as discussed above, the opposite has been the case.
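To spell out that reasoning with a toy calculation – all numbers are invented for illustration – here is a minimal sketch:

```python
# Toy arithmetic for the warehouse example: if a free location app sped up
# picking, the gain would show up directly in measured labour productivity,
# even though the app itself is unpriced. All numbers are made up.
items_per_hour_before = 20   # items located per worker-hour without the app
items_per_hour_after = 30    # hypothetical speed-up with the app

productivity_gain = items_per_hour_after / items_per_hour_before - 1
print(f"Measured labour productivity gain: {productivity_gain:.0%}")  # 50%
```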

 


Terrifying topics, cool charts

A few days back I finished reading Nate Silver’s “The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t”. The book was pretty good, albeit a bit repetitive. I most enjoyed the chapter on earthquakes.

In it, Silver discusses the relationship between the magnitude of earthquakes and the frequency of their occurrence.

[Chart: earthquake magnitude vs. annual frequency, linear scale]

It looks pretty unimpressive, but when we switch to a logarithmic scale for the annual frequency (and remember that the Richter scale is itself logarithmic), the relationship looks impressively linear:

[Chart: earthquake magnitude vs. annual frequency, logarithmic scale]

 

Silver argues that such a relationship holds regardless of the region considered – if we narrow down the sample, we’ll have fewer data points, but the pattern still looks vaguely linear.
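For the record, this log-linear pattern is the well-known Gutenberg–Richter law from seismology. Writing N(M) for the annual number of earthquakes of magnitude at least M, it takes the form

\[ \log_{10} N(M) = a - b\,M \]

where a and b are constants fitted to the region, with b typically close to 1 – meaning each one-point increase in magnitude makes an earthquake roughly ten times rarer.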

I find this pretty remarkable. While we are still far from being able to tell which city or region is “due” for an earthquake, we can estimate how likely – given historical events – an earthquake of a given magnitude is.
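As a minimal sketch of how such an estimate could be made – the catalogue below is invented, and the variable names are mine, not Silver’s:

```python
import numpy as np

# Invented catalogue: magnitudes of historical earthquakes in some region,
# observed over a 40-year window. Purely illustrative numbers.
magnitudes = np.array([4.1, 4.3, 4.8, 5.0, 5.2, 5.5, 5.9, 6.1, 6.4, 7.0])
years_observed = 40

# Annual frequency of events with magnitude >= M, for a grid of thresholds.
thresholds = np.arange(4.0, 7.5, 0.5)
annual_freq = np.array([(magnitudes >= m).sum() / years_observed
                        for m in thresholds])

# Fit log10(frequency) = a - b * M by least squares (the Gutenberg-Richter form).
slope, a = np.polyfit(thresholds, np.log10(annual_freq), 1)
print(f"a = {a:.2f}, b = {-slope:.2f}")

# Extrapolate to an event larger than anything in the catalogue.
print(f"Estimated annual frequency of M >= 8: {10 ** (a + slope * 8):.5f}")
```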

Silver then argues that this relationship also holds for terrorist attacks, and presents a similar-looking chart for terrorist incidents in NATO countries between 1979 and 2009.

[Chart: fatalities per terrorist attack vs. annual frequency, NATO countries 1979–2009, logarithmic scale]

That I found even harder to believe. Admittedly, the relationship looks slightly less linear (if one can even talk about degrees of linearity). Intriguingly, he decided to drop terrorist attacks with fewer than 5 fatalities, which I found quite surprising, but I figured it just made the chart prettier.

I therefore decided to test this theory and recreate the chart myself, focusing exclusively on European countries. I downloaded data on all terrorist incidents from 1970 to 2015 from the Global Terrorism Database, and only included those in which at least 2 people were killed.

Some lines of code later, I was amazed at the result:

[Chart: fatalities per terrorist incident vs. annual frequency, Europe 1970–2015, logarithmic scale]

An even better fit than the original chart!
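For the curious, here is a minimal sketch of roughly what those lines of code do – the file name is a placeholder, and while the column names (iyear, region_txt, nkill) follow the GTD codebook, treat the details as illustrative rather than my exact script:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load the Global Terrorism Database export (file name is illustrative).
gtd = pd.read_csv("globalterrorismdb.csv", encoding="latin-1", low_memory=False)

# European incidents from 1970-2015 with at least 2 fatalities.
europe = gtd[gtd["region_txt"].isin(["Western Europe", "Eastern Europe"])]
europe = europe[(europe["iyear"] <= 2015) & (europe["nkill"] >= 2)]

years = europe["iyear"].max() - europe["iyear"].min() + 1

# Annual frequency of incidents with at least N fatalities, for a grid of N.
thresholds = [2, 5, 10, 20, 50, 100, 200]
annual_freq = [(europe["nkill"] >= n).sum() / years for n in thresholds]

# On log-log axes, a power law shows up as a straight line.
plt.scatter(np.log10(thresholds), np.log10(annual_freq))
plt.xlabel("log10(fatalities per incident)")
plt.ylabel("log10(annual frequency)")
plt.show()
```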

When I repeated the exercise only for Western Europe, the picture looked similar, though there were more deviations.

This made me wonder what other phenomena this relationship holds for.

In any case, how useful is this for forecasting (the main topic of the book)? Useful, if we think the future will be more or less similar to the past. In my view, as scary as the chart is, it is also rather consoling: it points to a relatively low probability of horrific death tolls, or at least to very, very few incidents with very, very large death tolls. I really do hope the chart is right, but I also wish there were not enough data points to make it in the first place.
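To make that consolation concrete: once a power law is fitted, it translates directly into expected rates. The parameters below are invented for illustration, not the values behind the chart above:

```python
# Hypothetical fitted power law: annual_freq = 10**a * fatalities**(-b).
# a and b are made-up parameters, purely for illustration.
a, b = 0.5, 1.5

def annual_rate(fatalities):
    """Expected number of incidents per year with at least this many deaths."""
    return 10 ** a * fatalities ** (-b)

print(annual_rate(10))    # ~0.1 per year, i.e. roughly one such incident a decade
print(annual_rate(1000))  # ~0.0001 per year, i.e. one in ten thousand years
```

The steep exponent is exactly what makes the tail thin: multiplying the death toll by 100 divides the expected frequency by 1,000.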
