Asset Write-Downs, a NOPAT Adjustment

Asset write-downs are nothing short of shareholder value destruction. If wealth, as I contend, tends toward destruction in the long run, an investor has two choices: not to invest, or to do something that shifts the odds in one’s favour. Not taking into account management’s destruction of shareholder value is not, one would imagine, an example of tilting the odds in one’s favour. One must always make a rigorous examination of the footnotes and management discussion & analysis (MD&A) of company reports, and make the appropriate adjustments to reported data, in order to determine the true economics of a business and uncover items such as asset write-downs.

Asset write-downs occur when the fair value of an asset, what it would fetch on the market, falls below its carrying value, the cost of the asset less accumulated depreciation or amortisation, forcing the book value, what appears on the balance sheet, to be written down, perhaps even completely, to its fair value. Write-downs and write-offs are important signals to debt investors of the deteriorating quality of collateral, but in sending those signals, they muffle another: the historical invested capital put into the business by its shareholders. A firm’s return on invested capital (ROIC) rises because invested capital, the denominator in ROIC, shrinks after a write-down. In effect, this penalises firms with no write-downs when compared with firms that have taken write-downs. The thing to do is to treat asset impairments as non-operating, adding their cumulative after-tax value back to invested capital. The after-tax write-down expense should also be added back to NOPAT. Given that firms receive a deferred tax benefit from writing down an asset, I include the after-tax impact of an asset write-down in my models.
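A minimal sketch of that adjustment is below; the function name, the flat tax treatment, and the example figures are my own illustrative assumptions rather than the exact mechanics of my model.

```python
def adjust_for_write_downs(nopat, invested_capital, write_down_expense,
                           cumulative_write_downs, tax_rate):
    """Treat impairments as non-operating: add the after-tax write-down back
    to NOPAT, and the cumulative after-tax write-downs back to invested
    capital, so ROIC is not flattered by a shrunken denominator."""
    adj_nopat = nopat + write_down_expense * (1 - tax_rate)
    adj_capital = invested_capital + cumulative_write_downs * (1 - tax_rate)
    return adj_nopat, adj_capital, adj_nopat / adj_capital

# Hypothetical figures in $bn: $10bn NOPAT, $60bn invested capital, a $2.43bn
# write-down this year, $5bn of cumulative write-downs, a 21% marginal tax rate.
print(adjust_for_write_downs(10.0, 60.0, 2.43, 5.0, 0.21))
```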

The most common forms of write-downs are goodwill and intangible impairments, and write-downs of property, plant, and equipment (PP&E) and other long-lived assets. An example of an asset write-down is the $2.43 billion in write-downs that Meta Platforms reported on page 100 of its 2023 10-K.

David Trainer, of the revolutionary investment research firm, New Constructs, remarked in a report on asset write-downs that,

Given that management is paid to create value, not destroy it, an asset write-down represents management’s failure to allocate capital effectively.

“Red Flag: Asset Write-Downs Reveal Risk” by David Trainer

Operating, Variable and Not-Yet Commenced Leases, and their NOPAT and Invested Capital Adjustments

The reader should read this post in conjunction with a Google spreadsheet I have created on Meta Platforms’ operating, variable, and not-yet commenced leases. If wealth, as I contend, tends toward destruction in the long run, an investor has two choices: not to invest, or, to do something that shifts the odds in one’s favour. One of these things is to make a rigorous examination of a company’s reports, and make the appropriate adjustments to understand the true state of the economics of a business. This post discusses operating leases and their impact on my calculation of both NOPAT and invested capital.

Operating Leases

Since 2019, the International Accounting Standards Board’s (IASB) IFRS 16, “Leases,” and the Financial Accounting Standards Board’s (FASB) Accounting Standards Update (ASU) 2016-02, “Leases” (Topic 842), have been in effect, obliging companies to capitalise all their operating leases, which are contractual obligations under which a firm uses an asset without owning it, unlike finance leases, which are a form of debt that bestows ownership benefits on the lessee. Prior to these changes, neither the operating lease nor the value of the leased asset was recorded on the balance sheet, and only a rental expense was recorded on the income statement. This disguised the indebtedness of the business and the scale of the assets it operated, giving it a more flattering appearance than firms that had bought assets using debt, which had to record the value of the debt incurred and the asset bought on the balance sheet, and the related interest expense and depreciation on the income statement.

On the balance sheet, firms have to record the present value of the operating lease payments, or lease liability, either as a single line item or bundled with other current and long-term liabilities; and a right-of-use asset, which represents the firm’s right to use the underlying asset, either as its own line item, as shown in the excerpt from Meta Platforms’ 1Q 2024 10-Q below, or bundled within property, plant and equipment (PP&E) or other assets. A firm may choose to record a single lease liability, bundling operating and finance leases and breaking them out in the notes.

On the income statement, under International Financial Reporting Standards (IFRS), operating lease payments are recorded within depreciation and interest expense, and under Generally Accepted Accounting Principles (GAAP), the entire lease expense, including embedded interest, is recorded under operating expenses such as cost of sales. 

Accounting statements filed before 2019 were not adjusted to reflect these new standards, and so I adjust them to include operating leases. This requires scouring the notes to find the future operating lease payments, adding up the undiscounted cash flows, and discounting them by a standardised cost of debt. Firms use an internally derived discount rate to capitalise their operating leases. There is a rate implicit in the lease (RIIL), but it requires information that lessees typically do not have, and so firms will usually use a collateralised incremental borrowing rate (IBR). Given that firms issue uncollateralised debt, estimating a collateralised IBR is quite difficult. Using this rate, companies estimate the present value of their operating lease obligations. In order to strip away the effect of idiosyncratic and perhaps adulterated assumptions that go into calculating a collateralised IBR, I use the yield to maturity on AA-rated corporate debt as a discount rate across all US-domiciled firms, and an equivalent measure in other jurisdictions, reflecting the liquidity of operating leases.

The present value of operating leases is added to my measure of invested capital. The implied interest is simply the present value of the operating lease obligations multiplied by the cost of debt. This implied interest is removed from operating expense in my calculation of NOPAT, giving an unlevered measure of core operating profitability. As operating leases are a form of debt, in my calculation of shareholder value, I subtract operating lease liabilities from enterprise value.
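A sketch of this capitalisation is below; the payment schedule, the 4.5% rate standing in for the AA yield, and the reported liability are all hypothetical, and the “thereafter” bucket is crudely discounted as a single final payment.

```python
def capitalise_operating_leases(future_payments, cost_of_debt):
    """Discount the disclosed future operating lease payments at a
    standardised cost of debt rather than the firm's own incremental
    borrowing rate, and derive the implied interest."""
    pv = sum(payment / (1 + cost_of_debt) ** year
             for year, payment in enumerate(future_payments, start=1))
    implied_interest = pv * cost_of_debt  # removed from operating expense
    return pv, implied_interest

payments = [2_300, 2_200, 2_100, 2_000, 1_900, 9_000]  # $m, from the lease notes
pv, implied_interest = capitalise_operating_leases(payments, 0.045)

reported_lease_liability = 17_500  # $m, hypothetical reported figure
total_operating_lease_adjustment = pv - reported_lease_liability
```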

For the post-2019 era, for the sake of consistency, I calculate operating lease liabilities and implied interest according to the method I have described, removing the operating lease debt from the balance sheet and replacing it with a total operating lease adjustment, which is the difference between my calculation of the present value of operating leases and the firm’s reported value. In instances such as the one below, taken from Meta Platforms’ 2023 10-K, where the operating lease cost is given in one line-item, I deduct my measure of implied interest expense from operating expense, and where the interest expense is disclosed, I may consider this measure in its stead.

Variable and Not-Yet Commenced Leases

The impact of variable and not-yet-commenced leases remains off the face of the financial statements, as discussed in the paper, “Variable Leases Under ASC 842: First Evidence on Properties and Consequences”. This is because operating leases need only be recorded on the balance sheet when a contract has been signed, payments have commenced and the firm has begun using the underlying asset.

If leases have variable payments, they may be excluded from the operating lease calculation, given the supposed difficulty of reliably forecasting them due to changes in the economic activity tied to the leased asset. However, these leases can be rather large. In 2018, for example, variable leases were 48% of Delta’s total lease costs, 54% of American Airlines’, 69% of United Airlines’, and 77% of Southwest Airlines’.

Source: Delta Air Lines 2018 Annual Report

Moreover, under IFRS 16 and ASU 2016-02, firms are obliged to report on leases that have been agreed and create material obligations, but have yet to commence. Like variable leases, not-yet-commenced leases are not considered a part of operating lease obligations. For instance, Meta Platforms reported $7.07 billion in not-yet-commenced operating leases on page 111 of its 2023 10-K. Consequently, to have a fuller picture of the obligations firms are under, I include both variable and not-yet-commenced leases in my calculation of a firm’s present value of future operating lease payments.

Variable Lease Expense

To calculate the impact of variable leases, I multiply the standardised present value of operating leases by Variable Lease Expense/Operating Lease Expense, and add the result to the present value of operating leases. For example, Meta Platforms reported $580 million in variable lease cost on page 110 of its 2023 10-K, and an operating lease cost of $2.091 billion. This gives a multiplier of 0.28, which I apply to my standardised present value of operating leases to estimate the present value of variable leases, and that estimate is added to the standardised present value of operating leases. Variable leases were responsible for 17.6% of Meta Platforms’ total operating leases.
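In code, the step looks as follows; the standardised present value is a placeholder figure here, as the point is the multiplier.

```python
# Meta Platforms' 2023 10-K figures cited above, in $m.
variable_lease_cost = 580
operating_lease_cost = 2_091
variable_multiplier = variable_lease_cost / operating_lease_cost  # ~0.28

pv_operating = 20_000  # placeholder for the standardised present value ($m)
pv_variable = variable_multiplier * pv_operating
```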

Not-Yet-Commenced Lease Expense

To calculate the impact of not-yet-commenced leases, I multiply the standardised present value of operating leases by Not-Yet-Commenced Lease Payments/Total Lease Payments, and add the result to the present value of operating leases. For example, we showed above that in 2023 Meta Platforms had $7.07 billion in not-yet-commenced leases, which, divided by $23.649 billion in total lease payments, gives a multiplier of 0.30. This multiplier is applied to the standardised present value of operating leases to give a present value of not-yet-commenced leases, which is also added to the standardised present value of operating leases. Not-yet-commenced leases were responsible for 19% of Meta Platforms’ total operating leases.
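Combining this with the variable-lease step above, a sketch of the full calculation, again with a placeholder standardised present value, is:

```python
# Meta Platforms' 2023 10-K figures cited above, in $m.
not_yet_commenced = 7_070
total_lease_payments = 23_649
nyc_multiplier = not_yet_commenced / total_lease_payments  # ~0.30

variable_multiplier = 580 / 2_091                          # ~0.28, from above
pv_operating = 20_000                                      # placeholder ($m)
pv_all_leases = pv_operating * (1 + variable_multiplier + nyc_multiplier)

# Shares of the combined figure: ~17.6% variable, ~19% not-yet-commenced.
print(variable_multiplier / (1 + variable_multiplier + nyc_multiplier))
print(nyc_multiplier / (1 + variable_multiplier + nyc_multiplier))
```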

Uncertainty, Information and Digital Firms: a Framework for Understanding Meta Platforms and its Peers

What follows is a framework for understanding digital firms, a framework that emerged as I edited an article on Meta Platforms, an article to be posted next Tuesday. There are three main elements to this framework: firstly, that competition is a discovery process; secondly, that transaction costs define the scope and limits of a firm; and thirdly, that where information is the fundamental attribute of an industry, that industry will tend toward increasing returns. These planks, if you like, are underpinned by a conviction that the economy is a complex organism. The consequence of these elements, or planks, is that the Internet Revolution naturally resulted in the emergence of platforms and aggregators, because it demanded a kind of firm that was not a centre of production, but the centre of a network.

Competition as a Discovery Process

Complexity economics views the economy as a system not necessarily in equilibrium, but rather as one where agents constantly change their actions and strategies in response to the outcomes they mutually create. It holds that computation as well as mathematics is useful in economics, that increasing as well as diminishing returns may be present in an economic situation, and that the economy is not something given and existing, but forms from a constantly developing set of actions, arrangements, and technological innovations. The economy is thus comprised of evolving networks of interacting agents, institutions, and technologies—networks of networks. The macro-level patterns of the economy—growth, innovation, business cycles, market booms and busts, inequality, and carbon emissions—then emerge from these dynamic micro- and meso-level interactions. From the complexity economics perspective, change is largely an endogenous phenomenon, not simply the result of unexplained shocks from outside the system.

“Complexity Economics: An Introduction”, by W. Brian Arthur, Eric D. Beinhocker, & Allison Stange

A consequence of taking a complexity approach to competition is that competition is seen as a multi-level process, in which firms compete and relate to other firms within an industry, firms which, given the tendency of wealth toward destruction, seek to survive over the long run and grow in the short run. Within markets, firms maximise their profits and compete for market share by providing sustainable products and services. Competition between firms can also be described in Coasian language as competition between intra-firm and inter-firm organisation, between whether economic activities should be done within the firm or by the market. Within firms, individuals, units and divisions compete and cooperate to maximise their individual payoffs. Nicolas Petit and Thibault Schrepel call these the macro, meso and micro levels of competition. Competition at the industry level forces changes within firms that result in a firm facing new competitors at the market level. Concretely, by way of example, each of Meta Platforms’ divisions competes and cooperates over resources; at the market level, Meta enjoys a monopoly in social media networks, but faces fierce competition at the industry level, where it is part of the Attention Economy. (While preparing this article for publication, I received Ben Thompson’s latest Stratechery blog post, “Meta and Open”, which makes a similar point.) The emergence of TikTok at the industry level forced changes in Instagram, by way of Reels, which triggered an evolution from a chronological feed of content surfaced from one’s social network to algorithmically sorted content from the universe of all Instagram users.

Competition exists because firms, and economic agents in general, have imperfect information. The irreducible complexity of the economy makes prediction a fraught exercise and veils the future with uncertainty. Contrary to the dominant neoclassical economic paradigm, if firms had perfect information, in which one knows all the relevant facts needed to make a decision, as happens under perfect competition, they would not compete; it is the veil of uncertainty thrown over economic activity that makes competition necessary. The greater the uncertainty, the greater the competition. Competition is a learning process in response to ill-defined situations. In fact, in 1975, when Leonid Kantorovich won the Nobel Prize, it was for work that demonstrated that central planning could work given perfect information. Rather than a precondition for perfect competition, perfect information is a precondition for central planning.
With knowledge diffused across the market, management’s primary problem is not the allocation of given capital, but the utilisation of incomplete knowledge. The economy as a whole is an economy of economic agents who respond to ill-defined situations by, in the words of Arthur, “‘making sense’ or recognizing some aspects of them, and choosing their actions, strategies or forecasts accordingly”. Everything is contested and uncertain, from the optimum mix of planning and market transactions, to the best combination and use of resources. The process of gathering, interpreting and making use of knowledge is a constant struggle. In a lecture titled, “Competition as a Process of Discovery”, F.A. Hayek explained that,

…it is salutary to remember that, wherever the use of competition can be rationally justified, it is on the ground that we do not know in advance the facts that determine the actions of competitors. In sports or in examinations, no less than in the award of government contracts or of prizes for poetry, it would clearly be pointless to arrange for competition, if we were certain beforehand who would do best.

“Competition as a Process of Discovery”, by F.A. Hayek

Since Adam Smith wrote The Wealth of Nations, economists have known that prices, like an “invisible hand”, coordinate the actions of economic agents in systems in which knowledge is widely distributed, so that supply adjusts to demand and production to consumption. The price of a thing contains the relevant signals for action, even if few of the market’s actors can fathom the whys and the wherefores. The market is an organism wherein the price mechanism constantly collects, assesses, distils and transmits relevant facts to the market’s actors. By responding to price, competition fosters information discovery and the organisation of the market. The market, a complex adaptive system, achieves spontaneous order, an order created by unwitting firms who are constantly adapting to each other’s behaviour and to changes in the order they create, in a recursive loop. 

Uncertainty over existing and possible states creates costs: the cost of identifying partners and opportunities, of writing and executing contracts, and of determining whether production costs are lower within the firm or outside of it, all while trying to reduce the likelihood of a permanent impairment of capital. Not only does increasing uncertainty lead to increasing competition, but increasing competition increases uncertainty; in parallel, rising uncertainty increases the economy’s complexity and rising complexity increases uncertainty.

Firms respond to uncertainty through a range of actions from innovation, diversification and exploration, to trying to freeze the market structure through collusion, lock-in, erecting barriers to entry, and other such actions.

Transaction Costs and the Firm

Firms are planned economies. Where, in the market, the factors of production are superintended by the price mechanism, within the firm entrepreneurial planning supersedes it. The triumph of capitalism over the last five hundred or so years has demonstrated the wondrous ability of markets to create unparalleled wealth, and yet, firms exist. Ronald Coase, in his paper, “The Nature of the Firm”, sought an explanation for this. “Why do firms exist” is the sort of simple question out of which great truths emerge, and it is a question that was especially important because, as Coase pointed out in his Nobel Prize lecture, “most resources in a modern economic system are employed within firms”, where the allocation of resources depends “on administrative decisions and not directly on the operation of a market”. The answer, he found, lay with transaction costs: firms exist because the price mechanism is not costless; it carries transaction costs such as the cost of discovering the relevant prices of factors of production, the cost of negotiating and concluding contracts for each transaction, and the cost of making long-term forecasts about the deployment of factors of production. These transaction costs are the children of uncertainty; absent uncertainty, firms would not exist because there would be no need to drive these costs down by replacing them with intra-firm costs.

Nevertheless, firms cannot keep growing and replacing the market’s transaction costs with their own intra-firm costs. The limits to a firm’s horizontal or vertical integration are the decreasing returns to scale that the firm must at some point experience: a firm expands until the cost of organising an additional transaction within it equals the cost of having that transaction done through the market. There may also be rising supply prices for factors of production, such that it is cheaper to produce in a smaller firm than to continue expanding. There is also a class of transactions that are simply too costly for a firm to manage on its own and which are always better managed by the price mechanism. Given the irreducible uncertainty inherent in economic activity, it is also true that firms are liable to engage in value-destroying activities. All firms must wrestle with the fact that wealth, in the long run, is far more likely to be destroyed than created.

Coase concluded that,

Other things being equal, therefore, a firm will tend to be larger:

(a) the less the costs of organising and the slower these costs rise with an increase in the transactions organised.

(b) the less likely the entrepreneur is to make mistakes and the smaller the increase in mistakes with an increase in the transactions organised.

(c) the greater the lowering (or the less the rise) in the supply price of factors of production to firms of larger size.

“The Nature of the Firm”, Ronald Coase

With the launch of the transaction cost approach, Coase established transaction costs as the basic unit of analysis of firms, a unit that successfully explained why firms exist, what the limits to their size are, and the sort of market structures that can emerge. Oliver E. Williamson, reflecting on the approach, divined three levels of analysis: one that takes firm size as given and studies how the operating units relate to each other; one that seeks to determine the “efficient boundary” separating firm and market, which is to say, what activities should be conducted within and without the organisation; and one that assesses how human assets are used.

By succeeding, firms are able to create value, which is to say that they grow their revenues and earn economic profits. By earning economic profits, defined as the return on invested capital (ROIC) in excess of the opportunity cost of capital, scaled by invested capital, incumbents attract competition from entrants greedy for the economic profits on offer. In Capital Returns, his examination of Marathon Asset Management’s capital cycle framework, Edward Chancellor expresses this idea beautifully, saying,

Typically, capital is attracted into high-return businesses and leaves when returns fall below the cost of capital. This process is not static, but cyclical – there is constant flux. The inflow of capital leads to new investment, which over time increases capacity in the sector and eventually pushes down returns. Conversely, when returns are low, capital exits and capacity is reduced; over time, then, profitability recovers. From the perspective of the wider economy, this cycle resembles Schumpeter’s process of “creative destruction” – as the function of the bust, which follows the boom, is to clear away the misallocation of capital that has occurred during the upswing.

Capital Returns, by Edward Chancellor
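To make the trigger of this cycle concrete, here is a minimal sketch of economic profit as defined above; the figures are hypothetical.

```python
def economic_profit(roic, cost_of_capital, invested_capital):
    """The spread of ROIC over the opportunity cost of capital,
    scaled by invested capital."""
    return (roic - cost_of_capital) * invested_capital

# e.g. a 15% ROIC against an 8% cost of capital on $10bn of invested capital ($m)
print(economic_profit(0.15, 0.08, 10_000))  # 700.0, i.e. $700m of economic profit
```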

It is commonly believed that the chief purpose of strategy is to make probabilistic bets to build, defend, and extend incumbent competitive advantages, which, as Bruce C. Greenwald and Judd Kahn show, are synonymous with barriers to entry. Barriers to entry are costs faced by potential entrants but not by incumbents. Without mounting significant barriers to entry, economic profits are vulnerable to being competed away. Yet, it is truer to say that firms seek first to reduce transaction costs, accepting trade-offs between production cost economies, where the market holds certain competitive advantages, and governance cost economies, where internal organisation holds certain competitive advantages. In terms of market structure, this is done through discovery of the optimal configuration of the value chain, either through horizontal or vertical integration of the value chain. Where the goal of vertical integration is reducing the costs of distribution, a company integrates forwards, integrating distribution centres and retailers selling its products. Where the goal of vertical integration is reducing the costs of production, a firm integrates backwards, integrating suppliers. Where the company aims to reduce both costs of production and distribution, it practises a balanced form of vertical integration. Whichever form of integration firms pursue, they are motivated by an economising imperative to reduce transaction costs.

Information, Market Structure, and Digital Firms

The impact of the internet on the economy has been to decouple matter from information and expand the production possibility frontier. New markets, industries and economic sectors have emerged and continue to emerge, as capital and entrepreneurial planning have launched toward technological infrastructure. Firms have developed and continue to develop new ways of capturing value, and new forms of economic transactions have emerged and continue to emerge. The possibilities flung open by the internet have also created new forms of firms, firm-types which enjoy increasing returns (the tendency of what is advantaged to gain further advantage and what is disadvantaged to be further disadvantaged), and which are compelled to enhance consumer welfare. The Internet Revolution is the most consequential economic transformation of the world since Johannes Gutenberg developed modern movable-type printing in 1440.

Traditionally, under a monopoly, the monopolist sets the price for all, and, driven by profit-maximisation, tends to seek market prices higher than would be obtained under other market structures, while eroding the quality of goods and services it provides. Profit maximisation leads the monopolist to produce where marginal revenue equals marginal cost, and to set a price greater than marginal cost. In doing so, the monopolist does not serve consumers who value its goods and services at less than the market price, creating a deadweight loss, which refers to potential gains that are unearned by either the monopolist or the consumers. As such, deadweight loss represents the market’s inefficiency, and society’s loss due to the monopoly structure.
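A toy linear-demand example, not a model of any real market, makes the deadweight loss concrete; the demand intercept, slope and marginal cost are assumed figures.

```python
# Linear demand P = a - b*Q with constant marginal cost c (assumed figures).
a, b, c = 100.0, 1.0, 20.0

q_monopoly = (a - c) / (2 * b)   # output where marginal revenue equals marginal cost
p_monopoly = a - b * q_monopoly  # the monopoly price sits above marginal cost
q_competitive = (a - c) / b      # price-taking benchmark where price equals marginal cost
deadweight_loss = 0.5 * (p_monopoly - c) * (q_competitive - q_monopoly)

print(q_monopoly, p_monopoly, q_competitive, deadweight_loss)  # 40.0 60.0 80.0 800.0
```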

In a world of scarce supply and significant marginal costs, the monopolist’s pursuit of a producer surplus is at odds with the consumer’s pursuit of a surplus, and, given the power of the monopolist, the producer surplus is enlarged and the consumer surplus reduced and prone to decline. Consequently, as Lina M. Khan noted in her note, “Amazon’s Antitrust Paradox”, antitrust law has focused on “the short-term interests of consumers, not producers or the health of the market as a whole; antitrust doctrine views low consumer prices, alone, to be evidence of sound competition.” Of course, it is obvious, as Khan, Thompson and others have observed, that when marginal cost is equal to zero, then all that exists is consumer surplus, and that, rather than degradation of services and exploitation of consumers, there is the enhancement of consumer welfare. While Khan is correct in observing that “gauging real competition in the twenty-first century marketplace—especially in the case of online platforms—requires analysing the underlying structure and dynamics of markets (…) a company’s power and the potential anticompetitive nature of that power cannot be fully understood without looking to the structure of a business and the structural role it plays in markets”, the underlying assumption of the curse of bigness betrays a widespread failure to appreciate the uniqueness of digital firms.

In 1994, Kenneth J. Arrow realised that information is “almost the exclusive basis for value” in digital firms and digital goods, a fact that has impacted the utility of old theories of value. There, and in a 1996 paper, Arrow saw that in the prior thirty years, standard economics had wrought a rich theory of asymmetric information, in which one party faces costs to obtain information, assumed to be scarce, that the other party does not. Yet, where Shannon’s communication theory and decision theory have developed powerful analyses of information as a random variable, economics, then and even now, has a paucity of research on information as a choice variable. Information is an economic good like other commodities, being costly and valuable; however, information is special because it generates increasing returns. Arthur, in his remarkable book, Increasing Returns and Path Dependence in the Economy, begins by saying that,

Conventional economic theory is built on the assumption of diminishing returns. Economic actions engender a negative feedback that leads to a predictable equilibrium for prices and market shares. Such feedback tends to stabilise the economy because any major changes will be offset by the very reactions they generate. The high oil prices of the 1970s encouraged energy conservation and increased oil exploration, precipitating a predictable drop in prices by the early 1980s. According to conventional theory, the equilibrium marks the “best” outcome possible under the circumstances: the most efficient use and allocation of resources.

Increasing Returns and Path Dependence in the Economy, by W. Brian Arthur

Although there are examples of how information leads to increasing returns to scale, such as with Adam Smith’s theory of the benefits of the division of labour, much of standard economics is blind to information, with Arrow pointing to the “analytic difficulties which would follow introducing information as an economic variable”, adding that

Increasing returns can occur for other reasons than information. But with information, constant returns are impossible. Two tons of steel can be used as an input to produce more than one ton of steel in a given productive activity. But repeating a given piece of information adds nothing. On the other hand, the same piece of information can be used over and over again, by the same or a different producer. This means both that the way information enters the production function is different than the way other goods do and that property rights to information take on a different form. These remarks are obvious enough, but their implications are not. 

To elaborate the point, the usual logic of the price system depends on constant returns. For conventional inputs, the buyer can buy more or less at a given price (or at least close to it if there are elements of monopoly). But information is different. Technical information needed for production is used once and for all. The same information is used regardless of the scale of production. Hence, there is an extreme form of increasing returns.

“Technical Information and Industrial Structure”, by Kenneth J. Arrow

Information as the central economic good of a market results in markets defined by positive, rather than negative feedback, and this has clear effects on the kinds of structures and dynamics that emerge. Arthur gave the following example:

The history of the videocassette recorder furnishes a simple example of positive feedback. The VCR market started out with two competing formats selling at about the same price: VHS and Beta. Each format could realise increasing returns as its market share increased: large numbers of VHS recorders would encourage video outlets to stock more pre-recorded tapes in VHS format, thereby enhancing the value of owning a VHS recorder and leading more people to buy one. (The same would, of course, be true for Beta-format players.) In this way, a small gain in market share would improve the competitive position of one system and help it further increase its lead.

Such a market is initially unstable. Both systems were introduced at about the same time and so began with roughly equal market shares; those shares fluctuated early on because of external circumstance, “luck,” and corporate manoeuvring. Increasing returns on early gains eventually tilted the competition toward VHS: it accumulated enough of an advantage to take virtually the entire VCR market. Yet it would have been impossible at the outset of the competition to say which system would win, which of the two possible equilibria would be selected. Furthermore, if the claim that Beta was technically superior is true, then the market’s choice did not represent the best economic outcome.

Increasing Returns and Path Dependence in the Economy, by W. Brian Arthur

Although some parts of the resource-based economy are subject to increasing returns, resource-based economies are largely defined by diminishing returns. It is in the knowledge economy that increasing returns are definitive. Importantly, whereas negative feedback loops exert a gravitational pull toward the initial equilibrium and hinder the emergence of a leader, positive feedback loops push the market away from the initial equilibrium, fostering competition, technological change, innovation, and new business models, and, in the long run, may reduce uncertainty through winner-take-all effects, market tipping and path dependence. The nature of an industry’s returns, then, determines its structure: whether it is competitive, a monopoly, or an oligopoly. In the presence of increasing returns, monopolies and oligopolies are the natural outcomes, because not only do firms gain in efficiencies as they grow larger, but consumers gain in utility the more they consume. By dint of merit or accident, consumption agglomerates around the happy few firms who are anointed winners.
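A toy positive-feedback simulation (my own illustration, not Arthur’s actual model) shows how small early leads snowball into lock-in: each new adopter’s utility is an idiosyncratic taste plus a bonus proportional to a platform’s installed base.

```python
import random

def simulate_market(n_adopters=10_000, network_weight=0.05, seed=7):
    """Sequential adoption with increasing returns: the platform with the
    larger installed base enjoys a growing utility advantage."""
    random.seed(seed)
    installed = {"A": 0, "B": 0}
    for _ in range(n_adopters):
        utility = {name: random.gauss(0, 1) + network_weight * installed[name]
                   for name in installed}
        installed[max(utility, key=utility.get)] += 1
    return installed

print(simulate_market())         # one history tips the market to a single platform
print(simulate_market(seed=42))  # a different history may tip it the other way
```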

Indeed, winners may emerge for random, contingent reasons that benefit inferior products or services, such as QWERTY’s victory over the Dvorak Simplified Keyboard, even as lock-in may emerge because a product or service is superior, such as Apple’s suite of integrated products. This randomness makes it hard to predict winners and losers and the knock-on effects of such success. Did the authors of Section 230 of the 1996 Communications Decency Act foresee the rise of online “town squares” such as Twitter, Facebook, and Instagram, or that such town squares would become battlegrounds over what is true, and whether false or even harmful statements should be struck down? In 2007, Facebook launched Facebook Platform, with Zuckerberg asserting that, “Right now, social networks are closed platforms. And today, we’re going to end that”, and yet, rather than develop a platform that could go toe-to-toe with Apple and Google, the whole effort was a gigantic missed opportunity. It was mobile, a technology outside Facebook’s control, that turned Facebook into such a juggernaut that in his first earnings call in 2012, Zuckerberg could claim that, “Mobile is a huge opportunity for Facebook. Our goal is to connect everyone in the world”.

Information also implies that the owner can allow others to use it without ceding ownership, and without destroying that information. Given that the reproduction of information is cheaper than its production, intellectual property rights exist to create artificial scarcities of information that motivate firms to acquire it. Nevertheless, information is diffused: through labour mobility, through markets, through publication, and through informal, interpersonal contacts. Information’s tendency to cheap or even costless diffusion makes it difficult to turn into property; information wants, as it were, to be free. It is a fugitive resource, overlapping firms, in part because of the limited mobility of a firm’s workers, resulting in “an increasing tension between legal relations and fundamental economic determinants”, to quote Arrow.

Platforms and Aggregators

Consider the following thought experiment:

In a village of one thousand people, there is one print newspaper, in which two journalists work. Computers, mobile phones, and the internet have not yet been invented. In that village, aside from those journalists, there are three hundred people who would, if given the opportunity, write for the rest of the village to read. In this village, it is only possible to publish through the newspaper, and so, only these two happy journalists write and publish anything. The costs of competing with the newspaper are prohibitive. In fact, the newspaper only exists thanks to the largesse of a wealthy retiree who settled there. As time passes, a gale of creative destruction passes over the village, ushering in computers, mobile phones, and the internet. Suddenly, anyone can publish! Bad poems, misjudged and angry articles, and thoughtful, perhaps even moving content. The cost is nothing, so near zero one might as well call it zero. It costs nothing to post, nothing to share. Suddenly, our newspaper of two journalists is faced with three hundred competing writers. Even if the majority of the content produced by these intrepid three hundred is bad, the law of large numbers tells us that this group will produce more quality content than the newspaper. The newspaper cannot compete because its costs of competition go up: the only way to compete is by hiring three hundred writers! The market bears no costs to compete, while the newspaper’s costs mount. Supposing this newspaper was led by a young man named Mark Zuckmayer, who realised that there was more value being created outside the firm than within, and that the costs of production were lower outside the firm than within, surely he would say, “The Internet Revolution forces me to push production outside the firm, to the market, where it is cheaper to produce. The trick is to capture that value!” This shape-shifting business, a unicorn among firms, would transform itself from a centre of production to a kind of coordinator of market activity. This is the story of the Internet Revolution. It is not just that transaction costs tumbled toward zero, it is that this necessitated a new type of firm, a firm that was not a centre of production, but the centre of a network.

While the pre-internet era was defined by scarce supply, abundant consumers and users, and significant transaction costs, the Internet Revolution ushered in a special sort of multi-sided platform company, which not only matched supply and demand, providing each side with network benefits in traditional ways, the way credit cards match cardholders with merchants, or the yellow pages matched advertisers and consumers, but which boasted abundant supply, abundant consumers and users, and, crucially, enjoyed zero transaction costs. The key “event” of the Internet Revolution, the end of transaction costs, and its corollary, mounting transaction costs for legacy incumbents, turned a millennia-old business model into the most profitable business model in the history of the world. For such platform companies, or “matchmakers”, demand is the appropriate lens of analysis. In a world of no transaction costs, serving the world from Day One is achievable, and the battle is over scalability, which rests upon a business’ ability to deliver the best available user experience. Enhancing consumer welfare is the inevitable and singular goal of the business.

Alongside these platforms emerged what Thompson describes as an “aggregator”: a firm that has all of the following qualities: a direct relationship with its users; zero marginal costs for serving them; and a demand-driven multi-sided network with decreasing acquisition costs. He calls them “aggregators” because they “aggregate modularized suppliers — which they often don’t pay for — to consumers/users with whom they have an exclusive relationship at scale”. In his discussion of “Aggregation Theory”, Thompson describes how in the pre-internet era, value creation depended upon establishing horizontal dominance, as a monopoly or as part of an oligopoly, in one of the three links in the value chain, supply, distribution, and consumers and users; or, by integrating backwards into supply to offer a vertical solution. For printed newspapers, for instance, advertisers were modularised, and supply, in the form of content they created, was integrated with advertisements, which they delivered to their readers within some locale. Vertical solutions largely depended on controlling distribution and leveraging relationships with suppliers. In the internet era, where distribution is free, transaction costs are near-zero, and the addressable market is the entire globe, value creation depends upon controlling and scaling the user relationship, and supply is both commoditised and gargantuan. The more users there are, the greater the value users enjoy, and the more attractive the aggregator is for suppliers, creating a virtuous cycle. The network’s winner-take-all effects are the prize aggregators compete for, and to win it, companies do not try to control scarce resources, but to control demand for them.
Meta, the subject of my next blog post, is not merely an aggregator, it is a “super aggregator”, sharing that title with just one other company: Alphabet. As super aggregators, not only do Alphabet and Meta have near-zero transaction costs for serving their end users, they also enjoy near-zero transaction costs with respect to both suppliers and advertisers. For Meta, its suppliers are its users, who provide content freely, content to which Meta has exclusive access. Where publishers in the pre-internet era integrated content and advertisements, Meta, as an aggregator, has modularised advertisements, integrating its ad inventory and profile data in order to programmatically deliver finely targeted ads through an advertising network that directly matches advertisers and adverts to their most probable customers. By commoditising and modularising pre-internet publishers’ integration of advertisers and content, Meta has been able to use its integration of advertisers and adverts to earn attractive economic profits. As Clayton Christensen said in The Innovator’s Solution, as he developed what he first called the “law of conservation of attractive profits”, and later, the law of conservation of modularity:

Formally, the law of conservation of attractive profits states that in the value chain there is a requisite juxtaposition of modular and interdependent architectures, and of reciprocal processes of commoditization and de-commoditization, that exists in order to optimise the performance of what is not good enough. The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage. 

The Innovator’s Solution, by Clayton Christensen

The Nature of Risk

The Knightian Consensus 

… Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term ‘risk,’ as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organisation, are categorically different. … The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. … It will appear that a measurable uncertainty, or “risk” proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We … accordingly restrict the term “uncertainty” to cases of the non-quantitative type.

Frank Knight

Wealth Marches Toward Its Destruction

It can be said that Knight does not so much define risk and uncertainty as state whether they can be measured or not, but in doing so, he opens the door to the ridiculous, to what are now known as “downside” and “upside” risks.

Since Harry Markowitz’s seminal 1952 paper, “Portfolio Selection”, it is generally accepted that risk is best measured in terms of the “volatility of returns”, in other words, the upward and downward swings in prices. Under this measure, if one expects a return of 10% from an investment, both the chance that the return will be lower and the chance that it will be higher than that expected return are classed as risk! Few investors and managers, if any, would accept this view. Just eighteen years after Markowitz’s paper, one survey found that, across eight industries, most managers said they believed that semivariance, a measure of “downside risk”, was a more plausible measure of risk than variance. Decades later, that unease with theory remains. This notion of risk clearly goes against our intuition. One does not say, “There is a risk I will make a return greater than expected on this investment”. This notion of risk is also self-contradictory: it would seem perverse to imagine as rational behaviour a situation in which a person sought to limit all their risk, that is, both downside and “upside” risk. Defenders of the position would claim that when they refer to risk, they are of course referring to downside risk, which raises the question of why the scope of this definition of “risk” allows “upside risk” to be defined as such. It is not a merely academic argument: investors and managers make decisions based on risk measures that imply this very logic. It seems evident that whereas a person would want to limit their downside risk, they would be happy to have their profits run far in excess of what they expected.
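A small illustration of the distinction follows; the return series is hypothetical, and the downside_semivariance helper is my own naming.

```python
import statistics

def downside_semivariance(returns, target=0.0):
    """Average squared shortfall below a target return: only downside
    deviations count as risk, unlike variance, which also penalises
    upside surprises."""
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return sum(shortfalls) / len(returns)

returns = [0.12, -0.08, 0.30, -0.25, 0.05]   # hypothetical annual returns
print(statistics.pvariance(returns))          # symmetric "risk": variance
print(downside_semivariance(returns))         # risk as the possibility of loss
```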

The idea of risk as composed of upside risk and downside risk is a very agnostic view of risk, like a man who does not know if he wants to turn left or right but zigs and zags. Downside risk is the only legitimate risk. Even Markowitz could not defend his risk measure from a logical point of view. The great man was aware of the inherent absurdity of so catholic a notion of risk, saying that semivariance was a “more plausible measure of risk”. Seventy years since Markowitz launched modern portfolio theory, this agnostic view of risk is pervasive, partly because Markowitz himself continued to advance variance as a measure of risk, arguing that practitioners needed to become more familiar with the simpler measure of variance before they could advance to the more plausible notion of risk.

The economist A.D. Roy’s criticism of risk theory still rings true: it is “set against a background of ease and safety” rather than “poorly chartered waters” or hostile jungles, it assumes “economic survival”, and it is thus incapable of understanding why investors act as if they see disaster everywhere.

Properly understood, risk should be viewed myopically, as being primarily the possibility of incurring a loss (the moment and existence of loss are more important than the knowledge of it), and secondarily, the possibility of underperforming some target rate of return. By loss is meant a decay or decline in prior wealth, where wealth represents one’s assets, a part of which is risked in some venture. This myopic view of risk becomes even more important in light of advances in behavioural economics in recent decades. Daniel Kahneman and Amos Tversky found that people are more fearful of losses than they are desirous of making corresponding gains, something they called “loss aversion”. Loss aversion is often treated as irrational, and managers have been encouraged to overcome their loss aversion in search of superior returns. By way of example, in Applied Corporate Finance, Aswath Damodaran makes that argument, and so too do Tim Koller, Marc Goedhart, and David Wessels in their textbook, Valuation. However, if risk is defined not just by the question of its tractability, but also by the possibility of loss, then loss aversion is a rationally obligatory response, whereas, if risk is merely a forecasting error about a range of outcomes, good and bad, then, as prospect theory argues, and as managers are taught to believe, loss aversion is an irrational bias that should be overcome.

Centuries ago, the Swiss mathematician Daniel Bernoulli proved that wealth compounds, that it is “multiplicative” to use the more technical and uglier wording, and that, in terms of risk, it tends toward its own destruction. This is the most likely long-term outcome for most capital allocators, which leads naturally to the conclusion that in reducing risk, we increase our possibility of building wealth. This is, of course, at odds with the “no reward without risk” paradigm that is in such wide currency. Yet, logically, if risk is primarily defined by the likelihood of loss, then piling risk on top of more risk should surely lead to a portfolio’s destruction.

Losses impact portfolios more profoundly than corresponding gains. In truth, although perhaps controversial, this can be shown with very elementary arithmetic: a decay in wealth of 10% requires an 11.11% gain just for an investor to break even, while a 20% decline requires a 25% gain, a 50% decline demands a 100% gain, and a 99% decline requires a miracle. As losses mount, the gains needed just to break even escalate asymmetrically. Whether these losses come in one fell swoop, or in dribs and drabs over time, the demands on a portfolio to earn asymmetrically greater returns soar. Managers and investors are rationally obliged to avoid risk, investing only when they have an edge, and, when doing so, investing in concentrated portfolios, while diversifying across time. 
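The arithmetic can be captured in a one-line helper (breakeven_gain is my own naming, not a standard function): a proportional loss d requires a gain of d / (1 - d) to recover.

```python
def breakeven_gain(loss):
    """Gain required to recover a proportional loss: d / (1 - d)."""
    return loss / (1.0 - loss)

for d in (0.10, 0.20, 0.50, 0.5928, 0.99):
    print(f"{d:.2%} loss -> {breakeven_gain(d):.2%} gain to break even")
# 10% -> 11.11%, 20% -> 25%, 50% -> 100%, 59.28% -> ~145.6%, 99% -> 9,900%
```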

Losing money is the dominant state of most investors and businesses. In his 2018 paper, “Do Stocks Outperform Treasury Bills?”, Hendrik Bessembinder observed that just 4% of publicly traded companies in the CRSP database account for the entire net gain of the U.S. stock market between 1926 and 2016, with the remaining 96% of stocks, held across their lifetimes, collectively producing returns that merely matched those of Treasury bills. Wealth destruction is the order of the day.

Measurable and Uncertain

The Subjectivity of Risk

The view that probability is measurable, and therefore objective, is driven by the Industrial Revolution and the needs of factory managers to optimise their processes. The Student’s t-test, for instance, came from the work of William S. Gosset, a brewer at Guinness at the turn of the twentieth century. At present, the dominant paradigm in probability theory is the frequentist interpretation, in which the probability of some outcome is the limit of its relative frequency of occurrence over repeated trials under similar conditions. In that framing, if, for instance, a bottle-maker faces a 0.5% risk of a bottle breaking, that is, 50 of every 10,000 bottles, that risk is determined by observations of the bottle-making process. However, this view of probability does not have a universal application: not all probabilities are a function of the physical properties of the phenomenon being studied. The most typical thought experiment presented involves either a die or a coin, where the chance of a side turning up is a function of the physical properties of the die or coin.

The most significant issue with the frequentist interpretation is that these probabilities are somewhat circular and subjective. Take the chance of an earthquake: P.B. Stark and D.A. Freedman argue that it is not possible to actually have a large number of trials to forecast the relative frequency of earthquakes within the next thirty years. Those “trials” are not laboratory experiments. This is even more significant when one considers that earthquakes are rare, unlike changes in the weather, with recurrence over periods of hundreds of years. They hold that probability should be seen as a property of a mathematical model that seeks to describe features of the natural world, rather than an expression of the natural world. In doing so, one is forced to accept that risk cannot be so easily segregated from uncertainty, because no mathematical model can perfectly express the natural world. Rather, mathematical models are founded upon assumptions made by the modeller, assumptions which cannot be tested. For investors and managers, this is especially true: risk measures are not the natural expression of the market, they are constructs, and thus the measures are not purely objective creatures; instead, they commingle our subjective ideas with objective facts, and so they are less reliable than is often supposed. Uncertainty is a feature of risk, not a separate category.

Not only are many choices made within a mathematical model untestable, they may even seem arbitrary. For instance, Markowitz’s decision to use the “volatility of returns” was guided, not by the logic of risk, but, as he detailed in his 1959 book, Portfolio Selection, by fears that the computing power available at the time did not allow him to calculate the more plausible semivariance.

The modelling decisions that one makes, the decision to use one risk measure over another, or the credence given to one theory over another, are all examples of areas in which one is forced to weigh evidence without recourse to some final test that says what is true, or more likely. The Knightian segregation of risk and uncertainty overstates the gulf between the two, and leads to overconfidence about risk measures. There are unquantifiable risks, distinct from uncertainty, that managers face. For example, investors have a smorgasbord of risk measures to use, but the risk of selecting the wrong risk measure cannot itself be measured. A theory, concept or argument may be wrong, even if the underlying data is perfect, and there is no way to arrive at a probability that one argument is more likely than another. These unquantifiable risks are, nevertheless, tractable within the framework of plausible reasoning. The nature of these risks is that they lead to only partial entailment or partial belief, which is to say, uncertainty is implicated in their formation. Thus, we are compelled to embrace uncertainty as a component of risk, and the immeasurability of aspects of risk.

Finally, risk is subjective not simply from an ontological point of view, but from a positional perspective within a trade. It matters whether one is buying or selling an asset, because the risks are opposed. Take Warren Buffett’s April 2020 decision to sell Berkshire Hathaway’s holdings in Delta Air Lines (DAL), United Airlines (UAL), American Airlines (AAL) and Southwest Airlines (LUV), realising a loss of $5 billion, having bought them for $9.3 billion between mid-2016 and early 2017. Between the end of 2019 and the decision to sell, American Airlines fell 62.9%, Delta Air Lines fell 58.7%, Southwest Airlines fell 45.8%, and United Airlines fell 69.7%, for an average decline of 59.28%, assuming equal weights for the sake of simplicity.

Selling these stocks seemed at odds with Buffett’s philosophy of striking in times of fear, and of the diminishing risk entailed in falling valuations. Had the man who scoffed at volatility as a measure of risk lost his nerve at the moment of maximum volatility? Surely it was obvious to the far-sighted investor that airlines would fly again? Buffett’s decision was placed in harsher light when, within a year, American Airlines and Southwest Airlines were both up more than 80%, and United Airlines and Delta Air Lines were up around 70%. An investor who bought at the May 25, 2020 bottom would have earned a 200% return from United Airlines, and a 190% return from American Airlines. Results are their own defence, and they seemed to damn a man considered by many as the greatest investor ever. 

However, simple arithmetic reveals the subjectivity of risk, that it matters whether one is buying or selling, and explodes the notion that Buffett’s decision to sell was imprudent: having experienced a decline of 59.28% in its airline portfolio, Berkshire Hathaway needed those holdings to rise to roughly 246% of their depressed value, a gain of about 146%, just to break even, which is to say, the disutility or pain of loss is greater than the utility or usefulness of a gain. Hope may have feathers, but that is no reason for the prudent investor to let it perch on the soul. A decision to stay the course would, in effect, be a bet that not only would airlines enjoy staggering returns, but that the post-pandemic climate for airlines would be markedly better than the pre-pandemic climate.

Whereas the brave investors who plunged into airline stocks around the bottom were rewarded for their cool daring, for a time, those who held on have been punished. Between the start of 2020 and the time of writing, Southwest Airlines’ stock has declined 56.5%, American Airlines’ stock has declined 56.63%, Delta Air Lines’ stock has declined 41.77%, and United Airlines’ stock has declined 54.83%. Assuming equal weights, that is an average decline of 52.43%. The patient investor in airlines, having bravely held on all these years, is in need of a miracle. In the long run, losses define investing, and investors and managers, possessing flawed ideas on risk, stack the odds against themselves.
