China’s anti-trust authorities march to the beat of their own drum

As China has become a major global economy and grown more assertive on the global stage, the country has discovered the power of anti-trust legislation. While China’s anti-trust law rests on the same three pillars as most regimes – fighting anti-competitive agreements between companies, the abuse of a dominant position, and mergers that may eliminate or restrict competition – its implementation is increasingly different. Chinese authorities have reviewed hundreds of mergers between Chinese companies, and not one has been found objectionable. After all, if two companies are ultimately owned by the same state, does it matter whether they operate separately or merged? By that logic, how could a merger between state-owned businesses ever be anti-competitive?

In a state capitalist system such as China’s, Communist Party groups are part of every company, including private domestic firms and international joint ventures – and all foreign investment takes the shape of a joint venture with a Chinese partner once the company has three or more employees. While party groups have long held formal power in state-owned enterprises (SOEs), joint ventures face increasing pressure to let party groups approve all critical matters before they are presented to the board, based on the 2017 Communist Party directive entitled “Notice about firmly promoting writing SOE party building work into company articles of association.” Following this logic, intra-Chinese mergers have always been approved.

Mergers in the last decade

As the chart above shows, there have been no outright merger rejections and only a small number of approvals with conditions. Interestingly, the only mergers that have come under scrutiny are mergers without Chinese involvement. Due to the extraterritorial nature of Chinese anti-trust law, even mergers of companies outside China fall under its purview when they involve companies with a substantial amount of business in China. For example, in 2019 the five cases approved with conditions were KLA-Tencor (US)/Orbotech (Israel), Cargotec (Finland)/TTS (Norway), II-VI Incorporated (US)/Finisar (US), Zhejiang Huayuan Biotechnology (PR China)/Royal DSM (Netherlands), and Novelis (US)/Aleris (US). In addition, there are cases like Qualcomm (US)/NXP (Netherlands), where instead of denying the application, the Chinese anti-trust authorities simply ran out the clock. After two years of waiting for the acquisition of NXP by Qualcomm to be permitted, the companies reached the end of the contractual merger period and were forced to give up. This de facto denial was never recorded as a denial, as the Chinese anti-trust authorities simply did not rule. Due to the small size of China’s anti-trust authority, the country has plausible deniability when it delays ruling on a merger. At face value, China’s perfect record of only approving mergers remains intact, when in reality the merger was forcibly abandoned.

What’s really at stake

Cases such as those mentioned above create the appearance that Chinese anti-trust enforcement is directed not at protecting Chinese consumers but at protecting Chinese industrial policy. The conditional approvals of Marubeni’s (Japan) acquisition of Gavilon (US) and Glencore’s (Switzerland) acquisition of Xstrata (Switzerland/UK) demonstrate that China’s industrial policy leads its anti-trust merger enforcement. In both cases, China was concerned about the supply of vital commodities – grain and copper, respectively – and the mergers were approved only after significant divestitures that alleviated these concerns.

With this in mind, the acquisition of ARM (UK) from Softbank (Japan) by Nvidia (US) will be another interesting case. Most casual observers would conclude that Chinese anti-trust authorities would not be involved. Au contraire, mon ami! Almost all smartphone central processors use ARM instruction sets, and Chinese companies have built their AI and neural processing technology on them. Huawei went a step further and built its Ascend AI and Kunpeng general purpose processor programs entirely on ARM. This increasing reliance has both technical and political reasons.

President Trump’s moves to use American intellectual property as leverage in trade battles with China, as well as to restrict its use in military and dual-use applications, have complicated the lives of Chinese high-tech companies, and this is likely to continue during President Biden’s administration. In reaction, China has accelerated its Made in China 2025 project, which is focused on reducing its dependency on foreign technology and products and shifting to non-American suppliers. If the Nvidia acquisition of ARM goes through, another key technology will come more closely under the control of US authorities, giving them another potential tool to assert pressure on China. It would also give Nvidia a significant boost in the AI competitive race that China considers one of its highest priorities. Nvidia is a leader in network-based AI, and ARM is a leader in device-based AI, also known as edge AI. Combining the two companies makes them a much more formidable competitor, allowing them to cross-pollinate network AI with edge AI technology and vice versa. Both companies have substantial business in China and hence fall under Chinese anti-trust law and are subject to review.

Considering China’s track record, it will almost inevitably either block or simply refuse to approve the Nvidia/ARM transaction, both to protect its domestic industry from further US sanctions and restrictions and to prevent a stronger competitor in the AI marketplace. It is more likely that China will simply run out the clock on the merger; a more aggressive and higher-profile move would be an outright denial. That would send a much stronger signal to the United States than passive-aggressive non-approval and would be a harbinger of a more adversarial phase in the relationship between the two countries.

Tipping the scales of innovation

When Nvidia announced that it was in the process of buying Arm from Softbank, many analysts and industry observers were exuberant about how it would transform the semiconductor industry by combining the leading data center Artificial Intelligence (AI) CPU company with the leading device AI processor architecture company. While some see the potential advantages that Nvidia would gain by owning ARM, it is also important to look at the risks that the merger poses for the ecosphere at large and the course of innovation.

Understanding the two companies’ business models and how they interact highlights what is at stake in the proposed merger. Nvidia became the industry leader in data center AI almost by accident. Nvidia became the largest graphics card provider by combining strong hardware with frequently updated software drivers. Unlike its competitors, Nvidia constantly improved not only its newest graphics cards but also past generations, with driver updates that made existing cards faster. This extended the useful life of graphics cards but, more importantly, it also created a superior value proposition and, therefore, customer loyalty. The software also added flexibility, as Nvidia realized that the same technique that makes graphics processing on PCs efficient and powerful – parallel processing – is also suitable for other heavy computing workloads like bitcoin mining and AI tasks. This opened up a large new market that its competitors could not follow it into, as they lacked suitable software capabilities. It made Nvidia the market leader in both PC graphics cards and data center AI computation with the same underlying hardware and software. Nvidia further expanded its lead by adding a parallel computing platform and application programming interface (API), CUDA, to its graphics cards, which laid the foundation for Nvidia’s strong performance and leading market share in AI.

ARM, on the other hand, does not sell hardware or software. Rather, it licenses its intellectual property to chip manufacturers, who then build processors based on the designs. ARM is so successful that virtually all mobile devices use ARM-based CPUs. Apple, which has used ARM-based processors in the iPhone since its inception, is now also switching its computer processors from Intel to internally designed ARM-based CPUs. ARM processor designs are now so capable, and so focused on low power usage, that they have become a credible threat to Intel’s, AMD’s, and VIA Technologies’ x86-based CPUs. Apple’s move to eliminate the x86 architecture from its SKUs is a watershed moment, in that it solves a platform development issue by allowing developers to natively build data center apps on their Macs. Consequently, it is only a matter of time before ARM processor designs show up in data centers.

This inevitability highlights one of the major differences between ARM’s and Nvidia’s business models. ARM makes money by creating processor designs and licensing them to as many companies as possible that want to build processors. Nvidia’s business model, on the other hand, is to create its own processor designs, turn them into hardware, and then sell an integrated solution to its customers. It is hard to overstate how diametrically different these business models are, and hard to imagine how they could be reconciled within the same company.

Currently, device AI and data center AI are innovating and competing around what kinds of tasks are computed and whether the work is done on the device, at the data center, or both. This type of innovative competition is the prerequisite for positive long-term outcomes, as the marketplace decides what the best distribution of effort is and which technology should win out. With this competition in full swing, it is hard to see how a single CEO could reconcile this battle of business models within one company. Even more so, the idea that one division of the new Nvidia – ARM – would sell to Nvidia’s competitors in, for example, the data center or automotive industries and make them more competitive is just not credible, especially for such a vigorous competitor as Nvidia. It would also not be palatable to shareholders for long. The concept of neutrality that is core to ARM’s business would go straight out of the window. Nvidia wouldn’t even have to be overt about it. The company could tip the scales of innovation towards its core data center AI business simply by underinvesting in the ARM business, or in industries it chooses to deprioritize in favor of the data center. It would also be extremely difficult to prove underinvestment if Nvidia simply maintained ARM’s current R&D spend rather than increasing it, as another owner might – an owner that sees ARM’s AI business as a significant growth opportunity rather than the threat Nvidia might see.

It is hard to overestimate the importance of ARM to mobile devices and, increasingly, to general purpose computing – more than 130 billion ARM-based processors had been made as of the end of 2019. If ARM is somehow impeded from innovating as freely as it has, the pace of global innovation could very well slow down. The insidious thing about such a slowdown is that it would be hard to quantify and impossible to rectify.

The proposed acquisition of ARM by Nvidia also comes at a time of heightened anti-trust activity. Attorneys general of several states have accused Facebook of predatory conduct. New York Attorney General Letitia James said that Facebook used its market position “to crush smaller rivals and snuff out competition, all at the expense of everyday users.” Among the anti-competitive conduct cited as the basis for the anti-trust lawsuit against Facebook were predatory acquisitions meant to lessen competitive pressure from innovative companies that might become a threat to Facebook’s core business.

The parallels are eerie and plain to see. The acquisition of ARM by Nvidia is all too similar to Facebook’s acquisitions of Instagram and WhatsApp, in that both allow the purchasing entity to hedge its growth strategy regardless of customer preferences while potentially stifling innovation. And with Facebook in the driver’s seat, it could take advantage of shifting customer preferences: in some countries and customer segments the core Facebook brand is seen as uncool and old, while Instagram is seen as novel and different from Facebook. From Facebook’s perspective, the strategy keeps the customer in-house.

The new focus by both the states and the federal government, Republicans and Democrats alike, on potentially innovation-inhibiting acquisitions – highlighted by lawsuits examining past acquisitions in Facebook’s and Google’s cases – makes it inevitable that new mergers will receive the same scrutiny. It is likely that regulators will conclude that the proposed acquisition of ARM by Nvidia looks and feels like an act meant to take control of the engine that fuels the most credible competitors to Nvidia’s core business, just as ARM and its customers expand into the AI segment and become likely threats to Nvidia. In a different time, regardless of administration, this merger would have been waved through, but it would be surprising if that were the case in 2021 or 2022.

C Band Auction Takes Off

A week into the C-Band auction, all signs point to an intense struggle for spectrum among the auction participants. With 5G deployments in full swing, the 280 MHz of mid-band licenses on offer sit directly in the ‘Goldilocks zone’ that combines attractive propagation characteristics – good in-building penetration and range – with 20 MHz blocks large enough to deploy significant capacity and speed for 5G.

For bidders looking to use spectrum quickly, the 100 MHz that makes up the A block is scheduled to be cleared as early as 2021, while B and C block licenses may not be available until 2023. As such, A block licenses for 46 of the top 50 markets can be bid on separately and will likely come at a premium compared to B and C block licenses in the same market.

While we won’t know who bid on which markets until the auction is over, the FCC reports the demand for licenses versus those available for each round via its public reporting system. In every market where demand exceeds supply, the price increases by 10% for the next round. Ten percent may not sound like much initially, but compounding soon catches up, and markets with intense bidding quickly get expensive. For example, after ten rounds in which demand exceeds supply the price more than doubles, after twenty rounds it has increased more than six-fold, and after fifty rounds it has grown to well over 100 times the original cost.
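
To make the compounding concrete, here is a quick back-of-the-envelope calculation (a simple sketch assuming a clean 10% increase in every round, which real bidding only approximates):

```python
# Sketch: how a 10% per-round increase compounds when demand exceeds supply every round.
# Prices are normalized to a starting value of 1.0.

def price_multiplier(rounds: int, increase_per_round: float = 0.10) -> float:
    """Price after `rounds` consecutive increases, relative to the starting price."""
    return (1 + increase_per_round) ** rounds

for r in (10, 20, 50):
    print(f"After {r} rounds: {price_multiplier(r):.1f}x the starting price")

# Approximate output:
# After 10 rounds: 2.6x the starting price
# After 20 rounds: 6.7x the starting price
# After 50 rounds: 117.4x the starting price
```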

Price escalation invariably forces decisions on even the most well-heeled bidders, but we have not yet reached that point with the C-Band auction. So far the number of markets with bids in excess of supply has increased, not decreased, resulting in over $10.5B in gross proceeds across the 411 markets offered through round 20.

What is interesting about the C-Band auction so far is that the bidding volume for smaller markets, which typically heats up later in an auction, has heated up early. During the first ten rounds, demand for roughly two-thirds of markets exceeded supply, and those markets therefore increased in price by 10% every round. Starting in round 12, bidding volume increased further. This increased volume catapulted the share of markets with price increases from 68% to 81% of all markets, and it continued to build to over 90% of all markets by round 17.

Digging a bit deeper, the jump in markets with more demand than supply that began in round 11 was driven by increased interest in smaller-population markets while prices for larger markets were still not settled. Through round 10, only about half of the lower-tier markets ranked 100-400 by population had more bids than supply, but by round 15 over 86% of them did.

But what about those A sub block licenses that could help the winners race to deploy mid-band 5G? There are only 5 sub blocks available per market, yet bidding for many of the top markets through 20 rounds still shows demand in the double digits in excess of supply. In fact, demand for A block markets is still so strong that in the first 20 rounds every A sub block in the 46 markets increased in price every round. That adds up to a more than six-fold price increase over 20 rounds and over $3.5B of the $10.5B in total gross proceeds across the entire auction.

Bidders seem to be slowly adjusting to the expectation that they may not win the entire A block. In round 18, demand for the top 39 markets dropped by 2 to 3 units across all markets, likely indicating a coordinated pull-back by one bidder. While demand has fallen across the top markets, none of the A block markets have reached bidding equilibrium, where supply equals demand and price increases cease. At the current pace, the licenses in A sub block markets would collectively be worth over $5B in just five more rounds and over $10B by round 32.
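
As a rough sanity check on that projection (a sketch that simply assumes the round-20 aggregate keeps compounding at 10% per round, the maximum escalation scenario):

```python
# Sketch: projecting the aggregate A sub block value if every block keeps rising 10% per round.
# Uses the ~$3.5B attributed to A sub blocks through round 20; actual bidding will diverge.

a_block_value_round_20 = 3.5e9  # dollars

def projected_value(extra_rounds: int, base: float = a_block_value_round_20) -> float:
    return base * 1.10 ** extra_rounds

print(f"Round 25 (5 more rounds):  ${projected_value(5) / 1e9:.1f}B")   # ~$5.6B
print(f"Round 32 (12 more rounds): ${projected_value(12) / 1e9:.1f}B")  # ~$11.0B
```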

Within the A sub block, which markets are most popular among bidders? As always, the top 10 markets have among the highest demand, but the market with the most demand in excess of supply so far is Salt Lake City, the 27th largest market in the auction, with four bids for every available block (5 blocks available, 15 bids in excess of supply). A cluster of markets follows Salt Lake City for second-most activity: Chicago, Dallas, Miami, Houston, Orlando, Las Vegas, Kansas City, Austin and Milwaukee all have 12 bids over supply (total bids of 17 each).

So what can we learn from the first week of the C-Band auction? At $10.5B through round 20, the level of interest in mid-band 5G spectrum is sky high. Verizon’s and AT&T’s well-understood need for mid-band 5G spectrum is surely playing a role in the bidding intensity, but bidding volumes suggest there are also other players willing to throw their hats in the ring, particularly for the A block, which will be available soon. Regardless of who wins, it’s likely we’ll see consumers enjoying the benefits of C-Band 5G sooner rather than later.


Attempts to Close the Digital Divide — What has worked and what hasn’t?

Over the past 15 years, there have been several government initiatives to expand the adoption of broadband in the United States. At the same time, industry has been busily focused on extending the reach and capacity of both fixed and mobile broadband networks.  Yet, a digital divide still exists.  Why?  Let’s review the history here.

Since xxx, the cable and telecom industries have successfully provided broadband connectivity to more than 110.8 million households, adding about 2.4 million households per year. Gigabit speeds are now available to 85% of households. The broadband companies expand their footprint in an economically responsible way, as they are accountable to their shareholders. Still, that leaves 17.7 million households to cover. With the number of households increasing by roughly one million per year, at the current pace closing the gap would take around 13 years. The current pandemic, with its work-from-home and study-from-home demands, shows us that we do not have 13 years to close this digital divide. In order to make the best possible decision on how to solve the problem, we should look at what has and has not worked in the past.
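
The 13-year figure follows directly from the numbers above (a simple sketch that treats the build-out rate and household growth as constant):

```python
# Back-of-the-envelope: years to close the broadband coverage gap at the current pace.
# Figures are the ones cited above; both rates are assumed constant.

uncovered_households = 17.7e6     # households without broadband today
newly_covered_per_year = 2.4e6    # households the industry connects per year
new_households_per_year = 1.0e6   # household formation per year

net_closure_per_year = newly_covered_per_year - new_households_per_year
years_to_close = uncovered_households / net_closure_per_year
print(f"Years to close the gap: {years_to_close:.1f}")  # ~12.6, i.e. roughly 13 years
```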

One of the most hotly debated solutions proposed to close the digital divide is to have the government support municipal broadband, a catch-all term for providers of broadband that includes telephone and electric cooperatives. The general caveat about government entering what is a private business market is what economists call crowding out. A for-profit company typically has no chance of competing against a government entity: the latter does not have a profit goal and can provide service at a loss for an indefinite period of time, as it has access to government revenue in the form of taxes or bonds to cover the losses. At the same time, the government has a poor record of adapting to a rapidly changing technological environment. The pro-municipal-broadband argument holds that if for-profit companies are not offering services in a particular geographic location, they cannot be crowded out.

Electric cooperatives were founded in the 1930s to solve the 20th-century equivalent of the broadband problem, and that solution is instructive for our current situation. The Institute for Local Self-Reliance, an organization in favor of dispersing economic power and ownership, identified eight municipal networks that failed in the United States. The common thread of failure was inexperience in running customer-facing organizations, as neophytes struggled to learn a new skill set. This highlights the gap between running a relatively small number of government services and running a much larger and more technically complicated broadband network, and the difficulty of recruiting people with the right existing skill sets.

The most likely scenario for success is the addition of broadband service to an existing electric or telephone cooperative’s portfolio. In this case, an entity that has run a customer-facing operation and network for decades simply expands its service. These cooperatives already serve mostly rural customers and do not crowd out for-profit cable and telecom providers. The FCC has recognized this and has explicitly included electric cooperatives in the Connect America Fund II initiative (which we will discuss later).

Source: ILSR

As we can see from the map above, the opportunity for rural broadband coverage from cooperatives is significant, as rural areas – often in the South and the Great Plains – have low population density. Perhaps engaging both electric and telephone cooperatives in rural areas is an effective way to close the digital divide there. These efforts could take the form of public-private partnerships and potentially avoid the pitfalls of muni-broadband.

Muni-broadband has failed for different reasons. Research shows that most of the failed entities are urban, often engaging in direct competition with incumbent providers. Examples such as Monticello, MN, Salisbury, NC, and Tacoma, WA come to mind. In other cases, municipal broadband networks such as those in Muscatine, IA, and UTOPIA in Utah had to be bailed out by taxpayers or the electric cooperative because they could not stay afloat. We also have Provo, UT, and Groton, CT, which ended up selling to private companies at a great loss to taxpayers; Burlington, VT, where a lack of oversight and the cover-up of incompetence led to failure; and Bristol, VA, where corruption meant the end of the network.

In 2010, Google announced that it would start providing broadband fiber connectivity in a number of cities to between 50,000 and 500,000 households. Cleverly, Google put out a request for information asking municipalities to apply to have Google offer fiber in their city or town. This reversed the traditional relationship between provider and municipality. Traditionally, the provider asks the municipality if it can provide service in the area, and the municipality responds with its demands in terms of fees and extra services. Ever wondered why so many pools, parks, and sports areas are sponsored by telecom and cable companies? It was one of the demands cities made in exchange for allowing the service provider to offer service in town. By inverting the relationship and asking towns to apply to Google for consideration, Google shifted the power dynamic and was able to receive terms so favorable that telecom and cable providers went to cities and demanded the same terms and conditions Google got – terms they had never been able to win on their own. Under equal-treatment rules, these cities had to extend the favorable Google terms and conditions to every other provider. Kansas City was the first city where Google Fiber launched, followed by Austin, Provo, and fifteen more cities. The Provo network was a defunct municipal network that had been built for $39 million and was then sold to Google for one dollar. After realizing the high cost of building a fiber network and the long wait for a payback, Google first halted further network expansion after it had deployed in five cities, and then switched to a public-private partnership (PPP) model in which the municipality builds the network and incurs the cost while Google sells the service. In addition, Google made an acquisition in the fixed wireless broadband space to also provide broadband wirelessly. This has slowed down the expansion significantly, but the scope has grown beyond what can be called a trial – as Google likes to call every endeavor it gets into – as Google now covers 18 cities.

The 19th market for Google Fiber will be West Des Moines, Iowa. Similarly to Huntsville, Alabama, the city will build a fiber network for $39 million; in exchange, Google will pay the city $2.25 for each household that connects to the network. Over the 20-year agreement, Google will pay at least $4.5 million to the city. The project is slated to be completed by the end of 2023. By entering PPPs, Google gets the various cities to pay for the expensive build-out and makes money by providing the service. Google’s experience highlights that even one of the largest companies in the world does not have the focus, wherewithal and patience to actually build out a nationwide system, but relies on the government to pay for the physical buildout.
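
For a sense of scale, the minimum payment implies a fairly modest subscriber base (a rough sketch that assumes the $2.25 is paid per connected household per month, which the 20-year term suggests but the figures above do not state explicitly):

```python
# Rough sketch of what the $4.5M minimum over 20 years could imply.
# Assumption (not stated above): the $2.25 per connected household is a monthly payment.

minimum_total_payment = 4.5e6       # dollars over the agreement
payment_per_household_month = 2.25  # dollars (assumed monthly)
months = 20 * 12                    # 20-year agreement

household_months = minimum_total_payment / payment_per_household_month
average_connected_households = household_months / months
print(f"Implied average connected households: {average_connected_households:,.0f}")
# ~8,333 households on average over the 20 years, under the stated assumption
```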

When the government helps in areas with adverse circumstances, either through low population density or low income, a business case can be made that allows the deployment of broadband services. The societal good that comes from broadband in the form of access to online learning for students, job resources for adults and an overall increase in computer skills will create greater long-term benefits than long-term costs.

On the government side of the equation, the FCC has been very focused on allocating monies (and spectrum) for broadband.  The FCC’s Connect America Fund (CAF) was born out of the National Broadband Plan from 2010 aiming to broaden the availability of broadband. Now in its second iteration, CAF II, the fund is a reverse auction subsidy for broadband providers, satellite companies and electric cooperatives to provide coverage in underserved areas.

At the end of the CAF II auction, $1.49 billion of subsidies over ten years were awarded to provide broadband and voice services to 700,000 locations in 45 states highlighted in the map above. Prospective providers successively bid on who would cover the underserved market for less and less subsidy. This ensures that the area is covered for the least cost to taxpayers.

CAF II and other government programs are increasingly closing the gap, with more than $20.4 billion committed over the next 10 years. The US Department of Agriculture has been one of the longest-standing sources of support for bringing broadband to rural America, with $600 million per year from the ReConnect program. In October 2020, the FCC will launch the auction for the Rural Digital Opportunity Fund (RDOF), a 10-year, $20.4 billion program to bring broadband to areas that do not have broadband as defined by a 25 Mbit/s download speed and 3 Mbit/s upload speed.

The biggest controversy around CAF II is its mapping issues. In a nutshell, if only one location in a census tract has access to broadband, it is assumed that all locations do. In a significant number of cases this is not true: some locations have access while others do not. This is especially true in urban areas, where we still have some high-population pockets that lack access to broadband. Some FCC commissioners wanted to delay additional projects until the mapping problem was solved, whereas the majority voted to release the funds and work on the problem concurrently, since the underserved markets remain underserved even under a tighter standard. While criticized for its complexity and for a lack of clarity about how overachieving the target goals is recognized and affects winning the subsidy, the program has overall been lauded as a success.

When we look at what has worked and what hasn’t, it becomes apparent that the for-profit system has worked to give 90% of Americans access to at least one broadband provider. The problem is the hard-to-reach remainder, in both urban and rural environments. No matter how we look at the issue, it becomes clear that government and cooperatives have a role to play in alleviating the problem, because this is a societal problem that needs fixing.

  • Since Silicon Valley giants like Google with almost infinite resources have balked at building out fiber in many urban areas and are relying on cooperatives or municipalities to foot the bill, the economics of building out hard to reach parts of the United States are even more difficult.
  • The broadband industry is investing between $70 billion and $80 billion per year to connect Americans, and the wireless industry is investing another $25 billion to $30 billion; that a gap remains despite this spending shows the industry can’t shoulder the task alone.
  • Electric cooperatives, as non-profits, have a longer time horizon, which makes their investment in underserved rural areas easier, as they already have established relationships with the prospective customers and an established connection to the location.
  • The CAF and other funds have worked by providing the minimal subsidy needed to cover underserved markets; we just need more of them. Some have complained that the approach provides for only one choice, ignoring that 85% of households have a choice of two wireline providers and 99% of Americans can choose between at least three mobile service providers. The counter-argument for very rural parts of the United States is that one choice in an economically unprofitable market is better than no choice. One also has to consider that requiring every location to have two choices roughly doubles the cost of deployment and leaves half of the infrastructure idle.
  • The program will work even better with more accurate mapping of underserved areas, which would broaden its scope from mainly rural areas to urban ones as well and make it location-agnostic. If a follow-up program aims to bring not only access but also competition to an underserved area, the government would have to not merely double but probably quadruple if not quintuple the subsidy, because the cost of deployment doubles while each provider’s expected revenue halves (a rough illustration follows below).
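
The arithmetic behind that last point is straightforward (an illustrative sketch with hypothetical numbers, assuming the subsidy roughly equals deployment cost minus expected revenue):

```python
# Illustrative sketch (hypothetical numbers): the subsidy needed to fund two competing
# networks in the same underserved market versus one.
# Assumes subsidy ~ deployment cost minus expected revenue over the support period.

deployment_cost = 100.0   # cost for one provider to build the area (arbitrary units)
expected_revenue = 70.0   # revenue one provider can expect from the whole market

single_provider_subsidy = deployment_cost - expected_revenue         # 30.0
# With two providers, the network is built twice and each captures ~half the revenue.
two_provider_subsidy = 2 * (deployment_cost - expected_revenue / 2)  # 130.0

print(single_provider_subsidy, two_provider_subsidy)
print(two_provider_subsidy / single_provider_subsidy)  # ~4.3x: quadruple or more
```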

The consequence of not building out areas that lack broadband access today – whether urban or rural – is to perpetuate the current trend of parts of society being unable to participate in the economic and social life of our country. As 2020 has shown us, broadband internet has become the lifeline of businesses, and video conferencing has become a necessity for employees to work remotely. This means that many better-paid jobs are closed to people simply because of where they live, regardless of whether the area without broadband is urban or rural. Left unsolved, this will force a further depopulation of rural America and a flight from unserved urban areas, as critical employees and business owners are effectively prevented from earning a living there. At least as important is equal access to education. Student homework and tests cannot be counted for grading unless every student in the class is able to participate. Without broadband access, not only the children who live in unserved areas are affected but also their classmates who have access.

European Lessons on Broadband — A Look at Germany

For a country known for being efficient, organized, and technologically advanced, the state of Germany’s mobile networks constitutes a rare black mark. Germany is the fourth largest economy in the world, with 82 million inhabitants (double California’s population in a smaller area) and a highly efficient and advanced high-tech manufacturing industry. Where it struggles is with the digitalization of the economy and with both fixed and wireless networks. Germany’s wireless networks are ranked 32nd out of 34 countries, ahead only of Ireland and Belarus. No other European country has a larger share of 3G users than Germany, and it is not uncommon to fall back to EDGE networks in both urban and rural areas. The reasons for this atypical performance lie with the actions of regulators and companies alike.

In 2010, Germany auctioned 4G licenses with the requirement that 97% of the population be covered by 4G within five years. Yet even in 2020, every operator still falls short of the 2015 buildout requirement. How could this happen in a country that prides itself on following the rules?

| Region | Requirement | Telefónica | Telekom | Vodafone |
| --- | --- | --- | --- | --- |
| Baden-Württemberg | 97% | 82.7% | 96.01% | 97.7% |
| Bayern | 97% | 80.7% | 97.58% | 98.3% |
| Berlin | 97% | 100% | 99.96% | 100% |
| Brandenburg | 97% | 62.6% | 97.5% | 99% |
| Bremen | 97% | 99.9% | 99.99% | 100% |
| Hamburg | 97% | 100% | 99.99% | 100% |
| Hessen | 97% | 76.7% | 98.39% | 97.4% |
| Mecklenburg-Vorpommern | 97% | 72.9% | 97.52% | 99.3% |
| Niedersachsen | 97% | 85.9% | 98.6% | 99% |
| Nordrhein-Westfalen | 97% | 94.3% | 99.28% | 99.4% |
| Rheinland-Pfalz | 97% | 65.4% | 96.48% | 97% |
| Saarland | 97% | 78.9% | 95.43% | 97.9% |
| Sachsen | 97% | 80.9% | 98.12% | 99% |
| Sachsen-Anhalt | 97% | 80.6% | 98.49% | 98.7% |
| Schleswig-Holstein | 97% | 90.6% | 98.53% | 99.9% |
| Thüringen | 97% | 73.2% | 97% | 98.1% |
| Nationwide | 98% | 84.3% | 98.1% | 98.6% |
| Interstates | 100% | 77.9% | 97.6% | 96% |
| Rail | 100% | 80.3% | 96.4% | 95%*** |

Source: Bundesnetzagentur, May 2020

With every new generation, German mobile operators suffer from low technology adoption because they use the same playbook over and over again (for 3G, 4G and now 5G), producing the same poor outcome. Wireless licenses in Germany and most of Europe are tied to a specific technology, whereas US licenses can be used with any technology, which allows a more efficient transition from one generation to the next. Nevertheless, German operators rightfully recognize the high value of new spectrum for next generation technology and bid more money per capita for next generation licenses than operators anywhere else in Europe. As a result of the significant investment in licenses, German operators position the next generation service as a premium product with a significant price premium. For this reason, consumers and businesses are reluctant to adopt next generation service plans and devices, leading to suppressed next generation revenues and profits. These low profits are then used as a justification to limit capital investment in next generation technologies. Consequently, German wireless networks cover less area than they can and should. This self-fulfilling prophecy is now in its third iteration: we have seen it with 3G and 4G, and now with 5G in Germany.

US carriers start from the same point of recognizing the value of next generation technology and spectrum, and US spectrum auctions have yielded the highest values globally. Unlike their German counterparts, however, US mobile operators make the new technology available at the same price point as the previous generation, creating greater profitability through a significantly lower cost structure, given that next generation technology typically lowers the cost per gigabyte by 90% over the previous generation. As a result, US mobile operators see a rapid shift of usage from the old generation to the new one as customers upgrade their devices to take advantage of the new networks. By holding price points steady for next generation networks with their faster speeds, US operators are under less price pressure than European operators, allowing them to invest heavily in their networks and differentiate on coverage. The US ranks fifth in the world for 4G availability, behind South Korea, Japan, Norway and Hong Kong, which combined cover only 9.3% of the area of the United States. Everyone wins in the US approach: customers get faster access to next generation technology, and operators make a higher profit.

Germany’s cost problem is compounded by a legal and regulatory regime that, unlike Section 332 of the US Telecommunications Act, does not favor the building of cell sites. German building permits are notoriously lengthy endeavors, and frequent lawsuits against cell sites lead to drawn-out legal reviews that slow down network buildout. None of these policies are friendly to capital investment in wireless networks.

The problem of how to cover thinly populated rural areas in Germany persists. Mobile operators complain that it is unprofitable to cover many rural areas. During the 2018 Mobilfunkgipfel (Mobile Summit) between the German government and mobile operators, the government committed to share part of the cost of covering rural parts of Germany.

Coverage issues in rural parts of a country are not unique to Germany. Germany’s neighbor France – roughly the size of Texas – has tackled the issue in three different ways. For the 2G rollout, mobile operators, the central government, and the departments (provinces) with coverage gaps split the cost of covering rural France three ways. In 2015, the French government set aside $1 billion to close the 3G coverage gaps. In 2018, the French government came to an agreement with the four incumbent operators to extend their license terms in exchange for closing coverage gaps and jointly installing more than 5,000 masts and antennas.

There are four key lessons that we can take away from the German and French examples:

  1. The business model matters. American operators are providing world class service, especially considering the size of the country. The US operator model of capturing profit through cost reduction rather than price increases is the superior model. It results in faster and higher adoption of next generation technology and greater capital investment. The one US carrier who tried to charge a premium for 5G, Verizon, has two European executives at the helm. Customer pressure quickly forced Verizon to abandon its European model of a price premium and revert back to the US model.
  2. A mobile-friendly regulatory regime that enables the rapid building of new cell sites makes a positive difference. It is a no-brainer that when it is difficult for operators to build new sites, coverage suffers.
  3. Even medium-sized, economically prosperous countries like France and Germany have problems economically building out mobile networks. While it is more cost-effective to build out rural areas with wireless rather than fixed technology, the business case is far from a foregone conclusion.
  4. The comparison between the US and more tightly regulated countries shows that incentives and support for wireless networks without red tape and strings attached are creating better results.

Mobile is Colorblind

Stay-at-home orders, school closings, and social distancing have raised the issue of the digital divide in the United States. While the availability and affordability of connectivity are important, owning a device to access the internet is equally important. Broadband without a device is even less useful than a device without a network. A government program that tries to close the digital divide needs to pay attention to where a digital device gap does and does not exist.

Nielsen just published its Total Audience Report 2020, which provides insights on device ownership by race. The data shows both similarities and significant differences in device ownership, and it should inform policy makers.

| Device | Total Mar '19 | Total Mar '20 | Black Mar '19 | Black Mar '20 | Hispanic Mar '19 | Hispanic Mar '20 | Asian Mar '19 | Asian Mar '20 | White Mar '19 | White Mar '20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DVD/Blu-ray Player | 62% | 57% | 54% | 47% | 52% | 45% | 47% | 44% | 65% | 60% |
| DVR | 55% | 52% | 52% | 49% | 49% | 45% | 44% | 42% | 56% | 53% |
| Smart TV | 45% | 52% | 42% | 51% | 54% | 61% | 57% | 65% | 45% | 51% |
| Internet Connected Device | 40% | 47% | 41% | 48% | 42% | 48% | 58% | 62% | 39% | 47% |
| Game Console | 43% | 40% | 43% | 42% | 54% | 52% | 50% | 46% | 41% | 39% |
| Computer | 79% | 78% | 68% | 68% | 72% | 72% | 89% | 89% | 82% | 80% |
| Smartphone | 92% | 93% | 93% | 95% | 97% | 97% | 97% | 97% | 91% | 93% |
| Tablet | 64% | 63% | 57% | 55% | 63% | 61% | 74% | 72% | 65% | 64% |
| Internet-enabled TV-Connected Devices | 71% | 76% | 69% | 75% | 78% | 82% | 86% | 90% | 70% | 75% |
| Subscription video on-demand | 69% | 74% | 63% | 70% | 73% | 77% | 81% | 84% | 70% | 74% |

Source: Nielsen Total Audience Report 2020 – https://www.nielsen.com/us/en/insights/article/2020/marketers-its-time-to-engage-asian-american-consumers/

The most-owned device in the United States is the smartphone: ninety-three percent of all Americans own one. Contrary to common stereotypes, there is no significant difference by race when it comes to smartphone ownership – mobile is colorblind. White Americans are actually the laggards at 93% smartphone ownership, as Black (95%), Hispanic (97%) and Asian (97%) Americans all report higher ownership. The high ownership is driven by the significant utility of smartphones – the Swiss Army knives of the connected world. A smartphone fits in your pocket, lets people talk, text and use the internet, and is readily financed through mobile operators or device manufacturers, bringing the cost of the device down to a manageable monthly installment.

Where we do see substantial differences is in computer and tablet ownership. As of March 2020, 78% of Americans owned a computer. Eighty-nine percent of Asian Americans own a computer, followed by 80% of whites, but only 68% of Blacks. Similarly, 63% of Americans own a tablet. Again, Asian Americans have the highest device ownership with 72%, followed by white Americans with 64%, but only 55% of Black Americans own a tablet. Tablets and computers are essential to closing the homework gap and, even more importantly, the testing gap. Unless every child and student has access and is able to participate in online learning and testing, the progress and grades for every child in the class cannot be counted in their official school record. This makes universal access critical for all, regardless of income. We need to make the ownership of computers and tablets as colorblind as the ownership of smartphones.

While smartphone ownership increased from March 2019 to March 2020, computer and tablet ownership declined, together with ownership of other, increasingly obsolete technology hardware like Blu-ray/DVD players, DVRs, and game consoles.

Blu-ray/DVD players and VCRs have been supplanted by video-on-demand services, which have seen a significant increase in adoption. Game consoles have also suffered from the shift to mobile gaming on smartphones and from the lack of new console introductions. Both the Microsoft Xbox One and the Sony PlayStation 4 are seven years old and technologically obsolete, with both devices receiving a next-generation model at the end of 2020. Computers, including laptops, as well as tablets have also been struggling, as they have lacked the sorts of new features that have consumers chomping at the bit to buy a new one.

Any stimulus plan that is genuinely interested in closing the digital divide and the resulting homework and testing gaps needs to address the device gap as well. Broadband networks without the right devices are like one-handed clapping. To improve learning and to raise and broaden the standard of digital economy skills, every student should have a device that can access broadband networks. If a student’s family cannot afford such a device, the government should provide aid to acquire one. If the government is serious about bringing the high-tech device supply chain back to the United States, it can require that the devices be manufactured in the United States and that a proportion of the components come from the United States as well, so that the stimulus money actually stimulates the US economy.

The proliferation of 5G launches offers a significant opportunity for the government to stimulate innovation, akin to President Franklin D. Roosevelt’s Arsenal of Democracy initiative or the space program’s myriad spin-off innovations that have made our lives better.

5G-capable devices, with both x86 and ARM processors, should be at the core of such a program. American companies like Intel, AMD, and Qualcomm would provide the technology that is at the heart of these devices – the processor – and sell it to any device manufacturer, while Apple would build ARM processors for its own devices. Such a device stimulus plan could be an important accelerant for ARM processors in computers and laptops. ARM processors are at the heart of smartphones and tablets because they are very energy and heat efficient, but they have only slowly entered the computer world, even as their compute power approaches and in some cases overtakes that of x86 processors. Qualcomm, together with Microsoft, has launched an ARM laptop, and Apple is rumored to be using its A-series processors in upcoming MacBooks. China’s Huawei has designed its entire Ascend AI processor line and its Kunpeng general purpose computing line on ARM technology and plans to build an entire ecosphere around them with a $1.5 billion investment over the next five years. The United States should at least be able to match that kind of investment to make sure it does not fall behind if there is a significant shift to ARM computing.

With the country on the brink of a slow and painful recovery from the pandemic, the time is now for Congress to direct money where it will have the biggest economic and societal impact.  Right now,  closing the digital divide and the homework gaps is precisely such an opportunity.  Enabling more Americans to afford an Internet-capable device is critical to the country’s recovery, and one of the fastest ways to give a voice to more black and brown Americans who are otherwise being left out of the country’s economic and other successes.

Broadband 2020: how the pandemic changed usage and priorities

A new report called “Broadband 2020” by Recon Analytics shows that over 40% of employees in the United States are able to telecommute. The Department of Labor’s Bureau of Labor Statistics defines the professional workforce as all workers in the “management, professional, and related occupations” colloquially known as white collar workers, which make up 41.2% of all jobs in America. This means that basically every white collar worker is able to telecommute. This highlights the dramatic change that the American workplace has undergone during the pandemic.

The pandemic also has the potential to halt or even reverse the decades-long migration of Americans from rural to urban settings. A slight majority (50.9%) of Americans that can telecommute are contemplating moving to a smaller city or town as the pandemic has prompted many Americans to reevaluate their priorities and living conditions.

What is surprising is that even 31% of Americans that cannot telecommute are considering moving to a smaller city or town. It shows that the luster of metropolitan areas has been waning.

But not all new places are equal, so we asked what factors would stop people from moving to a new place. The results were equal parts predictable and surprising:

More than a third of Americans do not have any reasons that would prevent them from moving to a different place. Where it gets interesting is the reasons why people would not move. The number one reason for not moving to a different town or village is a pay cut, cited by 31.6% of respondents. Companies like Facebook have announced that employees who work from home in lower-cost areas – and everything is lower cost than Silicon Valley – would receive a pay cut. A move that ties compensation to location rather than contribution would prevent a significant number of employees from moving away from Silicon Valley, which is already experiencing a severe housing shortage and overloaded roads. Facebook’s reasoning also allows a glimpse at its compensation philosophy, which seems to focus more on competitive factors than on what is good for the community or the employee. Almost as many respondents, 31%, would not move to a town or village without broadband, just ahead of access to quality health care at 30.1% – and that in the midst of a pandemic. One has to recognize the magnitude of this finding: the availability of broadband, access to quality healthcare, and a pay cut are roughly equally important in the minds of Americans during a pandemic and recession.

At 36.3%, the 45-54 age segment considers the lack of broadband to be the most significant barrier to moving, followed by the 25-34 age segment with 35.8%. More than a quarter of seniors (26.1%) will not move to a new location if broadband isn’t readily available.

Broadband is even more important than politics: 22.5% of Americans would not move to an area with what they consider an incompatible political climate, significantly fewer than those who cite broadband. The 45 to 54 age segment is most focused on politics, with 30.9% citing an unwillingness to move due to an incompatible political climate. The next most polarized age segment is those over the age of 65, where 22.1% say an incompatible political climate would prevent them from moving.

The lack of a nearby airport or a buzzing nightlife was the least important in people’s minds. Only 13.7% of respondents thought that not having an airport within a 50-mile radius would prevent them from moving there. A buzzing nightlife or restaurant scene is even less on people’s minds. Only 9.6% of 18 to 24-year-olds find it an obstacle to move, whereas 13.1% of the 25 to 34 age segment needs a buzzing nightlife and restaurant scene.

We also asked people what they consider broadband. The median American considers 50 Mbit/s download and 5 Mbit/s upload to be broadband. People’s expectations are running ahead of the FCC’s definition of broadband, which currently sits at 25 Mbit/s download and 3 Mbit/s upload.

The reason for this becomes apparent when we look at the use cases. Our survey covered several use cases, but the prevalence of video conferencing in particular has driven bandwidth requirements upwards, especially on the upload side. An HD video stream requires a minimum of 5 Mbit/s upload and download per stream. With more than 25% of Americans now frequently using video conferencing for work and another 21% using it sometimes for work, the bar has effectively been raised.
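To make the arithmetic concrete, here is a minimal Python sketch of how many concurrent HD video calls different connections can sustain. The 5 Mbit/s per-stream figure comes from the discussion above; the two connection profiles are illustrative, not survey data.

```python
# How many concurrent HD video calls can a connection sustain?
# Assumes roughly 5 Mbit/s per HD stream in each direction (figure cited above);
# the connection profiles are illustrative assumptions, not survey data.

HD_STREAM_MBPS = 5  # per stream, in each direction

connections = {
    "FCC definition (25/3)": (25, 3),
    "Survey median (50/5)":  (50, 5),
}

for name, (down_mbps, up_mbps) in connections.items():
    # Upload is usually the binding constraint for video conferencing.
    max_calls = min(down_mbps // HD_STREAM_MBPS, up_mbps // HD_STREAM_MBPS)
    print(f"{name}: {max_calls} concurrent HD call(s)")
```

On the FCC’s 25/3 definition, the 3 Mbit/s upload cannot sustain even a single 5 Mbit/s HD stream, which goes a long way toward explaining why expectations have moved past the official threshold.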

While the lack of widely available broadband is a significant hurdle for cities and towns trying to attract new residents, it is almost an outright disqualifier for individual housing options: 77.5% of respondents would not move to a residence, whether a house or an apartment, that does not have broadband. This makes the availability of broadband one of the key selection criteria when choosing a new residence. When almost half of the population has to be on video conferences sometimes or frequently, having broadband becomes a job requirement. The pandemic, for good and bad, has turned our homes into places of work, with the IT and connectivity needs that were traditionally reserved for the office. These are just some of the highlights of the new Recon Analytics report “Broadband 2020.”

The results of the report reinforce the data from the FCC’s 2020 Broadband Deployment Report, which represents the most recent government data on the topic and documents the progress the industry made from 2014 to 2018.

As of 2018, 94.4% of Americans have access to broadband as the FCC defines it: 25 Mbit/s download, 3 Mbit/s upload (25/3). In urban areas the figure is 98.5%, but in rural areas and on tribal lands availability is significantly lower: 77.7% of Americans in rural areas and 72.3% on tribal lands have access to 25/3 broadband. At higher tiers, access in urban areas drops only slightly, but much more significantly in rural areas and on tribal lands. At the 250/25 Mbit/s tier, 94% of Americans in urban areas have access, a drop of 4.5 percentage points from the 25/3 level. In rural areas, 51.6% of Americans have access to 250/25, 26.1 percentage points less than at 25/3. On tribal lands, 45.5% have access to 250/25, 26.8 percentage points less than at 25/3.
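The tier gaps can be recomputed directly from the figures above; a quick Python sketch:

```python
# Recomputing the tier gaps from the FCC 2020 Broadband Deployment Report
# figures quoted above (share of population with access, as of 2018).

access = {                      # (25/3 tier, 250/25 tier), in percent
    "Urban":        (98.5, 94.0),
    "Rural":        (77.7, 51.6),
    "Tribal lands": (72.3, 45.5),
}

for area, (at_25_3, at_250_25) in access.items():
    drop = at_25_3 - at_250_25  # drop in percentage points
    print(f"{area}: {at_25_3}% at 25/3 vs. {at_250_25}% at 250/25 "
          f"({drop:.1f} percentage points lower)")
```

The drop from the 25/3 tier to the 250/25 tier is roughly six times larger in rural areas and on tribal lands than in urban areas.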

The numbers make it clear that there is still more than enough to do in urban, rural and tribal areas to provide connectivity for essential tasks. As it looks increasingly unlikely that children in every school district will be able to go back to school, we need to ensure that every child in the United States can access the internet to participate in school and classroom work. If even one child cannot participate, the progress and grades for the entire class are not counted. While fixed broadband deployment is a time-consuming endeavor, mobile broadband can and should close the homework gap. T-Mobile has announced that, as part of its merger commitments, it will deliver mobile broadband to 10 million households; with the new school year starting in a few weeks, there is little time to turn this promise into a meaningful difference. The other mobile operators, in conjunction with the FCC and federal funding, should seize the opportunity and close the homework gap as quickly as possible.

In order to recover as quickly as possible from the current economic slump, we should put money where it has the biggest impact. Different technologies can achieve the same goals but have strengths and weaknesses in different areas. This means that any funding has to be technology-agnostic and look at performance characteristics. The United States has wisely always used performance characteristics such as download and upload speed as well as latency as its selection criteria, rather than tying funding to a particular technology, whether fiber, hybrid fiber coax, VDSL, satellite or whatever generation of wireless standard.
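As an illustration of what a performance-based, technology-agnostic screen might look like in practice, here is a minimal sketch; the thresholds and the applicant entries are hypothetical and not drawn from any actual funding program.

```python
# Hypothetical performance-based eligibility screen: the decision looks only
# at measured performance, never at the technology label. Thresholds and
# applicant figures are assumptions for illustration.

THRESHOLDS = {"down_mbps": 25, "up_mbps": 3, "latency_ms": 100}

def meets_performance_bar(down_mbps: float, up_mbps: float, latency_ms: float) -> bool:
    return (down_mbps >= THRESHOLDS["down_mbps"]
            and up_mbps >= THRESHOLDS["up_mbps"]
            and latency_ms <= THRESHOLDS["latency_ms"])

# (technology, download Mbit/s, upload Mbit/s, latency ms) -- hypothetical entries
applicants = [
    ("fiber",                   940, 880,   5),
    ("hybrid fiber coax",       300,  30,  20),
    ("VDSL",                     40,  10,  25),
    ("geostationary satellite", 100,   3, 600),
    ("fixed wireless",           50,  10,  40),
]

for tech, down, up, lat in applicants:
    verdict = "eligible" if meets_performance_bar(down, up, lat) else "not eligible"
    print(f"{tech}: {verdict}")
```

The point of the design is that the technology label never enters the decision; only measured download speed, upload speed and latency do, which is why the high-latency entry fails even though its download speed is ample.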

If you would like to buy the underlying report, please give us a call at 617.823.3363.

The 4G Decade – Quantifying the Benefits

4G wireless networks powered remarkable economic growth and transformed the way Americans live and work, according to a new report by Recon Analytics and CTIA, the wireless industry association. The study shows the powerful impact of wireless on our economy as American providers begin to roll out next-generation 5G networks, which will create a new 5G economy over the next 10 years.

The report’s key findings include that nearly 10% of the GDP increase of the entire U.S. economy from 2011 to 2019 was due to the growth of the U.S. wireless industry, and that U.S. 4G networks support 20 million jobs, drove nearly $700 billion in economic contribution last year alone, and save consumers $130 billion each year.

“Our 4G success didn’t happen overnight: investment dollar by investment dollar, cell site by cell site, America’s wireless industry brought the benefits of high-speed mobile broadband to communities across America, creating jobs, powering economic growth and spurring innovations that make our lives better,” said Meredith Attwell Baker, CTIA President and CEO. “Over the next decade, our emerging 5G economy will unleash even greater consumer benefits and maintain America’s position as the world’s innovation hub.”

Today, 4G networks are widely available and there are more 4G subscriptions in the United States than people. As the study shows, however, this was the result of ten years of gradual network improvements, industry investment, spectrum auctions and innovation.

Additional highlights include:

  • Between 2011 and 2019, U.S. wireless industry GDP grew 253%. In 2019, the industry contributed $690.5 billion to U.S. GDP, which would make America’s wireless industry the world’s 21st largest economy.
  • At the start of the decade, the wireless industry enabled 3.7 million U.S. jobs. By the end of the 4G decade, wireless-enabled jobs grew to 20.4 million—one out of every six U.S. jobs, which makes wireless the largest job contributor across all industries.
  • In 2010, an unlimited data, talk and text plan cost $113.87 on average for one line. By 2019, that same plan cost $64.95. That means U.S. subscribers now save $130 billion annually in wireless plan costs, without considering the added value of faster speeds, superior network availability and more powerful mobile devices available today.
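A rough sanity check of the annual savings figure can be run from the plan prices in the last bullet. The line count used below is an assumption for illustration only, and the report’s own methodology may well differ.

```python
# Rough sanity check of the annual savings figure. The plan prices come from
# the report's numbers above; the line count is an ASSUMPTION for illustration
# only, and the report's own methodology may differ.

price_2010 = 113.87    # unlimited plan, one line, per month (2010)
price_2019 = 64.95     # the same plan, per month (2019)
ASSUMED_LINES = 220e6  # hypothetical number of comparable lines

monthly_saving_per_line = price_2010 - price_2019              # ≈ $48.92
annual_savings = monthly_saving_per_line * 12 * ASSUMED_LINES  # total per year

print(f"≈ ${annual_savings / 1e9:.0f} billion per year")       # prints ≈ $129 billion
```

Under that assumption, the back-of-envelope lands in the same neighborhood as the reported $130 billion.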

“The report’s findings make clear that 4G’s impact in the U.S. can be felt across every meaningful measure, from job growth and network speeds to data traffic and GDP contribution,” said Roger Entner, Analyst and Founder of Recon Analytics. “The trajectory of U.S. 4G development should serve as a guide to consider—and to enable—the full transformational power of the coming 5G decade.”

Download this new report from Recon Analytics and CTIA.

Tales of two continents and the Internet During COVID-19

A few weeks ago, EU Commissioner Thierry Breton made headlines when he asked Netflix, Google’s YouTube and Disney to voluntarily reduce their video quality from High Definition to Standard Definition in order to “secure Internet access for all.” Is this an EU bureaucrat detached from reality, or is there something more behind it? What most headlines did not report is that Thierry Breton is the former CEO of France Telecom, now Orange, the 10th largest telecommunications provider in the world. By moving from HD to SD, the data speed needed to support streaming video declines by 80%, from roughly 5 Mbit/s to 1 Mbit/s. To quote the eternal wisdom of Depeche Mode: everything counts in large amounts, especially when you multiply the reduction across the EU’s roughly 200 million households. If, at the peak hour, half of them, roughly 100 million households, are watching streaming video and all of them use SD instead of HD, then peak edge network load goes down by 400 million Mbit/s, or 400 Tbit/s.
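The arithmetic is simple enough to reproduce; here is a minimal sketch using the per-stream rates and household figures cited above:

```python
# Reproducing the back-of-envelope above: the peak-hour saving if half of the
# EU's ~200 million households stream in SD instead of HD.

HD_MBPS, SD_MBPS = 5, 1        # per-stream rates cited above
streaming_households = 100e6   # half of roughly 200 million EU households

saving_mbps = (HD_MBPS - SD_MBPS) * streaming_households  # 400 million Mbit/s
saving_tbps = saving_mbps / 1e6                           # 1 Tbit/s = 1,000,000 Mbit/s

print(f"Peak edge-network load reduction: {saving_tbps:.0f} Tbit/s")  # 400 Tbit/s
```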

Europe’s largest internet exchange, DE-CIX, publishes its usage and performance data in real time for all of its internet exchange points. Below is the five-year traffic graph, as of April 21, 2020, for Frankfurt, the world’s largest internet exchange point:

The impact of the Covid-19 quarantine is quite visible at the right edge of the graph. Peak usage went up by more than 50%, from around 5.8 Tbit/s to 9.1 Tbit/s, which is quite an increase and could cause alarm until you know that peak capacity is 58.4 Tbit/s. Commissioner Breton’s concern therefore cannot be that the core internet backbone is in danger of breaking down; since the core network is holding up well, it has to be the edge network.

We know from the experience in the United States that fiber and cable networks, which provide speeds from tens of Mbit/s up to 1,000 Mbit/s, are holding up well as traffic has increased. The problem arises with DSL networks, a technology that delivers data connections of a few Mbit/s over copper telephone wires and often supports only 15 Mbit/s or less, even over short distances from a central office. Next-generation VDSL can provide up to 200 Mbit/s, but only at distances of less than 200 yards from a central office. The problem is that most homes are far more than 200 yards from a central office, and speeds fall off dramatically with distance.

American telecommunications providers have invested heavily in moving beyond DSL and continue to invest heavily to expand their broadband offerings. Congress and the Federal Communications Commission have dedicated billions more to improve access for every American at every point in the network, from the last mile to the radio access network. This has not been the case in Europe.

A good proxy for the speed and maturity of a country’s broadband infrastructure is how much money carriers have cumulatively invested in its networks and technologies.

The OECD identified $944 billion invested in EU telecommunications networks from 2002 to 2018, improving the connectivity of the EU’s 527 million citizens. Over the same period, the OECD reports that the US invested $1.323 trillion in its telecom networks, covering 320 million Americans, 90% of whom have access to fixed broadband internet service. From 2002 to 2018, the US accounted for 42% of telecom investment across all 37 OECD member states.
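Dividing those cumulative totals by the populations quoted above gives a sense of the gap on a per-person basis:

```python
# Cumulative 2002-2018 telecom investment per person, using the OECD totals
# and population figures quoted above.

eu_investment, eu_population = 944e9, 527e6
us_investment, us_population = 1.323e12, 320e6

print(f"EU: ${eu_investment / eu_population:,.0f} per person")  # ≈ $1,791
print(f"US: ${us_investment / us_population:,.0f} per person")  # ≈ $4,134
```

On a cumulative basis, that is roughly $1,800 per person in the EU versus roughly $4,100 in the US, consistent with the annual per-person figures below.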

[Chart: Telecom investment per person, United States vs. EU]

Source: OECD and UN Population Estimates, 2020 (https://www.oecd.org/sti/broadband/9b.Investment.xls)

When looking at telecom investment per person in the US and the EU, the difference is stark.

Consistently, more than $200 per person is invested to connect people in the United States. In 2017 and 2018, the two most recent years available, American telecom companies invested $291 and $290 per person, respectively. The average for the EU4 (Germany, France, Italy and Spain) was $150, about half of what is spent in the U.S. The spend in countries outside the big four has been even lower.

In the Czech Republic, only $69 per person is invested in telecommunications infrastructure; in Estonia, $70; and in Portugal, $73. Little wonder, then, that Commissioner Breton called Netflix, Google, and Disney to ask them to throttle their traffic so that the maximum number of EU citizens could keep access to the Internet. In the US, by contrast, spectrum policy, efforts to speed the deployment of mobile and fixed infrastructure, and a more evolved, lighter-touch regulatory framework have produced a far superior broadband infrastructure compared to the EU.