Apr 2, 2023

America’s AI Ultimatum: Forge Ahead or Fall Behind

This week an open letter signed by over 2,000 AI luminaries called for an immediate six-month moratorium on training giant AI systems. We view the letter as a call to arms rather than a cease fire. As a matter of national interest, America must accelerate AI development to secure its lead in artificial intelligence and develop trustworthy AI systems.

By Paul Asel and Chappy Asel

Source: Midjourney V5

Sometimes little happens in a decade. Sometimes a decade happens in a month.

ChatGPT has taken the world by storm, heralding the prospects and perils of generative artificial intelligence to a mass audience for the first time. ChatGPT reached one million users in five days, fifteen times faster than any prior app, and now over 100 million people have experienced the weird and wondrous musings of generative AI.

The frenetic pace of development accelerated this month as OpenAI upgraded ChatGPT with the release of GPT-4 and ChatGPT Plugins. Google announced Bard, its PaLM API, and integration with Google Workspace. Microsoft integrated its chatbot into many of its existing products, including the release of Bing Chat to all users. A plethora of startups such as Anthropic, Midjourney, and Runway ML have rushed their latest models to market. Open-source models such as Stanford's Alpaca, Databricks' Dolly, and Cerebras-GPT are quickly proliferating in the market. The progression of images by Midjourney in the following graphic illustrates the astounding speed with which generative AI systems are progressing.

Recent progress has prompted both bold claims and alarm. Highlighting the significance of the upgrade from GPT-3, Microsoft Research claims GPT-4 shows ‘sparks of artificial general intelligence.’ A report by Goldman Sachs this week predicts that generative AI could affect 300 million jobs by replacing up to 50% of their workload and could increase GDP by up to 7% over the next ten years.

Yet the Future of Life Institute published an open letter this week, signed by over 2,000 AI luminaries, calling for an immediate six-month moratorium on training giant AI systems more powerful than GPT-4. Chatbots have aroused political ire, with claims of “woke AI” from right-wing activists, while schools have scrambled to enact policies for student use of chatbots in homework assignments. The Supreme Court has heard arguments in Gonzalez v. Google that could fundamentally alter the Internet by making online sites liable for content they host. Google’s hurried introduction of Bard cost the company $100 billion in market value when the chatbot shared inaccurate information in a promotional video. “This is going to be the content moderation wars on steroids,” observed Stanford law professor Evelyn Douek, an expert in online speech.

A Misguided Giant AI Moratorium

While consternation is understandable, calls for a moratorium are misguided. Instead, generative AI developers must accelerate their work to fix the premature versions of models that are already widely available. An effective response will require a coordinated public and private effort to augment and rein in early generative AI releases and the data infrastructure on which they are built.

The call for a moratorium mirrors public responses to paradigm shifts over the centuries. Copernican heliocentrism drew the ire of the Roman Inquisition in the sixteenth century. Church elders initially opposed Ben Franklin’s seemingly innocuous lightning rod, fearing it interfered with the ‘artillery of heaven.’ New media channels from radio to television to social media have aroused calls for bans. The recent Supreme Court debate over Section 230 of the 1996 Communications Decency Act illustrates well-founded caution about how technology affects media, current events, and public discourse. As Thomas Kuhn observed in The Structure of Scientific Revolutions, vehement opposition is often a sign that a paradigm shift is underway.

Future of Life Institute president and MIT professor Max Tegmark is a prescient scholar of artificial intelligence. Shortly after Google published its seminal “Attention is all you need” research paper, and just prior to the launch of GPT-1, Tegmark described a scenario eerily like ChatGPT in Life 3.0 to illustrate the public policy challenges raised by the prospect of artificial general intelligence[1] (AGI). Joining a long list of technology literati who have highlighted the risks of machine intelligence, Tegmark called for governments to adopt early public policy measures to harness the potential of AGI while mitigating dystopian downside risks. His early warning unheeded, Tegmark now reminds us that the risks of fake media, echo chambers, and misinformation observed on the Internet may be perniciously magnified by generative AI.

As a signatory to the Future of Life letter, Elon Musk appreciates how damaging premature releases of consequential technology can be. Tesla Autopilot was involved in 273 reported accidents in 2021, more than any other automaker’s autonomous driving technology. Yet Musk’s call for a moratorium on AI development seems disingenuous, as Tesla continues to market Autopilot despite its sketchy track record.

Yet there are at least three reasons to oppose a moratorium on generative AI.

First, the letter is a call for inaction without a prescription for action. It does not explain what should be done, or by whom, during a six-month cease fire, and it suggests the moratorium could well extend beyond six months. If the moratorium is extended, the letter offers no guideposts to indicate when activities may safely resume. We risk entering a cave from which we may never emerge.

Second, cease fires are rarely uniformly observed. There are too many open-source generative AI models available to effectively monitor a moratorium. While we may agree that large language models were prematurely released, a moratorium keeps faulty versions in the wild. A cease fire may be honored by the white hats but ignored by the black hats.

Third, the rushed release of new machine learning models has unleashed an arms race among competing countries as well as rival firms. The global race for AI leadership is too important for the U.S. to adopt a unilateral cease fire. Vladimir Putin has said whoever leads in AI will “rule the world.” The Russian invasion of Ukraine has been called the first war with large-scale cyber operations. Xi Jinping has publicly vowed that China will lead the world in AI by 2030. China has accelerated AI research and has recently overtaken the U.S. in AI patent applications and AI journal citations.

The United States has responded with the National AI Initiative Act of 2020 to accelerate AI research and ensure U.S. leadership in artificial intelligence, and it reinforced this initiative with the U.S. Innovation and Competition Act of 2021. We believe pausing advanced AI research is antithetical to these stated objectives. Instead, we must accelerate AI development through a coordinated effort to build trustworthy artificial intelligence.

Accelerating AI Development

The open letter from the Future of Life Institute may be better understood as a call to arms than a cease fire. With his call six years ago in Life 3.0 for prescient AI policy largely ignored, Max Tegmark is again calling for oversight of technology developments that will have profound implications for global geopolitics and national affairs. OpenAI — creator of ChatGPT, GPT-4, and DALL-E 2 — was founded to develop AGI safely and responsibly. Current generative AI systems share Tegmark’s and OpenAI’s positive intent, yet they are subject to pernicious manipulation, and there is no assurance that new AI entrants will adhere to OpenAI’s laudable mission.

We share both the optimism for generative AI’s potential and concern for how it may be misused. OpenAI CEO Sam Altman has called for coordination among firms leading AGI efforts and an effective regulatory framework governing use of artificial intelligence. Altman has explained that OpenAI’s incremental approach toward AGI accelerates development with a “tight feedback loop for rapid learning and careful iteration” while giving policymakers time to see what is happening, adapt regulation, and “for people collectively to figure out what they want while the stakes are relatively low.” We agree with this approach.

AGI is too important and influential to be left entirely to the private sector. Though rarely acknowledged, the U.S. has a long history of fruitful technology collaboration between the public and private sectors. U.S. military leadership and the race to space were a marriage of government initiative, funded research, and private-sector ingenuity. The U.S. government funded over 90% of semiconductor research and development into the 1970s, until volume production reduced prices to levels appropriate for commercial use. The Internet emerged through DARPA-funded research during the Cold War.

American innovation is a catalyst for growth and the crown jewel of our economic system. It thrives when nurtured yet has faltered when ignored. Europe surged to the lead in wireless after the European Union mandated the GSM standard across its member states in 1987, while U.S. firms experimented with and promoted competing standards. The U.S. today is left without a domestic wireless network equipment provider. The absence of standards for drone use in public airspace during the past decade has bequeathed global leadership in drone technology to China. Our fragmentary, state-by-state policy governing autonomous vehicle technology similarly risks ceding global market leadership.

A call for an AI moratorium in the absence of clear standards risks a loss of leadership in this critical technology. Let us not repeat mistakes made in wireless technology and drones. Rather, national interests are best served by accelerating AI development with a collaborative effort that treats data as infrastructure. Coordinated efforts to build our data infrastructure should ensure data integrity, data privacy, and shared data access for appropriate purposes.

The OECD has developed a framework describing the lifecycle and key dimensions of an AI system. Our recommendations cover the right half of the diagram, which we believe is the most salient at this formative stage in the development of AI.

National Institute of Standards and Technology: modified from the OECD Framework for the Classification of AI Systems (2022). The two inner circles show key dimensions of the AI system, and the outer circle shows AI lifecycle stages.

Data as Infrastructure

The gestalt users experience with chatbot responses reflects a synthesis of the bounty of underlying data on which they are trained. Google, Microsoft, Amazon, and others oversee vast pools of data — much of it proprietary — that provide significant advantages over aspiring startups and open-source initiatives. An omniscient AGI system could intelligently access and process all the world’s data, yet noisy, incomplete data limits current AI systems.

AI is a winner-take-most paradigm. As machine learning algorithms become more capable and pervasive, the race to AGI will be determined in large part by who has the largest data lake. Companies like OpenAI recognize this and have developed data flywheels, compounding returns by using past chatbot conversations as training data for models in development. Microsoft’s $10 billion investment gives OpenAI an alliance with a tech major and enhances scalability through access to Microsoft Azure.

Yet no private company can match the data-gathering capabilities of a nation state like China. In AI Superpowers, Kai-Fu Lee describes China as the ‘Saudi Arabia of data.’ Under the National Intelligence Law of 2017, China requires its citizens and companies to support its intelligence-gathering operations. While acknowledging the United States’ superior technological expertise in the near term, Kai-Fu Lee forecasts that China will ultimately win the AI war through its superior ability to amass, centralize, and leverage data and compute resources.

Data is the new infrastructure. The U.S. Infrastructure Investment and Jobs Act contemplates investment in roads, airports, railways, broadband, clean energy, and clean water, yet it overlooks the most critical and strategic asset in the race to AGI. Data should be a central part of any infrastructure plan.

Data as a Utility

According to White & Case, a leading Washington-based law firm, “there is no single data protection legislation in the United States. Rather, a jumble of hundreds of laws at both the federal and state levels serve to protect the personal data of U.S. residents.” Many of the governing regulations predate the Internet.

Fragmentary regulation permits distributed data, which exacerbates cybersecurity threats and privacy concerns. Though the U.S. government cybersecurity budget is $15.6 billion and U.S. companies spend over $20 billion on cybersecurity insurance, the federal government lacks a cohesive data strategy.

In the U.S., private data has become a public resource. As Harvard professor emerita Shoshana Zuboff described in The Age of Surveillance Capitalism, companies routinely trawl citizens’ data in “an economic system built on the secret extraction and manipulation of human data.” Google and Facebook hoover up personal data from disparate sources and glean over $400 billion annually — more than $1,000 for every U.S. citizen — in advertising revenue from insights we inadvertently divulge. These companies know more about us than our most intimate friends. They may well understand us better than we understand ourselves.

As AI becomes more powerful, safeguarding privacy is more pressing. Artificial intelligence escalates the ability to extract and exploit private data. That the global ransomware market is projected to grow from $20 billion today to $74 billion by 2030 underscores our increasing vulnerability to nefarious actors at home and abroad.

In the absence of a coherent U.S. data policy, the European Union has emerged as the global champion of data privacy with the General Data Protection Regulation (GDPR), which took effect in 2018. In the past three years, California, Maine, Tennessee, Utah, and Virginia have enacted state laws that adopt GDPR guidelines to varying degrees. While admirable, these laws add bewildering complexity and increase compliance costs for companies operating nationally and globally. The U.S. should bolster and simplify its data privacy laws with a federal policy akin to GDPR in Europe and the laws in these five states.

China has taken a different, techno-utilitarian approach. The National Intelligence Law enacted in 2017 requires all organizations and citizens to support, assist, and cooperate with Chinese national intelligence efforts, including companies operating abroad. Though China quietly barred Google and Facebook from operating in China over a decade ago, the U.S. has hesitated to secure private citizens’ data from Chinese companies such as TikTok.

Much as the White House has focused on clean energy and clean water, it should prioritize “clean data.” Statista estimates we will generate 120 zettabytes of data globally in 2023, more than sixty times the data exhaust produced in 2010. The top ten data center operators run over 1,250 facilities around the world; the two largest are in China. The Citadel Campus in Nevada is the world’s third-largest data center and the largest in North America. When its planned expansion is complete, the Citadel Campus should have capacity for at least 150 exabytes of data — 7,000 times larger than the Library of Congress, which holds nearly one billion data files. A considerable corpus of personal information about United States citizens is stored at the Citadel Campus and at similar facilities around the world.

The Citadel Campus is the largest data center in the U.S., with nearly 8 million square feet of data storage space and plans to expand to 17.4 million square feet, which would make it the largest data center in the world.

The U.S. should regulate data as a utility. A common data framework should govern all data gleaned from U.S. citizens or activities, regardless of where the data are stored. Personal data should be owned by the individual, not by whoever can gather it. With the express permission of the individual, personal data may be anonymized for general purposes or used to enhance personalized services. Anonymized data should be aggregated into large data lakes with common data standards and cybersecurity safeguards. Such data would be available for authorized general use, including for artificial intelligence purposes.

A common ‘data as a utility’ framework would reduce cybersecurity risks and associated costs. It would improve privacy standards, empowering citizens with insight into the use of their personal data. It would also spur innovation and accelerate artificial intelligence initiatives by offering companies large and small access to a vast pool of anonymized, cleansed data — larger than any single repository held today.
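To make the idea concrete, here is a minimal, purely illustrative Python sketch of the consent-and-anonymization step described above. The field names, the salted-hash pseudonymization, and the age banding are our own assumptions for illustration, not part of any proposed standard.

```python
# Illustrative sketch: consent-gated anonymization before data enters a shared data lake.
# All field names and the salted-hash scheme are hypothetical, not a prescribed standard.
import hashlib
import os
from typing import Optional

SALT = os.urandom(16)  # per-deployment secret so identifiers cannot be reversed by lookup

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()

def prepare_for_data_lake(record: dict) -> Optional[dict]:
    """Strip or hash personal fields, and only if the individual has given express permission."""
    if not record.get("consented", False):
        return None  # no permission, so the record never leaves the owner's control
    return {
        "user_id": pseudonymize(record["email"]),       # stable key without exposing identity
        "age_band": f"{(record['age'] // 10) * 10}s",    # generalize quasi-identifiers
        "interests": record["interests"],                # non-identifying attributes pass through
    }

print(prepare_for_data_lake(
    {"consented": True, "email": "jane@example.com", "age": 34, "interests": ["cycling"]}
))
```

In this sketch, records without consent never enter the pool, and records with consent carry only pseudonymized and generalized fields, which is the property a common data standard would need to guarantee at scale.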

Source: Midjourney V5

AI Model: Using Narrow AI to Moderate General AI

To fulfill their stated intent, OpenAI and others must develop trusted AI models; current models have aroused skepticism due to the unexpected, intolerant, and seemingly biased results they occasionally produce. Yet we believe the sponsors of ChatGPT, Bing Chat, and Bard will resolve the current limitations of their minimum viable products more readily when exposed to myriad user inputs than when sequestered in a lab under a moratorium, and they are more likely to assuage public concern by coordinating efforts than by searching for independent solutions.

AI systems are inherently socio-technical in that they are influenced by societal dynamics and human behaviors. As models are trained on evolving data, their output can change significantly and unexpectedly, affecting system functionality and trust. By employing reinforcement learning from human feedback (RLHF), chatbots such as ChatGPT are inadvertently prone to codify and propagate human biases.
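To see how rater preferences become the optimization target, consider a minimal sketch of the reward-modeling step used in RLHF, written in PyTorch on synthetic data. This is not OpenAI's implementation; the dimensions and data are toy assumptions. The point is that whatever systematic preferences human raters hold are distilled into the reward signal that later fine-tunes the chatbot.

```python
# Minimal, illustrative sketch of RLHF reward modeling: a reward model is trained on human
# raters' pairwise preferences, so rater biases become the target the chatbot is tuned toward.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; higher means 'more preferred by raters'."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the rater-preferred response above the rejected one.
    return -torch.log(torch.sigmoid(reward_chosen - reward_rejected)).mean()

# One toy training step on synthetic "rater preference" data.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen = torch.randn(8, 64)    # embeddings of responses raters preferred
rejected = torch.randn(8, 64)  # embeddings of responses raters rejected
optimizer.zero_grad()
loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
# The trained reward model then guides reinforcement learning of the chatbot,
# which is how systematic rater biases can be codified and amplified.
```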

Generative models produce media content, which suggests that regulatory standards for media companies may apply to these vendors. Yet as Section 230 demonstrates, moderating artificial intelligence is tricky as designers influence behavior but do not control the content itself. Much as a dog trainer may influence but not control a puppy, AI designers may establish guardrails but cannot fully anticipate how chatbots will behave. Human influence attenuates as technology moves from AI to AGI.

Eliminating aberrant results altogether would be counterproductive. Doing so with a fully specified, rules-based overlay may neuter the dynamism of an AI system. A rules-based system will never anticipate every corner case and would result in the pockmarked, patch-fixing approach characteristic of the cybersecurity industry. Top-down, rules-based approaches also raise questions about who would adjudicate the bounds of acceptability. In the wrong hands, top-down approaches may appear arbitrary, undermining trust in our democratic tradition. Instead, companies have a shared interest in developing standards for moderating content and should coordinate their efforts.

Narrow AI may moderate general AI better than a rules-based approach, both in managing boundary conditions and in moderating parameters to prevent manipulative distortions. Narrow AI approaches apply augmented intelligence, using human input to describe the principles that govern boundary conditions and the degree to which parameters may shift in response to classes of user inputs. A minimal sketch of this gating pattern follows.
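The Python snippet below is a purely illustrative sketch, not anyone's production system: a small, purpose-built classifier (the narrow AI) screens a general model's candidate reply before it reaches the user. The training examples, threshold, and function names are hypothetical.

```python
# Illustrative sketch of "narrow AI moderating general AI": a tiny policy-violation
# classifier gates a chatbot's candidate reply. Training data and threshold are toys.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Narrow AI: train a small classifier on labeled examples of acceptable vs. violating text.
examples = [
    ("Here is how to reset your password.", 0),
    ("The capital of France is Paris.", 0),
    ("Here is how to build an untraceable weapon.", 1),
    ("I will help you harass that person.", 1),
]
texts, labels = zip(*examples)
moderator = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderator.fit(texts, labels)

POLICY_THRESHOLD = 0.5  # a human-set principle governing how strictly to filter

def moderated_reply(candidate_reply: str) -> str:
    """Return the general model's reply only if the narrow moderator clears it."""
    p_violation = moderator.predict_proba([candidate_reply])[0][1]
    if p_violation > POLICY_THRESHOLD:
        return "I'm sorry, I can't help with that."
    return candidate_reply

print(moderated_reply("The capital of France is Paris."))
```

The design choice here is that humans specify principles (the labeled examples and the threshold) rather than enumerating rules, and the learned moderator generalizes those principles to inputs the rules never anticipated.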

Applying a black box system to govern another black box system is not a fail-safe solution, as it may initially compound unexpected outcomes. Human oversight is still required, but these companion AI systems will better anticipate and moderate corner cases over time, reducing the need for human intervention as the systems mature.

Concluding Thoughts

Artificial intelligence is a paradigm shift that alters our common conceptions of fundamental American values. How does free speech apply to generative AI? How do we rethink privacy in a world in which data sensors have infiltrated our most intimate chambers and we leave data exhaust with every step and breath? How may artificial intelligence impact civil liberties and the uniquely American view of rugged individualism? Do large language model purveyors bear the same responsibility for their output as media companies do under Section 230?

Data is intensely private. Yet in an AI arms race, data is also a public good. Our data-as-infrastructure proposal, including ‘data as a utility,’ requires a higher degree of coordination, oversight, and data moderation. We believe data as a utility can be implemented in a way that improves privacy, confidentiality, and cybersecurity. An overarching data strategy, including adoption of GDPR-oriented policy, would reinforce privacy and lower the cost of doing business relative to our current fragmentary data system.

The energy and telecom industries are precedents for data as a utility. While managing data has distinctive features, these industries can serve as a guide to best practices and pitfalls to avoid. Our proposal differs from a traditional utility model in that it leverages existing data infrastructure and retains ownership by the current industry leaders. At the same time, it aggregates and manages data in a centralized way to maximize the benefits for privacy, confidentiality, cybersecurity, and AI compute capacity.

Of equal importance, the data utility and narrow AI modeling proposals accelerate development and help retain AI leadership, consistent with the objectives of the National AI Initiative Act of 2020. We see the open letter from the Future of Life Institute as a call to arms rather than a cease fire. It is a call for collective action to raise public policy awareness of AI risks.

Our ultimate national interest is to accelerate AI progress so that we may forge ahead rather than fall behind.

Background on the Authors:

Paul Asel

A 35-year venture capital industry veteran, Paul is co-founder and Managing Member of NGP Capital, a global venture firm with $1.7 billion in assets under management and offices in Europe, China, and the United States. Paul has a successful investment track record spanning 25 years. He has realized over 20 successful exits, including five IPOs and four exits exceeding $1 billion, of which UCWeb and Ganji are two of the largest technology acquisitions in China. Previously, Paul led technology investments in Southeast Asia at the International Finance Corporation. He received an MBA from Stanford University and a BA from Dartmouth College.

Paul is a member of the Global Corporate Venturing Leadership Society, an Advisory Committee member at the National Venture Capital Association, and an Adjunct Professor at the George Mason Schar School of Policy and Government. He is co-author of Upward Bound: Lessons of How Nine Leaders Achieved their Summits and the De Gruyter Handbook of Entrepreneurial Finance. He has published in Barron’s, CB Insights, Forbes, Global Corporate Venturing, The Journal of Private Equity, Knowledge@Wharton, Stanford Business Magazine, TechCrunch, and Venture Capital Journal. His commentary has also appeared in Business Standard, Forbes, Fortune, Reuters, and The Wall Street Journal, among many others. You can follow Paul on Twitter and Medium at @PaulAsel.

Chappy Asel

Chappy is a successful entrepreneur with an expansive technical and operational background built over 10+ years of experience. He has founded and developed multiple top-rated mobile applications and is currently exploring applications of generative AI in AR/VR at Apple. He is the founder of the GAI Collective, a community of founders, funders, and thought leaders built around a shared passion for generative AI. You can follow Chappy on Twitter and Medium at @ChappyAsel.

[1] Artificial general intelligence is the ability of an intelligent agent to understand or learn any intellectual task that human beings can. In this article, we use AGI to refer to this strong form of artificial intelligence and AI to refer to intelligent systems that work well for narrower use cases.
