Completing the American Revolution

June 23, 2010

"When forced to assume [self-government], we were novices in its science. Its principles and forms had entered little into our former education. We established, however, some, although not all its important principles."
                              --Thomas Jefferson to John Cartwright, 1824.

It was in honor of the 50th anniversary of the American Revolution that Thomas Jefferson wrote, in his last public letter: “May it be to the world, what I believe it will be, (to some parts sooner, to others later, but finally to all,) the signal of arousing men to burst the chains under which monkish ignorance and superstition had persuaded them to bind themselves, and to assume the blessings and security of self-government. . . . . All eyes are opened, or opening, to the rights of man.”

It is noteworthy that Jefferson qualified his pronouncement: “All eyes are opened, or opening, to the rights of man.” Notwithstanding his characteristic optimism, Jefferson shared with his fellow Americans of the founding generation a realization that the Revolution they began in 1776 was still incomplete a half-century later. During his presidency and retirement years, he continued to believe that America had a mission to prove to the world “the degree of freedom and self-government in which a society may venture to leave its individual members.” Indeed, as author of the Declaration of Independence, he was perhaps even more aware of how imperfectly the ideals of that founding document had been realized in American legal and political institutions. And he certainly was aware of the need for future constitutional change – of the need for laws and institutions to advance “with the progress of the human mind.”

Ayn Rand clearly shared with Jefferson and other Founders the hope that America would serve as a model for the rest of the world. She began the conclusion to her March 6, 1974 address to the cadets at the U.S. Military Academy at West Point by saying, “The United States of America is the greatest, the noblest and, in its original founding principles, the only moral country in the history of the world.” Although her magnificent novel, Atlas Shrugged, depicts the United States in decline, at various places throughout the book Rand reminds her readers of the nobility of America’s founding ideals. One of the novel’s principal heroes, Francisco d’Anconia, describes this country as one “built on the supremacy of reason – and for one magnificent century, it redeemed the world.”

Through the uniquely effective mode of communication afforded by fiction-writing, particularly in the form of a novel, Rand presents her readers with a vision of America as it is today as well as a vision of what it could—and should—be. It is a utopian novel, in a sense; but unlike other classic works in that genre, it is not merely a radical critique of the status quo. Here it may be helpful to note the relevance of the symbolism of the Atlantis myth—a lost land populated by heroes—which Rand employs as a key theme throughout the novel. Atlas Shrugged itself is Atlantis: its critique of modern America is presented in terms of the degree to which the nation has fallen short of its founders' vision, which is also Rand's. In this sense, the novel is at once radical and conservative—much like the American Revolution itself, as discussed below. Perhaps this feature of the novel explains both the breadth and the depth of its appeal, at least to American readers: Americans who read Atlas Shrugged sense that the radically different philosophical vision Rand offers in the book is not entirely new but is rather the fulfillment of the Founders' vision, one that somehow had become lost by the last half of the twentieth century.

Rand was also aware that the American Revolution had been incomplete, and this awareness was part of her purpose for writing Atlas Shrugged. As she stated in her essay “For the New Intellectual,” just a few years after the novel’s publication:

    The world crisis of today is a moral crisis--and nothing less than a moral revolution can resolve it: a moral revolution to sanction and complete the political achievement of the American Revolution.

In so identifying the “moral crisis” of today and the “moral revolution” needed to resolve it, Rand was echoing the statements made by two of the principal heroes of Atlas Shrugged, John Galt and Francisco d’Anconia. Indeed, the heroes of the novel may be seen, essentially, as the Patriot leaders of a second American Revolution, to complete the first. Atlas Shrugged is a significant book in many respects; one of its most significant aspects is the way Rand uses the novel to show us not only that the American Revolution was incomplete but also what we must do to complete the Revolution—that is, to complete the unfinished work of 1776 and the hope that it represents to the world. This article discusses the historical background necessary to a full understanding of how the novel accomplishes this purpose.

Part One discusses the truly radical nature of the Revolution: the philosophy of government of America's Founders, who put the rights of the individual first and then attempted to design a system of government that would safeguard rather than destroy those rights. This revolution in the philosophy of government was neither sudden nor rapid. It did not happen on July 4, 1776, with the adoption of the Declaration of Independence, for that was the culmination of a series of events traceable back to the founding of the English colonies in America. Nor was the revolution fully accomplished with the mere declaration of American independence: it required not only the successful waging of the Revolutionary War, but also the successful establishment and maintenance of new constitutions to help safeguard the Founders' vision of limited government.

That vision, however, was quite imperfect; and the Founders' revolution in the philosophy of government was incomplete, as the dramatic growth in the size and pervasiveness of governmental power (at all levels, and particularly the national government) in the twentieth century has so vividly illustrated. The American Revolution was incomplete – and the Founders' carefully devised constitutions failed – because the Founders' generation had no consensus about where exactly to draw the line between individual liberty and the coercive power of law, especially in the realm of economics. In short, they lacked a coherent theory of individual rights. This failure can be explained by two "gaps" in American thought, one in ethics and the other in politics.

Part Two discusses the first aspect in which the American Revolution was incomplete: the non-existent moral revolution. America's Founders compromised the premises on which their individualistic political philosophy rested by continuing to adhere to a profoundly anti-individualistic moral code, rooted in Judeo-Christian religion. Because that anti-individualistic moral code remained not only dominant but also virtually unchallenged in early American culture and intellectual thought, Americans continued to regard capitalism, money, and the profit motive as base, immoral, and even downright evil.

Part Three explores the second aspect in which the American Revolution was incomplete: the incomplete revolution in political thought and the law. Notwithstanding the Founders’ efforts to “Americanize” their political and legal systems, many ideas and institutions inherited from England—from a feudal, paternalistic society that by the eighteenth century had only partially shifted to a capitalist, individualistic society—persisted in early American politics and law. This section focuses on two important illustrations of the persistence of Old World, paternalistic, anti-capitalist or anti-individualist notions in American politics and the law: the concepts of the so-called "public interest" and of "monopoly." These two concepts lie at the heart of government regulation of businesses "affected with the public interest" and the antitrust laws—the regulations and laws which today continue to severely limit the freedom of American businessmen and which formed the real-world inspiration for the horror stories Rand presents in Atlas Shrugged.

Part Three also briefly discusses the failure of American constitutional law to safeguard individual rights against the rise of the twentieth century regulatory and welfare state. The so-called “New Deal revolution” on the U.S. Supreme Court in the late 1930s marked the modern Court’s failure to enforce the Constitution’s limits on the powers of government and its protection of economic liberty and property rights.

Finally, Part Four briefly discusses what must be done to complete the American Revolution, and the relevance of Atlas Shrugged and of the Objectivist philosophy it presents, to accomplish that end.


The American Revolution was unlike any other great revolution in human history. Some scholars have characterized it as conservative, for—apart from the long, bloody war for independence from Great Britain—it lacked the cataclysmic social upheaval that characterized the later French and Russian revolutions. Yet the changes it brought to American society, governmental institutions, and philosophical thought were profound. Despite its apparent conservatism, the American Revolution was truly radical, in the literal sense of the term. Radical derives from the Latin word radix, meaning "root, base, foundation"; to be radical is to get to the root of the matter. The Revolutionaries of 1776, although influenced by a variety of classical political writings going as far back as Aristotle, managed to transcend much of the dogma of traditional western political thought and to profoundly rethink the origins, purpose, and limitations of government.

America's Founders established—for the first time in the history of the world—a society whose government was founded on recognition of the inherent, natural, and inalienable rights of the individual. They asserted the “self-evident” truths that Thomas Jefferson had stated in the Declaration of Independence: that “all men are created equal” and are endowed with “inherent and inalienable rights” of “life, liberty, & the pursuit of happiness”; that “to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed”; and that “whenever any form of government becomes destructive of these ends, it is the right of the people to alter or to abolish it.”

A good society, the Founders believed, would have few laws.

The Founders institutionalized these principles by establishing written constitutions, founded on "the consent of the governed," and containing various institutional checks on the power of government designed to prevent it from being abused, for the Founders understood that, paradoxically, it was government—which was created to protect, or "secure," individual rights—that poses the greatest danger to them. The reason was the unique nature of political power: that government, alone of all institutions in society, may legitimately use force to achieve its ends. A good society, the Founders believed, would have few laws—laws that were clear to, and respected by, the people. Accordingly, they sought to create a "new science of politics" that not only checked the power of government, through constitutions, but also minimized the role of government (at all levels, but especially the national government) to a few, essential and legitimate functions.

These truly revolutionary changes did not all suddenly happen in 1776, however. The Declaration of Independence was the culmination of a series of events which may be traced back to the founding of the English colonies in North America. "What do we mean by the American Revolution?" John Adams rhetorically asked one of his correspondents, late in life. "The Revolution was in the minds and hearts of the people; a change in their religious sentiments, of their duties and obligations. . . . This radical change in the principles, opinions, sentiments, and affections of the people was the real American Revolution."

Although they considered themselves loyal subjects of the British king, colonial Americans were separated from their Old World countrymen by more than geography. The English colonies in North America each had its own unique history, but all had certain basic features in common. They were settled by people who, for one reason or another, were leaving Europe to find a new life in the wild lands across the Atlantic Ocean; to the settlers, it was quite literally a "New World." Some of the settlers were dissenters from England's established church—both Catholics and radical Protestant nonconformists—and thus came to America for religious freedom, or at least a greater degree of religious freedom than the laws of England permitted. Other settlers came to America seeking wealth: to them, the wilderness across the seas—as to later generations of Americans, the wilderness across the mountains, in the trans-Appalachian West—represented economic opportunity. Like the religious dissenters, those who came to America for economic reasons also sought a greater degree of freedom than was permitted under the stifling paternalism of English law. Whatever their reasons for emigrating to America, the English settlers generally might be regarded as a kind of distillate of people who somehow did not fit – or did not want to fit – in English society.

Significantly, the early colonization of North America coincided with one of the most turbulent time periods in English history: the seventeenth century, a century of revolution, which included not only the English Revolution, or Civil War, in mid-century but also the so-called Glorious Revolution of 1688-89, as well as the unstable years of the early eighteenth century following the Hanoverian succession. Most dramatically, this era witnessed the trial and execution of King Charles I and, for the eleven years of the Commonwealth (1649-60), the transformation of England from a monarchy to a republican form of government. Accompanying the political unrest of the times was a remarkably rich ferment of ideas. Writers such as Thomas Hobbes, Algernon Sidney, and John Locke questioned basic assumptions about the origin, purpose, and structure of government: Why have government at all? Which form of government is best, and why?

The settlers were a people who somehow did not fit – or did not want to fit – in English society.

Some, like Sidney, who was executed for treason in 1683, even lost their lives for their boldness in challenging orthodoxy. Political stability returned with the post-1689 settlement, which established the modern English constitutional system, with powers of the monarch greatly circumscribed and subordinated to those of Parliament. Radical dissent, once started, was not easily stopped, however; and in the eighteenth century new generations of dissenters from mainstream politics – the second and third generations of the "Commonwealthmen," or English radical Whigs, whom historian Caroline Robbins and other scholars have described – found a ready audience for their ideas in a small minority of their fellow Englishmen and in far greater numbers of their countrymen across the Atlantic.

The founding of the English colonies in North America and their evolution into mature political societies also corresponded in time with perhaps the most significant philosophical movement of the modern era, the Enlightenment. The American colonists were also profoundly influenced by the writings of Enlightenment rationalists, whose texts were cited along with those of the English radical Whigs, particularly when Americans argued for the legal recognition of their natural rights. The thinkers of the eighteenth century Scottish Enlightenment—Adam Ferguson, David Hume, Adam Smith, and other lesser writers—also influenced American understanding of the social order and limited government.

The Declaration of Independence itself directly reflected the influence of Enlightenment ideas on the leaders of the American Revolution. In drafting the Declaration, Jefferson employed the language of eighteenth century logic and rhetoric to present the argument for American independence; indeed, the overall argument of the Declaration is in the form of a syllogism—with a major premise, minor premise, and conclusion. Moreover, ideas expressed in the Declaration were given added persuasive power by their adherence to the best contemporary standards of mathematical and scientific demonstration; for example, in calling the key propositions of the major premise "self-evident truths," Jefferson used a term with a precise, technical meaning, which told his audience that they were like the axioms of Newtonian science. The grievances against George III in the main body of the Declaration were not only tyrannical acts which would justify rebellion against a monarch, under established principles of English constitutionalism, but also complaints that the King, in conspiracy with "others" (namely, his ministers and Parliament) had deprived Americans of their natural rights, including economic freedom.

The American Revolution was not quite radical enough.

As historian Gordon Wood has shown, the American Revolution was far more radical than commonly believed. Wood considers the Revolution to be "as radical as any revolution in history" as well as "the most radical and most far-reaching event in American history," altering not only the form of government—by eliminating monarchy and creating republics—but also Americans' view of governmental power. "Most important," he adds, "it made the interests and prosperity of ordinary people – their pursuits of happiness – the goal of society and government."

By rejecting the British monarchical system, America’s founders also rejected the paternalism through which the British system operated in the realms of law and politics. The rejection of paternalism was manifest in many developments in Revolutionary-era society, among them the rise of contracts and even the growing popularity of laissez-faire economics, perhaps best illustrated by the Philadelphia merchants' opposition to price controls in 1777-78. Moreover, Wood adds, "[t]he Revolution did not merely create a political and legal environment conducive to economic expansion; it also released powerful popular entrepreneurial and commercial energies that few realized existed and transformed the economic landscape of the country."

The far-reaching social changes that came into being with the American Revolution also were accompanied by correspondingly significant changes in law and constitutionalism. With independence, the American legal system—and particularly the constitutional system—was free to depart dramatically from its English roots. "We have it in our power to begin the world over again," wrote Thomas Paine, succinctly describing the unprecedented opportunity Americans had after 1776 to frame new forms of government with written constitutions.

The first American constitutions were framed largely by a process of trial and error, as their framers experimented with a variety of devices to check governmental power, both to prevent it from being abused and to safeguard the rights of individuals. As previously noted, the Founders understood the essential paradox of government: that the very institution created to secure individual rights itself posed the greatest danger to them. Influenced by the English radical Whig political tradition, they understood that government, by its very nature – given its monopoly on the legitimate use of force in society—inherently threatened liberty and would abuse its power unless constrained by institutional checks.

Accordingly, they incorporated into the early American constitutions various devices for limiting power and safeguarding against its abuse. These included federalism (the division of powers between the national government and the states), the principle of separation of powers (at each level of government, separating its powers among three distinct and independent functional branches, legislative, executive, and judicial), frequent elections and "rotation in office" (what we call "term limits"), explicit rights guarantees in bills of rights, and the power of the people both to ratify and to amend the constitution.

The framers of the federal Constitution of 1787 benefited from the experience of Congress’s governance under our first national constitution, the Articles of Confederation, as well as the experience of the majority of the states, which had framed state constitutions during the period between 1776 and 1787. Hence, the Constitution of the United States utilized more of these devices for limiting power or safeguarding rights than did the early state constitutions, which were framed at a time when Americans were, in Jefferson’s words, “novices in the science of government.” State constitutions, for example, generally failed to enumerate legislative powers, vesting state legislatures with the broad, loosely-defined regulatory power known as the “police power.” Although most did follow the principle of separation of powers, they generally did not supplement it with checks and balances, as the federal Constitution did. Only in one respect was the federal Constitution lacking—the document as adopted by the Constitutional Convention failed to include a separate bill of rights—but that omission was quickly remedied by the addition of the first ten amendments to the Constitution.

Even with their new constitutions, however, Americans of the early national period struggled to fully implement in politics and law the radical changes resulting from American independence. During the 1790s, the first decade of national government under the new Constitution of the United States, the two-party American political system emerged from Americans’ competing visions about how to “secure” the Revolution. When the opposition party led by Thomas Jefferson and James Madison—their self-described “republican” party—defeated the previously-dominant Federalist Party in the elections of 1800, Jefferson called their victory “the revolution of 1800.” He saw it as a vindication of the American Revolution, “as real a revolution in the principles of our government as that of 1776 was in its form.”

The Federalists had failed to fully grasp the radical promise of the American Revolution, or to fully reject the English paternalistic view of government; their principles, rooted in what Jefferson called the “doctrines of Europe,” emphasized use of the coercive power of government to order society. The Jeffersonian Republicans, in contrast, distrusted political power (even when they wielded it) and emphasized instead the ability of people to govern themselves and of a free-market society to order itself. The Republicans’ political ascendancy after 1801—the Federalists became a permanent minority party at the national level and disappeared altogether by the 1820s, the “era of good feelings”—signaled to Jefferson a magnificent opportunity for America. Its mission, as he frequently noted in his writings during the first two and a half decades of the nineteenth century, was to prove to the world “what is the degree of freedom and self-government in which a society may venture to leave it’s [sic] individual members.”

When the young French aristocrat Alexis de Tocqueville visited the United States in 1831-32, he was so struck by the profound differences between America and Europe that he wrote a book, his famous Democracy in America, to warn his countrymen of the tremendous changes that the American Revolution had wrought. He began the book by noting that among those differences "nothing struck me more forcibly than the general equality of condition among the people." He described the people in America as free, independent individuals who not only held equal rights under the law but also related to one another as social equals—in vivid contrast to his native society, where notwithstanding the egalitarian impulses of the French Revolution, people still thought in terms of rigid social classes. Indeed, he coined the term individualism to describe Americans’ attitude about themselves: “They owe nothing to any man, they expect nothing from any man; they acquire the habit of always considering themselves as standing alone, and they are apt to imagine that their whole destiny is in their own hands.”

America’s Founders indeed had radically transformed traditional ideas about the individual, society, and the role of government; their new nation offered proof to the world that it was possible for people to create, in Jefferson’s words, something “new under the sun.” Notwithstanding the profound changes they had made in politics and law—particularly with the novelty of written constitutions, with various devices for limiting governmental power and keeping it accountable to the people—the Founders’ revolution was not complete. In many important ways, they failed to fully transcend the Old World from which they had rebelled. Not only in law and politics, but in other important fields, the American Revolution was not quite radical enough. The result was that the principles of 1776, as stated in the Declaration of Independence, were quite imperfectly realized in American politics and law. Government, which was supposed to be instituted in order to “secure” the natural rights of the individual, continued to pose the greatest threat to those rights, especially in the sphere of economics. As the Industrial Revolution swept over the United States during the late nineteenth century, the rights of all Americans—including the businessmen who were bringing about the industrialization of America—were only marginally more secure here than they were in Europe. The mixed ideology in American political thought of the Founding period and the nineteenth century made possible the so-called "mixed economy" of the twentieth century.


Unfortunately, the American political revolution was not accompanied by a revolution in moral philosophy. Many of the Founders adhered to traditional Judeo-Christian ethics based on altruism. Others, as "free-thinking" students of the Scottish Enlightenment—men such as Thomas Jefferson—instead naively believed that humans had an instinctive "moral sense" that vaguely inculcated one's moral "duties" to others. Under either the traditional or the "enlightened" ethics, it was regarded as "immoral" for an individual to pursue his own self-interest, even if he did so in such a way as not to harm others or even to interfere with the equal freedom of others to do the same. To be "moral," it was assumed, one must sacrifice one's self-interest to the "needs" of others.

To be "moral," it was assumed, one must sacrifice one's self-interest to the "needs" of others.

Such a moral philosophy—rooted in older visions of a homogeneous communitarian society—was hardly compatible with the reality of American capitalism: the free, robust society of energetic, enterprising individuals, mutually profiting from each other's pursuit of their self-interests—the society described in Tocqueville’s Democracy in America. Indeed, just as Tocqueville had to coin the term individualism to describe the unique way he observed Americans relating to one another in society, he also invented a concept that he called “the principle of interest rightly understood” to describe Americans’ moral code. As Tocqueville understood it, this principle moderated or tempered American individualism; it produced “no great acts of self-sacrifice” but prompted “daily small acts of self-denial.”

The persistent and pervasive influence of the Judeo-Christian altruistic moral code in American society should not be surprising, given the strong hold that Christian religion had on most Americans, particularly after the Second Great Awakening and other religious revivals in the nineteenth century. These revival movements were followed by the so-called “social gospel” movement, which sought to give Christianity a greater “social relevance” by preaching the ethics of Jesus, the values of altruism and self-sacrifice. Preachers of the social gospel were among the leading proponents of the regulatory/welfare state—and the leading critics of individualism.

When the great American classical liberal philosopher, William Graham Sumner, defended American capitalism in the late nineteenth century—including not only the free-market system but also specifically the rights of capitalists to keep the wealth they had earned—he conceded that it was difficult for Americans to overcome what he called “the old ecclesiastical prejudice in favor of the poor against the rich.” Without directly challenging the traditional Christian altruistic moral code, Sumner nevertheless suggested that in ethics as well as in public policy, American society needed a new code, based on his vision of the Golden Rule: “Laissez-faire,” or translated “into blunt English,” as he put it, “Mind your own business”—the “doctrine of liberty” and personal responsibility.


To paraphrase a late-eighteenth century English radical Whig: America's Founding Fathers were provident, but not provident enough. They created written constitutions with various devices designed to check the abuse of power and to safeguard individual rights; but their handiwork was imperfect in many ways. As noted in Part I, the Founders were, in Jefferson's words, "novices in the science of government"; the early American constitutions—including the United States Constitution of 1789, as amended by the Bill of Rights in 1791—were often more the product of experimentation, of trial and error, or even of political compromise than of deliberate design. Even after Jefferson's so-called "revolution of 1800" and the reinvigoration of first principles he believed it represented, there were many unresolved fundamental problems and inconsistencies in American government and law.

Economic liberty and property rights were imperfectly protected by American constitutions, both state and federal.

Among the most important of these were the various ways in which economic liberty and property rights were imperfectly protected by American constitutions, both state and federal. Notwithstanding explicit protections of liberty and property rights, generally—most notably, under the Fifth Amendment due process clause of the federal constitution and its equivalent provision in most state constitutions—American constitutional law in the nineteenth century permitted both state and federal governments to regulate business in various ways reminiscent of the old English paternalistic system. As the United States became more industrialized by the late nineteenth century, government regulation of business expanded in scope, both quantitatively and qualitatively, under two general rationales: government regulation of businesses "affected with the public interest" and government prohibition of “monopolies” through the antitrust laws.


In American political thought, coexistent with the dominant radical Whig, or libertarian, political tradition—with its emphasis on individual rights—there was an older, competing tradition. This tradition, which scholars have called the "civic republican" tradition, traceable back to ancient Rome, preached civic "virtue" as consisting in the subordination of self-interest to the "public interest," or "common good." This notion was central to sixteenth and seventeenth century paternalistic theories of government. An interesting example in English law is the 1606 decision by the Exchequer Court in Bate's Case, upholding the power of King James I, without consent of Parliament, to impose a tax on imported goods, under the rationale that the king had virtually unlimited discretionary power when he was acting for the “general benefit of the people.”

The "public welfare" is an elastic concept which justified virtually limitless expansion of the police power.

The concept of the “public interest” or “common good” being paramount to private interests, unfortunately, persisted in American political thought and in American law. One consequence was a hostile attitude toward commerce and commercial activities that has long been part of American culture and that was likewise incompatible with a capitalist, "free enterprise" economy. Another consequence was an ambiguity inherent in the definition of the “police power,” the general regulatory power vested in state legislatures to pass laws limiting individual freedom and property rights. Traditionally, the police power was exercised to protect public health, safety, and morals. Courts and legal commentators in the nineteenth century justified the exercise of the power in terms of the old common-law principle of nuisance, which limited uses of one’s property that were harmful to other persons or the general public. The scope of the police power, however, “proved incapable of precise delineation,” in the words of a modern legal scholar. Not only were the traditional categories of public health, safety, and morals ill-defined, but the courts added new categories—including, by the early twentieth century, the category of “public welfare,” the elastic concept which justified virtually limitless expansion of the police power.

The rise of industrial capitalism in the late nineteenth century, during the several decades following the end of the Civil War, was accompanied by a growth in government regulation of business, at both the state and federal levels, under expansive definitions of the states' "police power" and Congress's power to regulate interstate commerce. Not surprisingly, the railroad industry was the first major industry in the United States subjected to regulation by government commissions, first at the state level and then at the federal level with the passage of the Interstate Commerce Act in 1887.

The Supreme Court, in a series of decisions beginning in the 1870s, sanctioned this expanded role of government by applying the old, seventeenth-century English concept of "public interest"—particularly, "business affected with a public interest"—to undercut the constitutional safeguards given property and economic liberty through the due process clauses of the Fifth and Fourteenth Amendments. For example, in the early landmark case of Munn v. Illinois, the Court upheld an Illinois law, passed at the behest of the farmers’ association known as the Grange, which set maximum rates that grain elevators could charge in Chicago. Citing seventeenth-century English precedents, the majority of the justices held that the law was a legitimate exercise of the police power, under the rationale that the storage of grain (in elevators owned by railroad companies) was a “business affected with a public interest.” Although the Court in a series of decisions during the first three decades of the twentieth century tried to delineate the scope of this concept, by the mid-1930s the majority of the justices concluded that there was “no closed class or category of businesses affected with a public interest,” thus opening the floodgates to all sorts of government regulation, including the licensing of a wide variety of occupations.


The rise of "trusts"—business combinations, such as holding companies, designed to enhance efficiency—was a response by businesses to the intense competition that characterized most major American industries in the late nineteenth century. Populists and other proponents of bigger government during the so-called "Progressive" era often exploited the public's fear of big business in making the case for their political programs. Responding to American public opinion—which was profoundly distrustful, indeed paranoid, about "big" business—as well as to political pressure from various special-interest groups, Congress passed the Sherman Antitrust Act in 1890, allegedly to "protect" competition from the supposed threats of the trusts. Unfortunately, when it passed the Sherman Act, Congress deliberately used vague terms such as monopoly and restraint of trade, the meanings of which were undergoing substantial changes in popular and legal culture at the time. Thus Congress left for the courts the crucial task of interpreting the provisions of the law and so determining precisely what sort of business practices it made criminal.

Antitrust law subjected American businessmen to vague legal standards.

Antitrust law, together with the law of unfair trade practices, subjected American businessmen in the twentieth century to vague legal standards, under which entrepreneurs may be penalized for being too effective, or too good, as competitors. Consider, for example, the problem of pricing one's goods or services. Ayn Rand only slightly exaggerated the dilemma that the antitrust laws created when she described it this way:

    If [a businessman] charges prices which some bureaucrats judge as too high, he can be prosecuted for monopoly, or, rather, for a successful “intent to monopolize"; if he charges prices lower than those of his competitors, he can be prosecuted for “unfair competition” or “restraint of trade”; and if he charges the same prices as his competitors, he can be prosecuted for “collusion” or “conspiracy."

Rand also aptly described the precarious position in which the law leaves American businessmen:

    This means that a businessman has no way of knowing in advance whether the action he takes is legal or illegal, whether he is guilty or innocent. It means that a businessman has to live under the threat of a sudden, unpredictable disaster, taking the risk of losing everything he owns or being sentenced to jail, with his career, his reputation, his property, his fortune, the achievement of his whole lifetime left at the mercy of any ambitious young bureaucrat who, for any reason, public or private, may choose to start proceedings against him.

Essentially the same criticism has been made by modern economists critical of the antitrust laws.

A notorious example of the injustice of antitrust law from the turn of the last century involved the man who probably was the real-life model for Nathaniel Taggart: James J. Hill, founder of the Great Northern Railway, the only major transcontinental line built entirely by private capital, without federal land grants or other government subsidies. When Hill created the Northern Securities Company, a holding company combining his and his partners' railroads into a larger company in order to avert a takeover attempt by the Harriman interests that controlled the Union Pacific, the Company was immediately targeted by President Teddy Roosevelt's "trust-busting" campaign. The Justice Department brought suit under the Sherman Act; and the Supreme Court, in a 5-4 opinion written by Justice Harlan, found the Company in violation of the Act as a "restraint of trade," even though the creation of the Company in fact had enhanced competition.

Another oft-cited example is that of ALCOA, found guilty of antitrust violations in the 1945 case, United States v. Aluminum Company of America, because, in the words of Judge Learned Hand in his opinion for the court, the company produced more of its product to meet the public demand:

    It [ALCOA] insists that it never excluded competitors; but we can think of no more effective exclusion than progressively to embrace each new opportunity as it opened, and to face every newcomer with new capacity already geared into a great organization, having the advantage of experience, trade connections, and the elite of personnel.

Thus has antitrust law been used in the twentieth century to penalize, for their ability, men of magnificent productive achievement: whether James J. Hill at the beginning of the century, or men such as Bill Gates today.

Application of the antitrust laws to Gates’ company, Microsoft, in recent years has prompted many commentators to question particularly the application of antitrust laws to high-technology industries. The Microsoft case, moreover, has prompted not just academics, but also “mainstream media” commentators to question the wisdom of the antitrust laws generally.

Ayn Rand was a good student of American business history. The world she portrayed in Atlas Shrugged , of course, exaggerated this fatal flaw in the law – but only slightly. As she said in her 1964 lecture "Is Atlas Shrugging?", "the principles of every edict and every directive presented in Atlas Shrugged—such as ‘The Equalization of Opportunity Bill’ or ‘Directive 10-289’—can be found, and in cruder forms, in our antitrust laws."


The rise of the twentieth-century regulatory/welfare state also can be explained in terms of the failure of the Constitution, as interpreted by the U.S. Supreme Court, to curb the power of government, particularly the federal government, and to safeguard individual rights, particularly property rights and economic liberty. Although the Court’s sanctioning of broad federal regulatory power over business can be traced to a series of cases in the late nineteenth century and the early decades (the so-called “Progressive era”) of the twentieth century, the significant shift in the Court’s interpretation of key constitutional provisions occurred in the so-called “New Deal revolution” of the late 1930s. Prior to a series of landmark decisions in 1937, the Court had protected economic liberty and property rights as part of the “liberty of contract” it had recognized as a fundamental right safeguarded by the due process clauses of the Fifth and Fourteenth Amendments, against federal and state regulatory laws that went beyond the traditional confines of the police power. The Court also had applied the Tenth Amendment—which reserves powers not granted to the federal government to either the states or to “the people”—to limit the reach of Congress’s powers to regulate interstate commerce and to spend the money it collected through federal taxes. After 1937, the Court ceased to protect liberty of contract as a fundamental right; it also allowed Congress to exercise broad, virtually unlimited, powers to regulate commerce and to spend money—upholding, among other things, federal labor laws and the Social Security Act.

After 1937, the Court ceased to protect liberty of contract as a fundamental right.

The Supreme Court’s post-1937 “liberal” constitutionalism generally has meant not only that Congress has virtually unlimited powers to regulate business but that a double standard exists in the Court’s protection of individual rights. Those “preferred freedoms,” which is to say those rights that left-liberal judges value most—First Amendment freedom of speech and the press, certain rights of accused persons under the Fifth and Sixth Amendments, the Eighth Amendment’s prohibition of “cruel and unusual” punishments, and the unenumerated “right to privacy”—have been broadly protected, as fundamental rights, against laws that lack a “compelling” governmental interest to justify limiting individual freedom. On the other hand, those rights not favored by left-liberal judges—including economic liberty and property rights—have been given minimal, if any, constitutional protection; these rights can be restricted by any laws meeting the modern Court’s minimal “rational basis” test—that is, any government regulation deemed “reasonable in relation to its subject” and “adopted in the interests of the community.” Under this broad standard, virtually all kinds of government regulation of business have been upheld by the courts against constitutional challenges.

The Court’s failure to protect economic liberty and property rights against expanding governmental powers can be explained in various ways: for example, as a result of changes in personnel on the Court, or as a result of the justices’ historical tendency to give little regard for individual rights, generally. In a 1973 essay, Ayn Rand offered an especially insightful explanation of the Court’s failure to protect individual rights, when she found that the justices generally were guilty of “context-dropping”—that is, of failing to appreciate the importance of context—in interpreting the Constitution. One could say that it is not just justices on the Supreme Court but also other judges, lawyers, legal scholars and commentators—indeed, virtually all players in the modern debate over constitutional interpretation—who have failed to take a contextual view of the Constitution and its essential function, to protect individual rights.


To be sure, Atlas Shrugged portrays America in decline, as the inevitable consequence of its “mixed economy.” But the significance of the novel goes far beyond its critique of the modern regulatory/welfare state. Rand herself noted that the story of Atlas Shrugged "demonstrates that the basic conflict of our age is not merely political or economic, but moral and philosophical," the conflict between "two opposite schools of philosophy, or two opposite attitudes toward life": what she called the "reason-individualism-capitalism axis" and the "mysticism-altruism-collectivism axis." That conflict is at the heart of the basic contradictions in American law and constitutionalism discussed in the previous sections.

To resolve the conflict, and to place the Founders' "new science of politics" upon a firm philosophical footing—and thus to complete the work of the American Revolution—we need not only to reaffirm the Founders' commitment to individual rights but to ground that commitment in a coherent theory of rights. Constitutional protections of life, liberty, and property have proven insufficient to guard individuals from the tyranny of the so-called "common good" or the "public interest"; we must realize, as clearly and as fully as Rand did, that there is no such thing, that it is an undefined and an indefinable concept, and that this "tribal notion" indeed "has served as the moral justification of most social systems—and of all tyrannies—in history."

By presenting a new code of ethics—the morality of rational self-interest—Rand’s novel helps to provide what the Founders failed to grasp, the missing element of the American Revolution: the moral justification of capitalism, and with it, of the rights of all persons—including the American businessman. Although Atlas Shrugged outlines the essential principles of Objectivism as a philosophical system, the format of a novel—even one as philosophical as Atlas—has inherent limitations. As David Kelley, founder of The Atlas Society, has observed, the complete development of a new philosophy, particularly one based on reason as Objectivism is, requires much work by many thinkers. Like the American Revolution, Objectivism is incomplete: among the many areas where gaps or inconsistencies appear in Rand’s presentation of the philosophy, not only in Atlas Shrugged but in her subsequent non-fiction works, are many of the areas most relevant to the completion of the American Revolution: political philosophy and philosophy of law. Among other things, a comprehensive theory of rights (particularly of constitutional rights, or rights against the government) and a contextualist theory of constitutional interpretation need to be developed. The Constitution of the United States needs to be rediscovered, not just as it was meant to be understood by its framers but also as the text of the document calls for: as a limitation on the powers of government and a safeguard of individual rights. To fully protect property rights and all aspects of the basic right to liberty, including economic liberty, it might even be necessary to add such provisions to the text as the amendment suggested by Judge Narragansett, in the concluding section of Atlas Shrugged: “Congress shall make no law abridging the freedom of production and trade.”

To complete the American Revolution, much work has yet to be done. Thanks to Ayn Rand’s magnificent novel, however, we can identify the path along which we must travel to reach that destination. As John Galt states in the closing lines of the novel, “The road is cleared.”

Editor’s Note: This essay expands upon the paper that the author presented at The Atlas Society’s celebration of the 50th anniversary of the publication of Ayn Rand’s Atlas Shrugged , held in Washington, D.C. on October 6, 2007. This essay first appeared in the Spring 2008 edition of the Journal of Ayn Rand Studies. Copyright © 2008 David N. Mayer.
