George Mason’s refusal to sign the Constitution over the lack of a Bill of Rights still matters because it reminds us that a powerful new institution must have guardrails before it reshapes society. Artificial intelligence is now that institution, seeping into culture, commerce, and public life and demanding the same hard questions about freedom and who controls truth. This piece argues that AI should be built with clear protections for liberties, transparency about how it works, and strong resistance to ideological pressure.
When Mason withheld his signature in 1787, he forced a debate that produced the Bill of Rights. He understood that a strong system needs explicit limits on power to protect individual freedom. That lesson fits today: powerful systems deserve explicit rules that protect citizens from unseen control.
Artificial intelligence is already woven into banking, education, healthcare, and news feeds, quietly steering what people see and how decisions get made. These systems promise real gains — faster research, better diagnostics, and productivity boosts — but they also shape the informational environment we rely on. When those choices are made without accountability, the results can erode trust and freedom.
AI is not neutral. Every system reflects choices made by designers about data, filters, and objectives. Those choices determine which facts get amplified, which viewpoints are suppressed, and what counts as acceptable discourse. That means a small set of actors, including companies, research labs, and engineers, is effectively designing the default public square.
We already see the consequences. Multiple studies have found ideological slants in high-profile models, and there have been notable cases where image and text generators produced historically inaccurate or sanitized outputs to satisfy contemporary sensibilities. Algorithms on social platforms routinely boost certain content and bury other perspectives, shaping what millions perceive as reality. In some countries, state-directed models actively redirect or avoid topics that clash with official narratives.
Put plainly, AI can either broaden freedoms or become a tool for managing public opinion. The steering happens through engineering choices and corporate policy, not by accident. If left unchecked, these systems could lock in preferences that favor certain political or cultural perspectives over others.
So what should happen next? First, developers and platforms should prioritize truth-seeking over narrative control, designing models to inform rather than to push users toward predetermined conclusions. Second, there should be transparency about training data and major design choices so the public can judge how outputs are shaped. Third, designers must resist pressure from governments and powerful corporations to suppress lawful speech under vague claims of safety.
Those principles are not a call for heavy-handed censorship or centralized oversight; they are a demand for clarity, accountability, and fidelity to free speech. Private firms, academic labs, and civic organizations should be held to public standards that protect dissenting views and make manipulation harder. Markets and civil society can enforce better behavior if rules are clear and companies are accountable.
George Mason refused to sign the Constitution because he believed liberty needed stronger protection before a new federal government took effect. His insistence on a Bill of Rights helped ensure that the American experiment would endure by providing explicit protections for individual freedom.
If artificial intelligence is going to help shape the future of our society in profound ways, should it not also be built to respect the same freedoms that Americans have fought for since the founding of the republic? That question matters because once this technological infrastructure is embedded, reversing biased incentives and hidden filters will be far harder. The test now is whether we demand guardrails proportional to the power being handed to machines and the institutions that run them.
