New York Wants to Muzzle AI Because Some People Refuse to Think

Alan Marley • March 12, 2026


When government decides the answer to bad information is less information, the people who pay the price are not the reckless ones. They are everyone else.

This New York proposal is a bad idea. Not because bad AI outputs do not exist. They do. Anyone who has used AI extensively knows it can be wrong, overconfident, shallow or occasionally absurd. But the answer to that problem is not to treat adults like helpless livestock who must be shielded from information unless it passes through a licensed gatekeeper. That is paternalism dressed up as consumer protection, and New York State Senator Kristen Gonzalez's bill, S7263, is a clean example of the species.

The bill would impose liability when a chatbot gives what it calls "substantive" responses or advice in areas where a human doing the same thing would need a professional license. It also specifies that a company cannot escape that liability simply by disclosing to users that they are talking to an AI. Users could sue for damages and, in cases of willful violation, recover attorney's fees. The bill remains in committee as of March 2026.

The stated rationale is preventing chatbots from impersonating licensed professionals. That sounds reasonable until you read what the bill actually says.

The Language Is the Problem

The sponsor memo says the bill would prohibit chatbot operators from providing "any substantive response, information or advice" that, if given by a human, would constitute unauthorized practice under sections of New York law tied to licensed professions. The impersonation framing dissolves on contact with that language. This is not targeted at a chatbot claiming to be your attorney. It is targeted at substantive responses to questions in domains that happen to have licensing requirements.

What counts as substantive? What crosses the line from information into advice? If someone asks an AI whether a roof truss span looks undersized for a given load, is that education or engineering advice? If someone asks whether chest pain combined with shortness of breath warrants emergency attention, is that health information or medical advice? If someone asks how to respond to a demand letter, is that legal literacy or unauthorized legal counsel?

The Ambiguity Is Not a Bug

Vague liability standards do not produce precision. They produce over-censorship. When the line between permitted and prohibited is unclear, companies do not try to find it. They retreat well behind it. The result is not careful enforcement of a sensible rule. It is neutered software that refuses to engage meaningfully with any topic adjacent to a licensed profession, which is a very large portion of the questions ordinary people actually need answered.

Add private lawsuits for damages and attorney's fees for willful violations, and you have created a litigation incentive structure that rewards legal ambiguity. The murkier the standard, the more leverage for plaintiffs. The more leverage for plaintiffs, the more companies preemptively sanitize their products. This is how a bill ostensibly aimed at edge cases produces broad censorship as its practical output.

This Is About Gatekeeping, Not Safety

Let's be honest about what is actually happening. Access to information is not the same thing as professional representation, and much of what gets swept under licensing law is information exchange that human beings have always conducted freely. Explaining what a contract clause means in plain English is not unauthorized legal practice. Describing symptoms associated with dehydration is not unauthorized medical advice. Discussing general structural principles is not unauthorized engineering. People have always explained things to other people. AI is a faster and more searchable version of something that has existed as long as language has.

That is precisely why regulators find it threatening. It disrupts the old tollbooth model. It lets ordinary people acquire preliminary understanding before deciding whether they need a professional. It lowers the cost of basic comprehension. It redistributes informational power away from institutions that have enjoyed asymmetric access to knowledge for decades. The emotional pitch is safety. The practical mechanism is control over who gets to explain things and under what conditions.

"Nobody serious argues that surgery, courtroom representation or structural sign-off on a public building should be unregulated. But there is an enormous difference between professional practice and information exchange, and conflating the two is not an oversight. It is a strategy."

Adults Are Supposed to Use Judgment

Here is the part too many legislators will not say plainly: a great deal of harm in the world comes from people making bad decisions with or without any AI involvement. Before chatbots existed, people took medical guidance from cousins, vitamin grifters, chain emails, daytime television and the loudest person at the table. They still do. The internet did not invent gullibility. It merely accelerated it.

The question is whether the correct response to the existence of credulous people is to degrade the tools available to everyone else. That is backwards. The correct response to AI is the same response competent adults have always applied when stakes are high: verify, compare, cross-reference and escalate to a qualified professional when the decision warrants it. A person who asks a chatbot whether to ignore crushing chest pain and then blames the technology when that goes badly has not identified a problem with AI. They have identified a problem with their own judgment and no legislation is going to solve that.

Government cannot stop Darwin from sorting things out. It can only inconvenience everyone else in the process.

The People Who Actually Pay the Price

Bills like this are marketed as strikes against powerful technology companies. The burden lands primarily on ordinary people. A wealthy person can call an attorney, retain a specialist physician or hire a licensed engineer on demand. The person running a small business who needs to understand a lease clause before deciding whether to pay counsel to review it cannot. The homeowner who wants to understand drainage and framing principles before talking to a contractor cannot. The patient trying to organize symptoms and questions coherently before a medical appointment cannot afford to pay a professional every time they want to think a problem through in advance.

Once liability standards become vague and aggressive, the corporate calculation is simple. The safest product is one that refuses to engage. The result is reduced access for the average person and increased dependence on licensed intermediaries who charge for the privilege. That is not consumer protection. It is the re-feudalization of information, dressed up in the language of safety and concern for the vulnerable.

There Is a Reasonable Version of This Concern

There is a legitimate and far narrower argument available here. Chatbots should not represent themselves as licensed human professionals. Disclosure requirements making clear that an AI system can be wrong and should not substitute for expert judgment in high-stakes situations are sensible and defensible. New York already has a related bill in the same legislative orbit requiring AI operators to warn users that outputs may be inaccurate. That is honest and proportionate.

What the bill actually does goes considerably further, and the gap between what it claims to target and what its language would actually capture is where the damage happens. Telling users what a system is and what its limitations are is honest communication. Creating litigation exposure for "substantive responses" to questions touching licensed fields is a mechanism for making useful tools progressively less useful until ordinary people give up and pay a professional instead. Those are not the same thing, and pretending they are is the central evasion in the bill's defense.

My Bottom Line

AI is not a licensed surgeon, a trial attorney or a structural engineer, and nobody rational claims otherwise. Expertise matters. Credentials exist for reasons. Real-world judgment built through professional practice is not replaceable by a language model in a high-stakes situation, and serious people know that. But there is a substantial distance between that true statement and the conclusion that AI should be legally constrained from providing robust answers because some users might take them too seriously. The answer to imperfect tools has always been competence, not silence. The answer to bad speech has always been better judgment, not enforced helplessness.

A free society should prefer informed citizens to dependent ones. It should prefer broad access to knowledge over knowledge rationed through institutional choke points. It should expect adults to verify important information rather than legislating around the assumption that they will not. If lawmakers keep regulating toward the least competent possible user, they punish every competent user in the process. That is a bad trade and New York's S7263 is a clear example of exactly how it gets made.

References

  1. New York State Senate. (2025). S7263: Imposes Liability for Damages Caused by a Chatbot Impersonating Certain Licensed Professionals.
  2. New York State Senate. (2026, March 6). NY State Senator Kristen Gonzalez on Her Bill to Address AI Chatbots Impersonating Licensed Professionals.
  3. New York State Senate. (2026, January 29). State Senator Kristen Gonzalez Introduces Bill to Protect Minors from AI Chatbots, in Partnership with Attorney General Letitia James.

Disclaimer: The views expressed in this post are the personal opinions of the author and are offered for educational, commentary and public discourse purposes only. They do not represent the positions of any institution, employer, organization or affiliated entity. Nothing in this post constitutes legal, financial, medical or professional advice of any kind. References to public figures, institutions, historical events and current affairs are based on publicly available sources and are intended to support analysis and argument, not to state facts about any individual's character, intent or conduct beyond what the cited sources support. Commentary on religious, political and cultural subjects reflects the author's independent analysis and is protected expression of opinion. Readers are encouraged to consult primary sources and form their own conclusions. Any resemblance to specific individuals or situations beyond those explicitly referenced is coincidental.