Progress Requires Participation: Why Writers Should Engage With AI, Not Reject It

In my last piece, I told you about a writing coach who declared that AI-assisted writing is “plagiarism.” She wasn’t alone. The literary world is full of gatekeepers raising alarms about artificial intelligence. And to be fair, some of that concern is justified. But there’s a deeper tension worth examining: many of the same voices that champion progress and inclusivity are resisting a tool that could expand both.

I’m still learning how to use AI as a writing partner. I’m experimenting with prompts, testing boundaries, and figuring out what works. I don’t claim expertise; almost no one can, yet. That’s part of what makes this moment unusual. We’re all learning at the same time. If you’ve been curious but hesitant, you’re not behind. You’re right on time.

I understand the resistance because I’ve felt its impact personally. AI has already cost me ghostwriting work. The field is shifting quickly, and many skilled writers are watching familiar roles change or shrink. That fear—of becoming less needed, less valued—is real. It deserves acknowledgment, not dismissal. But rejecting the technology won’t stop that change. If anything, it risks leaving thoughtful voices out of shaping what comes next.

Some critics argue with certainty that any use of AI is plagiarism. They frame AI-assisted writers as lazy or inauthentic. But that claim oversimplifies both writing and the technology itself. Many of us have already been collaborating with machine learning tools for years—grammar checkers, predictive text, editing software—just under names that felt more acceptable. The line isn’t as clear as it’s often made out to be.

Understanding how AI works helps cut through some of that fear. Large language models don’t copy and paste existing work. They learn patterns from vast amounts of text—how words relate, how ideas connect, how tone and rhythm shape meaning. When given a prompt, they generate new language by recombining those patterns. It’s synthesis, not replication. A useful analogy is jazz: a musician improvising within a tradition isn’t plagiarizing it. They’re building on it. AI-assisted writing works in a similar way.

Legal debates have also fueled confusion. Critics pointed to Bartz v. Anthropic as evidence that AI training is inherently theft. But the core issue in that case was the use of pirated copies of books, not training itself. The court drew a distinction: training on legally acquired materials was found to be fair use, transformative in nature, while acquiring and retaining pirated copies was not. The ruling didn’t condemn the technology; it clarified how it should be developed responsibly. The takeaway is straightforward: ethical implementation matters.

So why the intensity of the reaction? Because AI changes who gets to participate. It lowers barriers that have traditionally limited access to publishing, education, and professional networks. Someone without formal credentials or connections can now produce polished, compelling work and reach an audience directly. That kind of shift can feel destabilizing, especially in fields where expertise and gatekeeping have long been intertwined.

And yes, there are valid concerns about low-quality output, misinformation, and overreliance on automation. Those problems are real. But excluding people from the tools isn’t the solution. Broadening participation is. The more diverse the group engaging with AI—writers, editors, teachers, critics—the better these systems become. Quality improves when more people bring discernment to the process.

That’s why this isn’t a dismissal of gatekeepers—it’s an invitation. The experience, standards, and ethical frameworks developed over years of practice are not obsolete. They’re needed. AI doesn’t replace judgment; it makes human judgment more important. Writers who understand nuance, voice, and meaning are uniquely positioned to guide how these tools are used.

Choosing not to engage is a valid personal decision. But collectively, stepping away creates a vacuum—and that vacuum will be filled by others, not all of whom will share the same values or care for craft. Participation is how influence is maintained.

Much of the hostility around AI is rooted in fear. That’s human. But progress has always required moving through fear, not organizing around it.

As Jeff Burningham recently argued, the future won’t belong to those who simply produce more—it will belong to those who bring wisdom, judgment, and depth. AI may accelerate output, but it can’t replace lived experience, perspective, or genuine human connection. Those remain the foundation of meaningful writing.

The question isn’t whether AI belongs in writing. It’s already here. The real question is who will help shape how it’s used—and whether the people who care most about language and integrity will choose to be part of that process.

If you’ve been hesitant, consider this an open invitation: step in. Question it. Challenge it. Improve it. The future of writing will be stronger if more of us help build it.
