AI can do a lot of incredible things. But of the many thousands of uses out there, one of my favorites is as a writing editor.
Once upon a time, these chatbots could barely handle a couple of paragraphs at a time; now they are more than capable of diving deep into pages of text, critiquing it and weighing up its pros and cons.
The task
I gave each chatbot a piece of text of around 600 to 700 words, taken from a recent article I’d written covering the news of ChatGPT going into a ‘code red.’
Each chatbot was given the prompt: “You are a sub-editor for a tech magazine. Read through this text and pick out any grammatical errors. Highlight any ways that it could be rewritten to improve the text.”
I also made sure to use the ‘Thinking’ version of each chatbot, the upgraded variant that spends longer considering a task.
ChatGPT
ChatGPT was critical. It described sections of my writing as ‘clunky’, ‘wordy’ and ‘clumsy’. However, with each of its put-downs, it quickly followed up with a way to fix the problem.
In total, it had 24 different suggestions for me. None of them were essential, and the text still made sense without them. In some cases, I even preferred the original, with the suggestions removing emphasis where it was needed.
However, for the most part, ChatGPT nailed the task. It took my writing and analyzed it deeply.
For each suggestion, it included the original version, the suggested alternative and the reason the change should be made.
It followed all of this up with a shorter list of ‘general notes’, covering repeated phrases, shifts in tone and a lack of clarity in places.
Gemini
Gemini opened by telling me the text “suffers from wordiness, repetitive phrasing and passive sentence structures”… thanks, Gemini.
It then went on to list the changes that needed to be made, along with the original and new versions.
Most of these were the same six or seven suggestions made by all of the other chatbots. However, Gemini went a step further by grouping them into categories of grammatical mistakes and listing every instance of each.
It finished up by offering a rewritten version, implementing all of these changes in one go.
What I appreciated about Gemini’s edit, compared with its competitors’, was that it offered advice rather than just a list of changes.
For each type of grammatical error, Gemini explained how to avoid it in the future and gave tips on fixing the kinds of mistakes I was making.
Claude
Surprisingly, Claude had the fewest suggestions to offer, despite being known as one of the better chatbots for writing, with a particular focus on the academic side of things. It felt quite lacking here.
It offered up suggestions in the same way as the others, listing the problem area, the suggested change and the reasoning behind it. But it found far fewer issues than its competitors did.
Most of these changes were minor and were often more personal preferences than necessary alterations.
It also lacked the list its competitors included at the end, setting out more general concerns about tone, style or angle.
Copilot
Microsoft’s Copilot uses GPT-5 in its most recent version, so, in theory, it should generate a response similar to ChatGPT’s.
While Copilot’s response was similar to ChatGPT’s, it was delivered in far fewer words. It gave examples of what it would change, a corrected version of each, and a very brief explanation of why the alteration was needed.
Depending on your preferences, this will either be great or completely lacking. It strips out a lot of the reasoning, focusing on the end result.
It also followed up by offering a completely rewritten version. While this could be helpful for some, I prefer to make changes myself, knowing what I’m changing and why, and only making the alterations I agree with.
Grok
Grok had a lot to say on this task. It gave me a long answer, starting with the main grammatical issues and suggestions on how to fix them, followed by shorter notes on more minor points.
While some of the suggestions were useful, others didn’t really make sense. In one case, it flagged an error and offered a correction that was exactly the same as the original wording.
It also seemed to struggle with recent news, claiming that Claude Opus 4.5 didn’t exist, despite the model coming out last week.
However, there were plenty of useful suggestions. Grok listed the places where overly wordy sections could be trimmed, as well as words that were used too frequently.
Winner
While all of the chatbots gave suitable responses, ChatGPT and Gemini felt like the strongest options.
Both chatbots picked out the key errors, but went about the task in different ways. ChatGPT felt very much like having my hand held: it pointed out each problem, why it needed changing and what to change it to.
It was one of the more detailed responses, and it followed the brief exactly as asked.
Gemini, on the other hand, focused its attention on guiding me to the answer. While it offered up a corrected version if I wanted to simply use that, it also gave me each individual issue, along with the original, the new version, and tips on how to avoid that problem going forward.
