Paste any text. You'll get a score for every sentence and a verdict for the passage as a whole. Hover any highlighted line to see what tripped its score.
The detector reads your text with a language model. That's the same kind of system that writes AI content, so it recognises the patterns from the inside. Every sentence gets a 0–100 score, and those scores roll up into one verdict for the passage.
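For the curious, here's what that roll-up looks like as code. Everything below is illustrative: the real detector scores each sentence with a language model, while the score_sentence stub here just counts a few filler phrases so the sketch runs end to end, and the length-weighted average is one plausible way to combine sentence scores into a passage verdict, not a description of our production pipeline.

```python
import re

# Toy stand-in for the model call: the real detector asks a language model;
# this just counts a few notorious filler phrases so the sketch is runnable.
FILLERS = ("delve into", "it is important to note", "furthermore", "in conclusion")

def score_sentence(sentence: str) -> float:
    hits = sum(f in sentence.lower() for f in FILLERS)
    return min(100.0, 20.0 + 40.0 * hits)

def detect(text: str) -> dict:
    # Naive sentence split; a production pipeline would handle abbreviations etc.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    scores = [score_sentence(s) for s in sentences]
    # Length-weighted mean: a long AI-ish sentence should move the verdict
    # more than a short one. One plausible roll-up, not necessarily ours.
    weights = [max(len(s.split()), 1) for s in sentences]
    overall = sum(sc * w for sc, w in zip(scores, weights)) / sum(weights)
    return {"per_sentence": list(zip(sentences, scores)), "overall": round(overall, 1)}

print(detect("We delve into the topic. Furthermore, it is important to note the results."))
```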
We process your text in memory and throw it away the moment the result comes back. We don't store anything. We don't use anything you paste for training.
Treat it as a strong signal, not proof. We tuned the model to err on the side of "human", because a false accusation is worse than a missed catch. Formal academic writing tends to score high because its register shares features with AI prose, which can skew the overall number. The sentence breakdown matters more; that's where you can see why something tripped.
The detector looks at writing patterns, not model fingerprints, so it works regardless of which model produced the text: GPT-4, Claude, Gemini, Llama, and Mistral all share the same tells. We can't tell you which one wrote a given passage, only how AI-like the passage reads.
The same properties that make a paper read as scholarly (formal register, hedging, consistent structure) are also what AI models default to, whatever they're asked to write. The detector knows about this overlap and discounts the burstiness and hedging signals when the text reads academic. Other signals still apply: if your draft really was AI-generated, the filler phrases and uniformly flat tone usually give it away even with that calibration in place.
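To make the discounting concrete, here's a toy version of that calibration. Every number and signal name in it is invented for illustration; it only shows the shape of the idea: when the register looks academic, the overlapping signals count for less.

```python
def calibrated_score(base: float, register_signal: float, reads_academic: bool) -> float:
    # register_signal bundles hedging + low burstiness on a 0-100 scale.
    # The 0.3 weight and 0.4 discount are made-up numbers, purely illustrative.
    weight = 0.3 * (0.4 if reads_academic else 1.0)
    return min(100.0, base + weight * register_signal)

print(calibrated_score(50, 80, reads_academic=False))  # 74.0
print(calibrated_score(50, 80, reads_academic=True))   # 59.6: same text, academic register
```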
If you wrote the draft and asked an AI to polish a few sentences, you'll usually land in "Mixed". If the AI did most of the writing and you tweaked a few words, expect a score above 65. The sentence highlights show you which lines tripped the detector, so you can rewrite those specifically instead of touching everything.
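For a rough picture of how the overall number maps to a verdict: the 65 cutoff comes from the paragraph above, while the lower boundary in this sketch is an assumption for illustration.

```python
def verdict(overall: float) -> str:
    if overall > 65:   # the threshold quoted above
        return "AI-generated"
    if overall > 35:   # assumed boundary, shown only for illustration
        return "Mixed"
    return "Human"
```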
Your text lives in memory while the request runs and is gone once we send the result back. We don't store results, and we don't use anything you paste for training. The privacy policy spells this out in more detail.
Aim for at least 100 words. A sentence or two won't carry enough signal for a stable score; a full paragraph is plenty. There's no upper cap. We've run chapter-length texts through it, and they just take a few extra seconds to come back.
GPTZero and Turnitin train classifiers on labelled human-vs-AI datasets. Our detector works differently. It hands your text to a language model and asks it to reason about which sentences look generated, then returns scores per sentence. The two approaches catch different things. For anything high-stakes (academic integrity hearings, legal cases), don't rely on one detector alone. Run it through a couple, and look at the sentence-level evidence rather than the headline number.
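In pseudocode, the difference is roughly this. The llm_complete function below is a placeholder for whichever model API you'd wire in, and the prompt wording is invented for this sketch, not our production prompt.

```python
import json

def llm_complete(prompt: str) -> str:
    """Placeholder: connect this to whatever LLM API you have access to."""
    raise NotImplementedError

def score_sentences(text: str) -> list[dict]:
    # Ask the model to reason per sentence, rather than running a trained
    # human-vs-AI classifier over feature vectors.
    prompt = (
        "For each sentence in the passage below, estimate how likely it is to be "
        "AI-generated (0-100) and give a one-line reason. Reply with a JSON list "
        'of objects like {"sentence": ..., "score": ..., "reason": ...}.\n\n' + text
    )
    return json.loads(llm_complete(prompt))
```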
Found sentences that look AI-generated? Drop them into the Writing Assistant. It rewrites in your voice and targets the same filler phrases and flat rhythm that tripped the detector.
AI tools invent citations that look plausible but don't exist. Run your reference list through the Citation Checker and it'll flag the fakes.
If the AI made a claim with no citation, paste it into Find Source. It searches Crossref, Semantic Scholar, and arXiv for real papers that back the claim up.
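If you'd rather script that lookup yourself, the indexes are publicly queryable. Here's a minimal Crossref search; it hits the public Crossref REST API directly, not our service, and Semantic Scholar and arXiv have similar endpoints.

```python
import requests

def crossref_candidates(claim: str, rows: int = 5) -> list[dict]:
    # Search Crossref's public works endpoint for papers matching the claim.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": claim, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": (item.get("title") or ["(untitled)"])[0], "doi": item.get("DOI")}
        for item in resp.json()["message"]["items"]
    ]

print(crossref_candidates("sleep deprivation impairs working memory"))
```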
AI-powered features require an account. The Citation Generator stays free forever.