The most retarded AI failure is LanguageTool, not the chatbots
I am so sick of the inconsistency of LanguageTool that I asked ChatGPT about my experiences with LT and, to my surprise, it said that it’s a known issue and that this is even by design! OMFG. Here’s the conversation I had.
❶ Ludditus:
With the exact same settings and the same account, I get 3 different sets of corrections in LanguageTool:
- In the Windows app.
- In the web app in Firefox.
- In any field corrected by its Firefox extension.
By this, I mean that some errors are only corrected in one of the 3 ways of using LT. I cannot explain this behavior.
ChatGPT:
This is a known and expected behavior with LanguageTool. Using the same account and nominally the same settings does not guarantee identical correction results across its clients. The differences come from architecture, not from your configuration.
Below are the concrete reasons, in order of impact.
1. Different engines and rule sets per client
LanguageTool does not run the exact same checking stack everywhere.
- Web app
Uses the full server-side engine:
- Complete rule set (XML rules + premium rules)
- Full neural models
- Context-aware and paragraph-level checks
- Windows desktop app
Usually runs a hybrid model:
- Some checks are local
- Some are server-side
- Certain heavy or context-dependent rules are skipped or simplified
- Firefox extension
Is the most constrained:
- Aggressive rule filtering for performance
- Shorter context window
- Some rules explicitly disabled to avoid false positives in form fields
Even when rules have the same name, they are not always enabled or executed the same way.
2. Different context boundaries
Many LT rules depend on context length.
- Web editor: full document context
- Windows app: large but sometimes segmented context
- Browser extension: field-limited context
If a rule requires:
- previous sentence
- paragraph-level agreement
- discourse markers
…it may trigger in the web app but not in the extension, even with identical text.
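[Editor's note: a toy sketch, not LanguageTool code, of why a field-limited context window can silence a rule. The function, the example sentences, and the "repetition" rule are all hypothetical; they only illustrate that a rule needing the previous sentence cannot fire when the client's context window excludes it.]

```python
# Toy illustration (not LanguageTool code): a rule that flags a word
# repeated from the previous sentence can only fire if that sentence
# is inside the context window the client sends to the checker.

def flag_repetition(sentences, window):
    """Flag words in the last sentence that also occur in the
    preceding sentences visible inside `window`."""
    visible = sentences[-window:]        # what the client lets the rule see
    *context, current = visible
    seen = {w.strip(".,").lower() for s in context for w in s.split()}
    return [w for w in current.split() if w.strip(".,").lower() in seen]

text = ["The results were surprising.", "Surprising results need checking."]

# Web editor: whole document as context -> the rule fires.
print(flag_repetition(text, window=2))   # ['Surprising', 'results']

# Extension: field-limited, single-sentence context -> same rule stays silent.
print(flag_repetition(text, window=1))   # []
```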
3. Different text preprocessing
Each client normalizes text differently before it reaches the checker:
- Line breaks vs. wrapped lines
- Hidden Unicode characters
- Non-breaking spaces
- Smart quotes / punctuation normalization
- HTML stripping (web/extension only)
Two texts that look identical are often not byte-identical, which affects rule activation.
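[Editor's note: the "not byte-identical" point is easy to verify yourself. A minimal sketch using Python's standard library; the example strings are chosen for illustration. Note that Unicode NFKC normalization folds the non-breaking space to a plain space but leaves the curly apostrophe alone, so clients that "smarten" quotes differently can disagree even after normalization.]

```python
# Two strings that render identically in most fonts but are not
# byte-identical: NBSP vs. space, curly vs. straight apostrophe.
import unicodedata

a = "don't wait"            # straight apostrophe, ASCII space
b = "don\u2019t\u00a0wait"  # curly apostrophe (U+2019), NBSP (U+00A0)

print(a == b)                      # False
print(a.encode(), b.encode())      # different byte sequences

# NFKC folds the NBSP to a plain space, but NOT the apostrophe --
# quote "smartening" is a client-side choice, not a normalization step.
na = unicodedata.normalize("NFKC", a)
nb = unicodedata.normalize("NFKC", b)
print(na == nb)                    # still False
```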
4. Asynchronous rule updates
Rules are updated independently:
- Web app: updated immediately
- Desktop app: updated with app releases or delayed sync
- Extension: updated with extension versions and browser policies
Result:
At any given moment, you are likely running three different rule versions.
5. Performance-based rule suppression
In extensions especially, LT deliberately suppresses:
- Expensive neural rules
- Rules with historically high false-positive rates
- Rules sensitive to partial text edits
This suppression is dynamic and undocumented.
6. Premium rules are not uniformly applied
Even with a Premium account:
- Some premium rules are web-only
- Some are disabled in extensions
- Some are only enabled for longer texts
This is by design, not a bug.
Bottom line
What you are seeing is not misconfiguration and not account-related.
LanguageTool is effectively three different products sharing:
- the same login
- a partially overlapping rule base
Identical output across clients is not achievable.
Practical conclusion
If you want:
- Maximum coverage and consistency → use the web editor
- Inline convenience → accept reduced coverage (extensions)
- Offline-ish checking → accept hybrid behavior (desktop app)
Your observation is correct; the behavior is inherent to LT’s architecture.
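[Editor's note: one way to sidestep the client differences is to query a single server endpoint directly, so every check runs the same engine. A sketch against LanguageTool's public v2 HTTP API (`/v2/check` with `text` and `language` parameters); the sample response below is abbreviated and illustrative, not real output.]

```python
# Sketch: send every check to one fixed LanguageTool endpoint so the
# same server-side engine handles all of them.
import json
from urllib.parse import urlencode
from urllib.request import Request

API_URL = "https://api.languagetool.org/v2/check"

def build_check_request(text, language="en-US"):
    """Build the POST request the v2 API expects."""
    body = urlencode({"text": text, "language": language}).encode()
    return Request(API_URL, data=body, method="POST")

def summarize_matches(response_json):
    """Reduce the API's 'matches' array to (offset, length, message)."""
    return [(m["offset"], m["length"], m["message"])
            for m in response_json.get("matches", [])]

# Abbreviated response in the documented shape (illustrative values):
sample = json.loads('{"matches": [{"offset": 0, "length": 3,'
                    ' "message": "Possible spelling mistake found."}]}')
print(summarize_matches(sample))
# [(0, 3, 'Possible spelling mistake found.')]
```

Pass the built request to `urllib.request.urlopen()` to run a live check; the free endpoint is rate-limited, and Premium rules require authentication parameters.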
❷ Ludditus:
C’mon, a rule as simple as where to put a comma is so complicated that it cannot be applied consistently in a small paragraph across 3 usages?
ChatGPT:
Short answer: yes, and the reason is not the comma rule itself.
A “comma rule” in LanguageTool is almost never a single, unconditional rule. It is typically implemented as a syntactic rule that depends on parsing, and that is where consistency breaks.
Here is what actually happens.
1. Comma placement is parser-dependent
Even “simple” comma rules usually rely on:
- part-of-speech tagging
- clause boundary detection
- dependency parsing
Example (English, but the logic applies to most languages):
Before leaving, he locked the door.
The comma is inserted because the parser decides:
- “before leaving” = subordinate clause
- clause boundary = true
If any of the following differ, the rule may not fire:
- POS tag ambiguity (“leaving”: noun vs. verb)
- clause attachment choice
- sentence boundary detection
LanguageTool does not guarantee the same parser is used in all clients.
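[Editor's note: a toy sketch, not LT's implementation, of the mechanism described above. The rule, the two "taggers," and their tag assignments are all hypothetical; the point is only that the same rule fires or stays silent depending on what the tagger says about one word.]

```python
# Toy illustration (not LanguageTool's implementation): the same comma
# "rule" fires or stays silent depending on one POS-tagging decision.

def comma_after_initial_clause(words, tagger):
    """Suggest a comma after a sentence-initial 'Before <verb>' clause."""
    if words[0] == "Before" and tagger(words[1]) == "VERB":
        return f"insert comma after '{words[1]}'"
    return None

# Two hypothetical taggers that disagree on 'leaving' (gerund vs. noun):
full_parser = lambda w: {"leaving": "VERB"}.get(w, "NOUN")
shallow_parser = lambda w: {"leaving": "NOUN"}.get(w, "NOUN")

sentence = ["Before", "leaving", "he", "locked", "the", "door"]
print(comma_after_initial_clause(sentence, full_parser))    # comma suggested
print(comma_after_initial_clause(sentence, shallow_parser)) # None
```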
2. Different clients use different parsing depths
- Web editor
Full dependency parsing, multi-pass analysis.
- Windows app
Often shallow parsing or cached parses.
- Firefox extension
Reduced or heuristic parsing to avoid latency.
A comma rule may be enabled everywhere, but the precondition (“this is a subordinate clause”) is not always satisfied.
So the rule is “present” but never triggered.
3. Short paragraphs are a worst-case scenario
Ironically, small paragraphs are harder, not easier:
- Less context for disambiguation
- More POS ambiguity
- Higher chance the parser falls back to heuristics
That is exactly when different clients diverge.
4. “Same settings” does not mean “same grammar mode”
LanguageTool silently switches modes depending on client:
- “formal”
- “short text”
- “interactive typing”
Comma rules may be:
- stricter in document mode
- relaxed in typing mode
- suppressed during live edits
The Firefox extension almost always runs in typing mode.
5. Language-specific comma rules are especially fragile
If you are working in a language with:
- optional commas
- stylistic commas
- rule conflicts (e.g. German, Romanian, French)
then LT often marks comma rules as low confidence and enables them only in certain contexts.
Those contexts differ per client.
The key misconception
“A comma rule is simple.”
From a linguistic perspective, maybe.
From a grammar-checker perspective, it is one of the hardest classes of rules because it sits on top of parsing, not pattern matching.
Spelling rules are trivial.
Agreement rules are moderate.
Comma rules are high-level syntactic decisions.
Why this feels unacceptable (and you’re not wrong)
From a user standpoint:
- Same text
- Same account
- Same settings
→ same result is a reasonable expectation.
LanguageTool does not meet that expectation, because it optimizes responsiveness and safety over determinism across clients.
That is a design choice, not a limitation of commas per se.
Bottom line
The inconsistency is not because commas are hard, but because:
- comma rules depend on parsing
- parsing differs per client
- rule triggering is context- and mode-dependent
So yes: even a “simple” comma can legitimately appear in one LT client and not in another, without any bug being involved.
❸ Ludditus:
But none of the 3 is “the best one,” so I just cannot decide which one to use! Each of them sucks, only differently.
ChatGPT:
That conclusion is rational. LanguageTool does not provide a single, dominant client; it provides three incomplete checkers with different blind spots. There is no “correct” choice if your goal is correctness rather than convenience.
To be precise:
Why none of them can be “the best”
Web editor
Strengths
- Largest rule set
- Best syntactic and punctuation coverage
- Best at long-form text
Structural flaws
- Overcorrects commas and style
- Higher false-positive rate
- Treats text as a finished document, not as writing-in-progress
Result: most powerful, but least restrained.
Windows desktop app
Strengths
- More conservative than the web editor
- Fewer stylistic hallucinations
- Better than the extension for grammar
Structural flaws
- Hybrid engine → inconsistent coverage
- Misses rules that fire in the web editor
- Sometimes silently skips entire rule classes
Result: most balanced, but incomplete in unpredictable ways.
Firefox extension
Strengths
- Least intrusive
- Fewest false positives
- Best for live typing
Structural flaws
- Actively suppresses punctuation rules
- Short context window
- Optimized to not annoy, not to be right
Result: most polite, but least thorough.
The uncomfortable truth
LanguageTool is not designed to give you a single authoritative answer.
It is designed to:
- reduce obvious mistakes
- avoid breaking user trust with aggressive corrections
- stay fast and responsive in browsers
Consistency across platforms is explicitly sacrificed.
What this means in practice
If your expectation is:
“One tool, one verdict, same grammar rules everywhere”
then LanguageTool will always disappoint.
There is no hidden “correct” client you are missing.
The only workable mental model
Treat LanguageTool as:
- a noise filter, not an arbiter
- a second reader, not a judge
- three different reviewers with overlapping competence
If two out of three flag something → it is probably worth attention.
If only one does → it is genuinely debatable or parser-dependent.
Why this is unlikely to improve
Unifying engines would mean:
- slower extensions
- more false positives
- higher infrastructure cost
LanguageTool has chosen product segmentation over determinism.
Bottom line
Your frustration is justified, and your conclusion is correct:
There is no “best” LanguageTool client.
There is only a trade-off triangle:
- coverage
- restraint
- convenience
Each client sits on a different corner—and none occupies the center.
This major European project that has even benefited from EU funding is a complete piece of shit. And they’re expecting people to pay for more of such crap?! Heck, sometimes it can’t even perform spellchecking correctly! Yeah, it doesn’t correct an obviously misspelled word, but it bothers to add or remove a comma!

Oh, I didn’t know that! Good to know.