Writing Is Not That Easy: Grammarly As Affordance.

Written by Neil Levy

I recently received an email from someone about a grant application in which I’m involved.  In this email, the person coordinating the grant asked recipients to suggest revisions to the text, but noted that as it stood it had a score of 100% on Grammarly. He asked that any changes be made carefully, so that this score was retained.

Grammarly uses AI to identify grammatical errors and stylistic infelicities and to suggest changes. I don’t use Grammarly, but other services I use make different suggestions. Gmail, for example, makes reply suggestions and Word underlines spelling mistakes.

These AI-driven tools alter the landscape of affordances for me as a writer. The affordances of an object or an environment are the suggestions for use embedded in it. The handle of a cup affords holding; a gap in a fence affords exiting there. Of course, we may ignore or override affordances. If you prefer to hold your cup by the base, ignoring the handle, you may do so, and you can climb the fence to leave at some other spot. But it takes effort (sometimes minimal) to override affordances. We usually go with the flow, and develop habits of relying on them. There’s nothing wrong with that: we can spare our energy for other things and, in any case, many affordances are well-designed to facilitate action.

I may also ignore the affordances of predictive text and Grammarly nudges. I often do ignore the spelling suggestions Word makes: often, the word it marks as incorrect is a proper name or a technical term.  When I’m unsure about a word, or about a formulation, I go with the flow, however. I accept the suggestion. Sometimes, especially when I’m using a mobile, I’ll use one of the Gmail reply suggestions, usually tweaking it for appropriateness.

What’s wrong with that? As I said, well-designed affordances are useful. They enable us to pursue our goals more efficiently. They can also allow us to better coordinate with one another: if paths funnel foot traffic moving in different directions onto different trajectories, we need to spend less time negotiating our way round one another. But the rollout of affordances also has a homogenizing effect. This may be especially the case when they’re AI-driven. I don’t know how the algorithms I’m using work, but they may well be based on machine learning using text on the internet as its database. If that’s what is going on, the suggestions will reflect what people already tend to do and reinforce it.

The result may be a loss of linguistic diversity and a sameness of expression. This is likely to be particularly acute for the millions of people who use English as a second language, because they are less likely to feel confident enough to override the suggestions of the AI.

The Sapir-Whorf hypothesis, according to which our language sets the limits of our thought, is surely false in any strong form. We don’t think exclusively in language and even our linguistic concepts are imprecise enough to admit of extension and ambiguity. But language does have an effect on thought. At least one way in which this happens is through the affordances of language: if it becomes easier to refer to a person as a client than a patient, this has downstream effects on what other concepts come to hand for thinking of them and how we relate to them. The homogenizing of language won’t homogenize thought to anything like the same degree, but we may reasons to worry that it will limit intellectual diversity. We should worry about who is designing linguistic affordances and to what ends, and we should worry about the effects of their broad rollout across the world.


9 Responses to Writing Is Not That Easy: Grammarly As Affordance.

  • Anders Sandberg says:

    There is some evidence from text corpora that spellchecking (and before that, the emergence of a single academy-approved spelling standard) has reduced linguistic diversity over the past decades. Of course, much of this diversity is nonsense (‘teh’ and ‘the’ are not equally good words), but the nudge that certain words or expressions are wrong does reduce their usage, even if the wrongness is descriptive (they are rare and not found in training data) rather than prescriptive.

    Then again, people are very inventive in communication. There is a lot of subtle nuance evolving around ‘cringe’, ‘irregardless’ seems to have become a real word now, verbing is becoming more common – it might just be that the formalized text where people run Grammarly and care about being exactly right will increasingly lag the informal text where we run our lives.

    • Neil Levy says:

      Thanks Anders. The inventiveness of people should not be underestimated. A great deal of linguistic innovation comes from ‘below’: I once read a study saying that teenage American black girls were the single most influential innovators on US usage.

  • Keith Tayler says:

    I believe the point you have raised goes to the centre of so-called Artificial Intelligence. Indeed, it was raised by Alan Turing in his famous paper ‘Computing Machinery and Intelligence’ (1950, Mind 59: 433–460), but it is unclear how much weight he placed upon it. (His paper is somewhat lightweight and humorous, as his colleague and friend Robin Gandy reported; see Copeland, J., ‘The Essential Turing’, OUP, 2004, p. 433.) Turing said:

    ‘The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.’ (p.451)

    This passage follows his description of the ‘imitation game’ (i.e., the Turing Test), which is, of course, a surprisingly easy test with a very low pass threshold. Turing believed that his original question, “Can machines think?”, should be replaced by “Can machines pass the imitation game?” His strict behaviourism leads to the same question being asked of humans. Obviously humans can pass the test, so they too need not be said to ‘think’ any more than machines need be when they pass it. Turing also admits that many of his arguments are unconvincing and says ‘[t]hey should rather be described as “recitations tending to produce belief”’ (his emphasis, p. 460). This is all part of the process of changing the ‘use of words and general educated opinion’.

    In short, Turing is saying that prior to a machine passing the imitation game, there should be about fifty years during which language and educated opinion are altered, a period when we begin to ‘believe’ in machine intelligence, or, as you might put it, time for the landscape of affordances to be altered. Much of this process is done by simpler computer technology prior to the day when a machine can finally pass the test. Indeed, Turing’s original question has been altered in the way he predicted by so-called AI, it being quite common, especially among the “educated”, to use ‘process(ing)’ instead of ‘think(ing)’. The fact that most AI researchers have now abandoned the Turing Test would not, I believe, have surprised Turing, because it no longer needs to be passed. The big crucial experiment does not have to be performed because our language and educated opinion have been so altered that the “science” of AI has been normalised.

    I am not as sanguine as you that ‘homogenizing of language won’t homogenize thought.’ Turing is not the only one to set machines easy goals and then ask for the goalposts to be progressively widened. So-called AI appears to advance quickest when it can close domains, i.e., when it can simplify, control and limit a domain or environment. Closing domains widens the goalposts because it makes ‘problems’ tractable. Language is just one more problem AI seeks to solve by closing its domain, one more statistical problem that needs taming. Language has nothing to do with ‘thought’, any more than Deep Blue had anything to do with chess, or visual recognition software has to do with seeing and perception. AI does not just homogenize language and thought, it reduces them, along with everything else, to the same process. I think it is too late, as you say, to ‘worry about who is designing linguistic affordances and to what ends, and we should worry about the effects of their broad rollout across the world.’ There are attempts to regulate AI, but so long as we believe in the technological imperative there is not much that can be done.


    • Neil Levy says:

      Mary Midgley (and others) often accused evolutionary biologists of confusion for using personal-level talk for genes (for an example from a slightly different domain, I recently heard Neil Ferguson say something like “the virus just wants to replicate; it doesn’t care whether the people it infects live or die”). She seemed to think that they would be misled by the language. But I haven’t seen any evidence that that’s occurred. Taking the intentional stance to a virus or a gene is useful. So though I think you’re right about how usage is changing, I’m not sure it’s a change that we should worry about all that much.

      • Keith Tayler says:

        Language is constantly changing and in most cases, as Wittgenstein said, ‘language can take care of itself’. However, there are obviously cases where language has not been able to look after itself. Midgley was correct to identify the use of ‘meme’ as a case of being misled by language. Following the tradition of associating the spread of ideas, religion, cultures, etc. with disease and viruses, Dawkins, Dennett, etc., used ‘meme’ as some kind of ‘information gene’. If it had remained a rhetorical device, analogy, or ‘taproom banter’, as indeed Dawkins ‘sometimes’ claims was his intention, it may have been pretty harmless. However, there is now a whole (pseudo) science of memeology, which Dawkins ‘sometimes’ supports when the mood takes him. As we know from an earlier incarnation of a ‘language’ that was similarly underpinned by pseudo-evolutionary theory, it can be mobilised against minority groups. At least Turing said his arguments were unconvincing but were “recitations tending to produce belief”. He immediately goes on to say that no harm is done so long as what is fact and what is conjecture is clearly stated. He should have known that this distinction is not easy and can be easily distorted.

        I could go on to cite numerous other examples of how language has become distorted or has been deliberately distorted. (Of course, there are distinctions to be made between language and ideas, belief, knowledge, thought, taproom banter, etc., but there is certainly no space for those here.) These are often profound changes that alter the course of human existence, and I am inclined to agree with Wittgenstein when he used the river analogy to describe these profound shifts. At the time of writing, he believed that logic could withstand these changes. However, much of his extensive writing on ‘machine intelligence’ (much of it after the river analogy) would push logic and mathematics into the shifting sands. In my own research, I have traced the progression of ‘mathematical experimentation’ and ‘black box mathematics’ which, as Wittgenstein feared, would push mathematics towards becoming an empirical science. Similarly, changes in the uses and abuses of probability and statistics by so-called AI are continuing to alter our language and thought.

        I hope the above is not so truncated that it is totally unintelligible – half will do.

  • Jeff Lerner says:

    He states that he doesn’t use Grammarly. Perhaps he should — it might have alerted him to a missing word in his final paragraph.

    ” … but we may reasons to worry that it will limit intellectual diversity.”

    • Neil Levy says:

      D’oh! I would have thought Word would have picked that up. Arguably, it’s a collective action problem. I may indeed be better off if I use Grammarly. A loss of diversity isn’t an individual level harm.

  • Miroslav Imbrisevic says:

    The name of the programme is telling: ‘Grammarly’. If people felt more confident about grammar, i.e. understanding grammar, they would be more confident in overriding “Grammarly’s” suggestions. I am a grammar freak, but I still consult others when in doubt: Fowler. I then make up my mind whether to go with Fowler or ignore him. But if you lack grammatical knowledge, then you just have to ‘obey’ the programme. This leads to the infantilisation of the language user. A solid grounding in grammar is a good thing. Why would I let a piece of software tell me how to compose my sentences? It just doesn’t ‘know’ enough about style and literary devices. I will listen to another writer, but not to a dead piece of software.

