We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don’t always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, inequitably creating upfront cost burdens that authors who do not suffer from systemic discrimination may not have to incur.

Presented without comment.

  • Septimaeus@infosec.pub · 3 months ago

    I can entertain the classism argument if it is reframed as a choice, where the alternative is expanding the scope of what currently counts as plagiarism to include the degrees of ghost-authorship that privilege buys, since their argument hinges on the assumption that such ghost-authorship is acceptable.

    The ableism argument is the one I’ve grappled with the most from the standpoint of disability advocacy. Usually we first must ask whether the achievement in question is the proper measurement. In this case it is quite simply creative origin, which might be difficult to deconstruct further without reaching for the terribly abstract. Next comes the more complicated task of determining the threshold beyond which a simple modifier, like a sports handicap, is simply no longer sufficient, i.e. whether such differing abilities merit a separate category with unique standards. In this case, they provide several examples of cohorts with great enough support requirements that AI assistance might be the only option available for participation. Such differing ability would, I think, suggest the formation of a new category with differing standards as a beneficial compromise.

    The issue of systemic unfairness is a larger one, I think, than the matter of AI’s use can address. When we look for ways to mitigate systemic unfairness, the preferred approach is to relieve each disadvantage directly and surgically, accounting for the cumulative impedance and the ongoing support necessary to give affected writers a fighting chance. What is not preferred is to actually fight their battles for them, and that happens to be what the latest LLMs are capable of: robust human-like authorship with minimal prompting.

    Ultimately, I think the real solution to the issue of AI in the liberal arts will be to adapt our notion of what an essentially human achievement entails, given the capacity of current technology. For example, we no longer consider mathematical computation an essential human achievement, but rather the more abstract instrumentation of it. Similarly, handwriting is no longer a skill emphasized for any purpose other than personal note-taking, as with off-hand recall of vocabulary definitions and historical dates. What we will de-emphasize in response to this technology is yet to be seen, but I suspect it will not be creative originality itself.

    • David Gerard@awful.systemsM · edited · 3 months ago

      Given the context is that NaNoWriMo just took on a new AI-based sponsor who they’re promoting hard to their users, there isn’t really much justification to bend this far backwards to concoct an excuse for them.

      • Septimaeus@infosec.pub · edited · 3 months ago

        Oh, I was actually disagreeing from an educator perspective; I just entertained some of their arguments in case they were serious.

        Edit: apparently the whole post was bad faith drivel. I didn’t know anything about the site until now. Will delete comment.