Comments are welcome and appreciated, especially from members of NRDLemmI. For fun here’s an (edited) ChatGPT TL;DR:

  • These guidelines try to balance the benefits of LLMs against negative impacts on communities.
  • Follow the rules of the communities and instances you participate in, including rules regarding LLM content.
  • It may be reasonable to seek clarification or leave a community, but don’t continue violating rules and don’t argue.
  • If you primarily share your own thoughts and use an LLM for editing or other assistance, most of this is unlikely to be an issue.
  • Sharing an unedited/unreviewed response solely generated by an LLM adds little to the conversation and is likely to be an issue.
  • Users employing LLMs are welcome on NRDLemmI, but if consistent rule violations occur on other instances, bans or other actions may be necessary.

Background

The landscape of LLMs and other AI tools is constantly evolving, and the technology will likely never be worse or more inaccessible than it is right now. Regardless of your personal opinions on these tools, they will certainly have an impact on NRDLemmI as well as the broader fediverse. As with many of the decisions I am making regarding how NRDLemmI will run, I hope to strike a balance between the benefits of LLMs and negative impacts the content LLMs generate can have on communities.

This approach will not be perfect. It will need to evolve alongside the tools and alongside the fediverse itself. The most important thing is that everyone here understand our approach and the reasoning behind it.

The rules here

NRDLemmI does not currently have a specific rule against posting content generated by AI/LLMs; however, there are two existing rules that are relevant to the decision-making process when addressing reports or complaints about such content.

  3. Don’t do things to adversely impact federation with other servers.
  6. Respect the rules of the communities in which you participate.

I may elaborate further on the reasoning behind these rules later. For now, it’s important to understand that these rules aim to strike a balance between free speech and the ability of our instance’s users to participate in the broader fediverse (as well as limiting legal or hosting-related consequences, which would impact both).

Rule 6 can be seen as a corollary to rule 3, as multiple rule violations within external communities could potentially lead to our instance being blocked from federating with those external instances.

The rules elsewhere

Regardless of what you are posting, it is essential to be mindful of and do your best to follow the rules of the community and instance you are participating in. This is particularly crucial when posting potentially inflammatory, self-promoting, or LLM-generated content. Specifically, when posting LLM content, be sure to check the sidebar of both the instance and the community for:

  • Specific rules regarding the use of LLMs
  • Rules against “low-effort” content or comments
  • Rules against spamming
  • Rules requiring citations

If there is a specific rule against the use of LLMs, the answer is straightforward: do not post LLM-generated content there. If the rule pertains to any of the other points mentioned, it is up to you to determine whether the mod/admin will view the LLM-generated content as a violation.

When you break the rules

Let’s suppose a mod/admin takes action against your post. Was the rule clear? If so, you earned the consequences, as you shouldn’t have expected to go unnoticed. If the rule was ambiguous or subject to interpretation, the mod/admin’s action indicates how they interpret the rule. In such cases, it is perfectly reasonable to:

  • Stop posting or commenting in the community.
  • Leave the community and/or instance.
  • Apologize.
  • Feel sad and/or angry.

In certain situations, it may also be reasonable to:

  • Seek clarification to better follow their rules in the future.
  • Share your interaction elsewhere, as long as it’s not intended/likely to cause brigading or other retributive acts.

It is never reasonable to:

  • Continue posting LLM content that is likely to violate their rules.
  • Argue with or harass the mod/admin.
  • Complain to the instance’s admins about the mod enforcing the community rules.
  • Complain to your admin about an admin on a different instance.

My interpretation of the “gray area”

If your post consists primarily or entirely of your original thoughts and you use an LLM only for editing, phrasing, grammar, or to reduce the level of detail, mods/admins are unlikely to have an issue with it. They likely won’t even be able to tell that an LLM was involved, just as they can’t tell when Grammarly or a built-in spelling/grammar checker was involved. The thoughts and knowledge remain your own or, at the very least, represent something you researched while writing your post. In this case, you might even be able to bend a rule against LLM content, since you’re sharing your own content with only assistance from an LLM.

However, if you instruct the LLM to “Write me a comment refuting this post: [post text],” the thinking and opinion belong to the LLM rather than you. Sharing a response you didn’t write adds little value to the conversation since you won’t be able to further engage. Additionally, longer LLM-generated posts, especially on narrow topics (where they likely don’t have much/up-to-date/good training material), often have a discernible “uncanny valley” quality and can be easily identified.

Contrast this with a situation where someone shares an article written by another person that refutes a post. If someone comments, saying, “Sarah Whatshername wrote an excellent response to this, where she mentioned that […some info/quotes/whatever…]. I think it’s worth reading before fully embracing this viewpoint,” they are appropriately crediting the author, highlighting relevant parts of the author’s opinion, and, if possible, providing a link to the source.

How to avoid sticky situations

If things you post are repeatedly reported as LLM-generated, it suggests that you may be leaning toward misusing these tools, and action may be necessary. If it becomes apparent that you lack expertise in the discussed topic (which should be evident to the mod of a community focused on that subject), action may also be required. However, if someone says, “I heard [X], is it true?” or approaches a topic they are unfamiliar with in a curious and constructive manner, it is less likely to warrant action even if an LLM is somehow involved.

In general, if someone wants to know what ChatGPT/LLaMA/Bard/whatever “thinks” about a post or how it would refute it, they will ask that tool themselves. Simply regurgitating its answer, particularly when you lack the expertise to assess its quality or accuracy, at best contributes little to the conversation.

When things get sticky on this instance or in communities I moderate

Users employing LLMs are welcome on this instance and in any of the communities I moderate, as long as human thinking remains the primary driver. If someone’s posts start receiving reports (especially if admins threaten to block my instance), I will review the user’s posts and comments and then engage them in a conversation covering the topics mentioned above. If, after that discussion, the user continues to violate community rules on other instances or refuses to adjust how they use these tools, they will be banned from the community/instance.