
Add an LLM policy for rust-lang/rust#1040

Open
jyn514 wants to merge 49 commits into rust-lang:master from jyn514:llm-policy

Conversation

@jyn514
Member

@jyn514 jyn514 commented Apr 17, 2026


Summary

This document establishes a policy for how LLMs can be used when contributing to rust-lang/rust. Subtrees, submodules, and dependencies from crates.io are not in scope. Other repositories in the rust-lang organization are not in scope.

This policy is intended to live in Forge as a living document, not as a dead RFC. It will be linked from CONTRIBUTING.md in rust-lang/rust as well as from the rustc- and std-dev-guides.

Moderation guidelines

This PR is preceded by an enormous amount of discussion on Zulip. Almost every conceivable angle has been discussed to death; there have been upwards of 3000 messages, not even counting discussion on GitHub. We initially doubted whether we could reach consensus at all.

Therefore, we ask to bound the scope of this PR specifically to the policy itself. In particular, we mark several topics as out of scope below. We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

So, the following are considered off topic for this PR specifically:

  • Long-term social or economic impact of LLMs
  • The environmental impact of LLMs
  • Anything to do with the copyright status of LLM output. We have gotten initial confirmation from US lawyers that LLM output likely doesn't cause issues under US law. We still need to confirm this with EU lawyers and in other parts of the world.
  • Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules. For an extended rationale, please see this comment.

Feedback guidelines

We are aware that parts of this policy will make some people very unhappy. As you are reading, we ask you to consider the following.

  • Can you think of a concrete improvement to the policy that addresses your concern? Consider:
    • Whether your change will make the policy harder to moderate
    • Whether your change will make it harder to come to a consensus
  • Does your concern need to be addressed before merging or can it be addressed in a follow-up?
    • Keep in mind the cost of not creating a policy.

If your concern is for yourself or for your team

  • What are the specific parts of your workflow that will be disrupted?
    • In particular we are only interested in workflows involving rust-lang/rust. Other repositories are not affected by this policy and are therefore not in scope.
  • Can you live with the disruption? Is it worth blocking the policy over?

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

Motivation

  • Many people find LLM-generated code and writing deeply unpleasant to read or review.
  • Many people find LLMs to be a significant aid to learning and discovery.
  • rust-lang/rust is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
    • Having a policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is not intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs. It is only intended to set out the future policy of rust-lang/rust itself.

Ethical issues

See this thread.

Drawbacks

  • This bans some valid usages of LLMs. We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
  • This intentionally does not address the moral, social, and environmental impacts of LLMs. These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
  • This intentionally does not attempt to set a project-wide policy. We have attempted to come to a consensus for upwards of a month without significant progress. We are cutting our losses so we can have something rather than ad hoc moderation decisions.
  • This intentionally does not apply to subtrees of rust-lang/rust. We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

Rationale and alternatives

  • We could create a project-wide policy, rather than scoping it to rust-lang/rust. This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date. It has the disadvantage that we think it is nigh-impossible to get everyone to agree. There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
  • We could have different standards for people in the Rust project than for new contributors. That would make moderation much easier, and allow us to experiment with additional LLM use. However, it reinforces existing power structures, creates more of a gap between authors and reviewers, and feels "unfriendly" to new contributors.
  • We could have a more lenient policy that allows "responsible and appropriate" use of LLMs. This raises the question of what "responsible and appropriate" means. The usual suggestion is "self-review, and judging the change by the same standard as any other change"; but this neglects the reputational and social harm of work that "feels" LLM generated. It also makes our moderation policy much harder to understand, and increases the likelihood of re-litigating each moderation decision.
  • We could have a more strict policy that removes the threshold of originality condition. This has the advantage that our policy becomes easier to moderate and understand. It has the disadvantage that it becomes easy for people to intend to follow the policy, but be put in a position where their only choices are to either discard the PR altogether, rewrite it from scratch, or tell "white lies" about whether an LLM was involved.
  • We could have a more strict policy that bans LLMs altogether. It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.
  • We could have no policy at all. This avoids banning valid use cases; avoids implicitly legitimizing the use of LLMs by setting a policy; and saves all of us a great deal of time and effort. However, it greatly increases the baseline level of distrust within the project; makes review assignment even more of a dice roll than it is already; wastes a great deal of contributor and moderator time dealing with LLM-authored PRs on a case-by-case basis; and gives no guidance for people who really do want to use an LLM in a responsible way.

Prior art

This prior art section is taken almost entirely from Jane Lusby's summary of her research, although we have taken the liberty of moving the Rust project's prior art to the top. We thank her for her help.

Rust

Other organizations

These are organized along a spectrum of AI friendliness, where top is least friendly and bottom is most friendly.

  • full ban
    • postmarketOS
      • also explicitly bans encouraging others to use AI for solving problems related to postmarketOS
      • multi-point, ethics-based rationale with citations included
    • zig
      • philosophical, cites Profession (novella)
      • rooted in concerns around the construction and origins of original thought
    • servo
      • more pragmatic; directly lists concerns around AI, fairly concise
    • qemu
      • pragmatic, focuses on copyright and licensing concerns
      • explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it at all
    • forgejo
      • bans AI for review, code, documentation, and communication
      • mentions "legal uncertainties" as a motivating factor
      • explicitly excludes machine translation
  • allowed with supervision, human is ultimately responsible
    • scipy
      • strict attribution policy including name of model
    • llvm
    • blender
    • linux kernel
      • quite concise but otherwise seems the same as many in this category
    • mesa
      • framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies that authors must understand the code they contribute. Seems to leave room for partial understanding from new contributors.

        Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.

    • firefox
    • ghostty
      • pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
    • fedora
      • clearly inspired, and is cited by, many of the above, but is framed more pro-AI than the derived policies tend to be
  • curl
    • does not explicitly require that humans understand contributions; otherwise the policy is similar to those above
  • linux foundation
    • encourages usage, focuses on legal liability, mentions that tooling exists to help automate managing legal liability, does not mention specific tools
  • In progress

Unresolved questions

See the "Moderation guidelines" and "Drawbacks" section for a list of topics that are out of scope.

Rendered

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Apr 17, 2026
@rustbot
Collaborator

rustbot commented Apr 17, 2026

r? @jieyouxu

rustbot has assigned @jieyouxu.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

Why was this reviewer chosen?

The reviewer was selected based on:

  • Fallback group: @Mark-Simulacrum, internal-sites
  • @Mark-Simulacrum, internal-sites expanded to Mark-Simulacrum, Urgau, ehuss, jieyouxu
  • Random selection from Mark-Simulacrum, Urgau, ehuss, jieyouxu

@jyn514
Member Author

jyn514 commented Apr 17, 2026

@rustbot label T-libs T-compiler T-rustdoc T-bootstrap

@rustbot rustbot added T-bootstrap Team: Bootstrap T-compiler Team: Compiler T-libs Team: Library / libs T-rustdoc Team: rustdoc labels Apr 17, 2026
Comment thread src/policies/llm-usage.md Outdated
## Summary
[summary]: #summary

This document establishes a policy for how LLMs can be used when contributing to `rust-lang/rust`.
Subtrees, submodules, and dependencies from crates.io are not in scope.
Other repositories in the `rust-lang` organization are not in scope.

This policy is intended to live in [Forge](https://forge.rust-lang.org/) as a living document, not as a dead RFC.
It will be linked from `CONTRIBUTING.md` in rust-lang/rust as well as from the rustc- and std-dev-guides.

## Moderation guidelines

This PR is preceded by [an enormous amount of discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/channel/588130-project-llm-policy).
Almost every conceivable angle has been discussed to death;
there have been upwards of 3000 messages, not even counting discussion on GitHub.
We initially doubted whether we could reach consensus at all.

Therefore, we ask to bound the scope of this PR specifically to the policy itself.
In particular, we mark several topics as out of scope below.
We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

No comment on this PR may mention the following topics:

- Long-term social or economic impact of LLMs
- The environmental impact of LLMs
- Anything to do with the copyright status of LLM output
- Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules.

## Feedback guidelines

We are aware that parts of this policy will make some people very unhappy.
As you are reading, we ask you to consider the following.

- Can you think of a *concrete* improvement to the policy that addresses your concern? Consider:
  - Whether your change will make the policy harder to moderate
  - Whether your change will make it harder to come to a consensus
- Does your concern need to be addressed before merging or can it be addressed in a follow-up?
  - Keep in mind the cost of *not* creating a policy.

### If your concern is for yourself or for your team
- What are the *specific* parts of your workflow that will be disrupted?
  - In particular we are *only* interested in workflows involving `rust-lang/rust`.
    Other repositories are not affected by this policy and are therefore not in scope.
- Can you live with the disruption? Is it worth blocking the policy over?

---

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

## Motivation
[motivation]: #motivation

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- `rust-lang/rust` is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
  - Having *a* policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is *not* intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs.
It is only intended to set out the future policy of `rust-lang/rust` itself.

## Drawbacks
[drawbacks]: #drawbacks

- This bans some valid usages of LLMs.
  We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
- This intentionally does not address the moral, social, and environmental impacts of LLMs.
  These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
- This intentionally does not attempt to set a project-wide policy.
  We have attempted to come to a consensus for upwards of a month without significant progress.
  We are cutting our losses so we can have *something* rather than ad hoc moderation decisions.
- This intentionally does not apply to subtrees of rust-lang/rust.
  We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

## Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives

- We could create a project-wide policy, rather than scoping it to `rust-lang/rust`.
  This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date.
  It has the disadvantage that we think it is nigh-impossible to get everyone to agree.
  There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
- We could have a more strict policy that removes the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html) condition.
  This has the advantage that our policy becomes easier to moderate and understand.
  It has the disadvantage that it becomes easy for people to intend to
  follow the policy, but be put in a position where their only choices
  are to either discard the PR altogether, rewrite it from scratch, or
  tell "white lies" about whether an LLM was involved.
- We could have a more strict policy that bans LLMs altogether.
  It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.

## Prior art
[prior-art]: #prior-art

This prior art section is taken almost entirely from [Jane Lusby's summary of her research](rust-lang/leadership-council#273 (comment)),
although we have taken the liberty of moving the Rust project's prior art to the top.
We thank her for her help.

### Rust
- [Moderation team's spam policy](https://github.com/rust-lang/moderation-team/blob/main/policies/spam.md/#fully-or-partially-automated-contribs)
- [Compiler team's "burdensome PRs" policy](rust-lang/compiler-team#893)
### Other organizations
These are organized along a spectrum of AI friendliness, where top is least friendly and bottom is most friendly.
- full ban
  - [postmarketOS](https://docs.postmarketos.org/policies-and-processes/development/ai-policy.html)
    - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS
    - multi-point, ethics-based rationale with citations included
  - [zig](https://ziglang.org/code-of-conduct/)
    - philosophical, cites [Profession (novella)](https://en.wikipedia.org/wiki/Profession_(novella))
    - rooted in concerns around the construction and origins of original thought
  - [servo](https://book.servo.org/contributing/getting-started.html#ai-contributions)
    - more pragmatic; directly lists concerns around AI, fairly concise
  - [qemu](https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-content-generators)
    - pragmatic, focuses on copyright and licensing concerns
    - explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it at all
- allowed with supervision, human is ultimately responsible
  - [scipy](https://github.com/scipy/scipy/pull/24583/changes)
    - strict attribution policy including name of model
  - [llvm](https://llvm.org/docs/AIToolPolicy.html)
  - [blender](https://devtalk.blender.org/t/ai-contributions-policy/44202)
  - [linux kernel](https://kernel.org/doc/html/next/process/coding-assistants.html)
    - quite concise but otherwise seems the same as many in this category
  - [mesa](https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/docs/submittingpatches.rst)
    - framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies that authors must understand the code they contribute. Seems to leave room for partial understanding from new contributors.
        > Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.
  - [forgejo](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md)
    - bans AI for review; does not explicitly require contributors to understand code generated by AI.
      One could interpret the "accountability for contribution lies with contributor even if AI is used" line as implying this requirement, though their version seems poorly worded in my opinion.
  - [firefox](https://firefox-source-docs.mozilla.org/contributing/ai-coding.html)
  - [ghostty](https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md)
    - pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
  - [fedora](https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/)
    - clearly inspired, and is cited by, many of the above, but is framed more pro-AI than the derived policies tend to be
- [curl](https://curl.se/dev/contribute.html#on-ai-use-in-curl)
  - does not explicitly require that humans understand contributions; otherwise the policy is similar to those above
- [linux foundation](https://www.linuxfoundation.org/legal/generative-ai)
  - encourages usage, focuses on legal liability, mentions that tooling exists to help automate managing legal liability, does not mention specific tools
- In progress
  - NixOS
    - NixOS/nixpkgs#410741

## Unresolved questions
[unresolved-questions]: #unresolved-questions

See the "Moderation guidelines" and "Drawbacks" section for a list of topics that are out of scope.
Member

@jieyouxu jieyouxu left a comment


I really like this version, and thanks a ton for working on it. Specifically:

  • It doesn't try to dump entire walls of text, which is unfortunately a good way to ensure nobody reads it. Instead, it gives concrete examples and a guiding rule of thumb for uncovered scenarios, and acknowledges upfront that it surely cannot be exhaustive.
  • I also like where it points out the nuance and recognizes the uncertainties.
  • I like that it covers both "producers" and "consumers" (with nuance that reviewers can also technically use LLMs in ways that are frustrating to the PR authors!)

I left a few suggestions / nits, but even without them this is still a very good start IMO.

(Will not leave an explicit approval until we establish wider consensus, which will likely take the form of a 4-team joint FCP.)


@ChayimFriedman2

The links to Zulip are project-private, FWIW.

@jyn514
Member Author

jyn514 commented Apr 17, 2026

The links to Zulip are project-private, FWIW.

I'm aware. This PR is targeted towards Rust project members moreso than the broad community.

Member

@davidtwco davidtwco left a comment


I'm happy with this as an initial policy for the rust-lang/rust repository.


@ChayimFriedman2

ChayimFriedman2 commented May 15, 2026

We could have a more strict policy that bans LLMs altogether. It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.

That would be for the best; I would vastly prefer a slower pace of development than the risks inherent to language model generated code in the compiler I use explicitly for its safety and reliability. The moderation guidelines are incredibly disappointing and declare the most important reasons to avoid LLM usage off-limits in order to justify a looser policy because some people will like it. Lack of ability to come to consensus does not mean these issues are not relevant to the policy; it means they are being explicitly disregarded for the purposes of pushing this policy forward.

There are clearly some people who disagree with you. So you want to force your vision onto them? As a community-organized project the only way to pass a policy is to have consensus on it, and that consensus will come from project members, not from the outside.

(As an aside please respect the note about inline comments in RFCs).

@jyn514
Member Author

jyn514 commented May 15, 2026

@AthenaEryma have you read the text of the policy? I think you'll find it's much closer to what you want than you might expect.

As a general note I want to say that the people who drafted, discussed, and approved this policy are exactly the same people in charge of merging code to rust-lang/rust. If you trust our judgement in creating a compiler that works safely and reliably, I ask you to also trust our judgement that we can allow LLMs in a way that doesn't affect that safety and reliability. I personally work at Ferrous Systems on IEC 61508 SIL-2 software, and we sell a modified version of rustc to our customers. I feel comfortable that the rules in our policy won't hurt the reliability of our or their software, and I sign my name to that fact every 3 months to TÜV.

@AthenaEryma

AthenaEryma commented May 15, 2026

@ChayimFriedman2 That is rather my point: there is not consensus. This is pretending at consensus by declaring half the points against one side off limits.

@jyn514 I did read the text of the policy. While it's better than I would have expected given the ideological restrictions imposed, it still permits LLM-generated code. Under quite limited circumstances, yet still more than I am comfortable with - and again, strict ideological limits must be placed on the discussion to even make that appear to be acceptable.

I trust humans to write and review human-generated code. I do not trust humans, long-term, to be able to manage the output of a machine specifically designed to produce good-looking output - their design and practical effect is to increase reliance on them while degrading the skill and judgement of the user. However, this is certainly into the list of unacceptable topics I'm not allowed to discuss.

Comment thread src/policies/llm-usage.md

@clarfonthey clarfonthey May 15, 2026


Threaded to push the discussion into threads.

I concede that getting out a policy soon is an incredible gain and find it outright disrespectful for people to stonewall the discussion on pedantic grounds. More examples show up every day of where we really need a policy to decide what to do at least on rust-lang/rust and the longer we spend bickering, the more frustrating things get.

On that note, I am tired of people pretending that the omission of ethics from the discussion is anything but a massive concession here to please people who are so childish they refuse to discuss the absolute atrocities committed by the people who make the tools they love.

Sorry, but grow up.

If you think that the benefits of these tools are justified, own it. Let us know you've used them so we can review them accurately. If you don't want to admit that you used them because you're worried people might judge you for making a potentially negative ethical decision, then are you conceding that the discussion should include ethics? Because you can't simultaneously request that ethics not be used to block your use of these tools, while requesting that people not judge you for using them because of ethics. Either ethics is on the table and we fully account for that, or ethics are not on the table and you have to deal with the consequences of that. You can't have both.

As I've alluded to in other policy discussions, there are only two valid solutions to people being judged for LLM usage:

  1. Don't use LLMs, so they don't get judged
  2. Use LLMs, and assert that the judgements are invalid

Don't just hide and pretend that people aren't allowed to have opinions of you. Because while we certainly can ask people to avoid discussing it in threads, we absolutely cannot prevent people from having opinions. There are no thought crimes in the rust-lang org. People can and will have feelings about what you do and it's even in our fucking COC:

Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.

You can't hide from opinions. So stop trying to influence policy because you're worried about others' opinions of you; own those opinions, and either call them out as wrong or accept them as right and change your actions.

I'm tired of people dancing around the issue of disclosure with vague concepts like "but people might dislike me uwu," grow the fuck up. We ask you to use these tools because they have unprecedented effects on software development and it is so much easier to just ask everyone to state whether they're used than come up with some vague guidelines for why it's okay to not mention it that people will have no idea how to follow. If you're worried about the judgements from people finding out that you're using it, that's a you problem. This isn't some vague "overstepping;" just like how it's not overstepping to ask people what version of Rust they're using when filing bugs, it's not overstepping to ask people to just tell us when they're using an LLM in an explicitly enumerated list of cases.

As always, various shortcomings of the policy can and should be improved. For example, clarifying where submodules stand relative to this policy is important and good. Nitpicking about when you have to disclose stuff is not good. Because I'm assuming that everyone discussing here is competent, I'm assuming that you're not stupid and know exactly what you're arguing when you ask to not disclose LLM usage. I'm asking you to either own what you're doing or give it up rather than weaken the effectiveness of the policy.

Similarly, for the folks saying that the omission of ethics is bad for this policy, I ask you to kindly leave that out from this discussion and take it into my RFC instead.

Thanks.



I understand you're frustrated, but I don't think this kind of language helps anyone.

About your point: you explicitly state that people are allowed to have their own opinion, but apparently they are not allowed, per you, to have an opinion on whether the discussion of ethics is relevant here.

And for the record, I don't use LLMs and I dislike them for many reasons. But I also think that setting a policy on the basis of the ethical consequences of them is not what we should do.


The missing link to said RFC: rust-lang/rfcs#3959


I'm not the one who decided to omit ethics from the discussion here; I'm respecting the opinions of the author on that, and conceding that it's fair to get a simpler policy out sooner in this particular case. There are allowed to be concurrent policy decisions and this policy's opinion is that focusing entirely on pragmatic labelling of LLM contributions, alongside calling out certain problematic usages, is the best way to get a policy out sooner.

Is that the best choice? No idea! But I respect the author and their wishes and have an explicit alternative that will almost certainly take longer to figure out. So, instead of bothering jyn about it, I'm inviting the discussion to happen there instead.

@ismay-tue

ismay-tue commented May 15, 2026

No comment on this PR may mention the following topics:

Long-term social or economic impact of LLMs
The environmental impact of LLMs
Anything to do with the copyright status of LLM output
Moral judgements about people who use LLMs

Why is that not allowed?


@ChayimFriedman2 ChayimFriedman2 left a comment


@AthenaEryma, opening this as an inline discussion:

That is rather my point: there is not consensus. This is pretending at consensus by declaring half the points against one side off limits.

It does seem that there is no consensus currently, and maybe there never will be. This policy also does not ban discussion of your concerns. But the consensus we strive for is consensus among project members. If, hypothetically, all members of the project agree this is an acceptable policy, it will be accepted. People outside the project are allowed to express their opinion, but they do not have voting rights, and certainly not a veto. As @jyn514 has put it: you trust us to develop Rust, so you will have to trust us with this as well.


Edit: Oops, sorry, didn't open an inline discussion. Please do so in your next comment.

@ismay-tue

you trust us to develop Rust, you will have to trust us with this as well.

If I trust your judgement on code, that does not imply I trust your judgement in other areas.

@ChayimFriedman2

you trust us to develop Rust, you will have to trust us with this as well.

If I trust your judgement on code, that does not imply I trust your judgement in other areas.

Then you're welcome to fork Rust and do with your fork what you want.

@rust-lang rust-lang locked as too heated and limited conversation to collaborators May 15, 2026
@jyn514
Member Author

jyn514 commented May 15, 2026

I'm about to close my browser for the day because I'm absolutely exhausted, but I would like to say one last thing.

I recognize that there is a power imbalance here. I recognize that people outside the project feel helpless to influence decisions because they don't have commit rights, and that you're all commenting here in violation of the moderation policy because you don't have any idea how else to make your voice heard. I'm sorry about that. I don't like the way that software is built and published, and that includes Rust.

I don't have any solutions here. All I can do is offer to listen. You can find my email on my website, and I promise if you email me there I will hear you out.

@jackh726
Member

jackh726 commented May 15, 2026

@rust-rfcbot resolve restrictive-but-not-time-limited

The additional section on experimental LLM usage resolves my concern. Although there is no "time limit" or other automatic expiry criteria for this policy, I also think we're going to be hard-pressed to find a better policy anytime soon. I am fairly confident that the discussion is not over after this policy is merged, and that it will only get more refined as we gain experience and evidence.


On some specific points raised since my last comments:

re. "committing the Project": yes, I do think it does in some sense, but I also think this is well-balanced enough with enough stakeholders that introducing extra process to this (by bringing in the LC) isn't going to help anything. I would encourage the LC to talk about this and give thoughts about any concrete ways to improve the policy - I'm sure jyn would do a great job incorporating them.

re. the policy being long and hard to follow (Niko's comment): Yeah, I understand this. I think it is a bit long, and can be hard to follow. I also think that's just the state of the conversation right now and nuance we have to draw. I would love to find ways to summarize this better, give easier-to-grasp guidelines, etc. after this is merged (and I think that's okay).

re. experimental usage: I think we can figure out in practice whether the conditions/process here make sense or not. I personally would love for us to have a better intake system for LLM-generated code than "you need to find a reviewer first" - but practically, I just don't know how to make that happen right now, both due to limited review capacity and the technical logistics of it. I think the condition of "no soundness-critical code" is largely left to the discretion of the reviewer here - but I think the guidelines are good. It'll help, imo, that the changes should be talked about ahead of time with the reviewer.

@nikomatsakis
Contributor

nikomatsakis commented May 15, 2026

I've been thinking over this policy. What I like about it is that it moves slowly and carefully, leaving room for LLM-authored content while making the default negative.

However, I think what really bothers me about it is that it signals such low trust in our fellow reviewers. I don't like these complex lists of rules about when you can and can't use an LLM, what kind of code can and can't be written with it, etc. I think if we are going to allow some folks to review LLM-authored content, let's just do it, and assume that we can all talk to one another if we find the standard of review doesn't seem high enough.

I spent a few minutes that I should've been spending preparing for RustWeek banging out a draft of an alternative framing of this policy that I think captures its spirit in a higher-level, simpler way:

https://hedgedoc.nikomatsakis.com/s/uQAxCxrrS

I don't know how I feel 100% about that policy as I wrote it, but I think I could live with it.

Clarification: I don't know what the "details" of the experimentation policy should be for how LLM-content gets reviewed. I just think that there should be a group of people who are experimenting with the best option. It might be that initially unsolicited LLM-authored PRs are closed and the author is referred to Zulip. It might be that they are assigned someone from a separate pool. I don't think we have to overspecify it here.

@jyn514
Member Author

jyn514 commented May 16, 2026

No comment on this PR may mention the following topics:

Why is that not allowed?

Because we've already talked about it over and over and over and over. Believe me, every point about those that's been raised on this PR and on Fedi has been already discussed at length and in exhausting detail. We came to the conclusion that the people with ethical concerns will not be satisfied by anything less than a full ban, and we do not have a consensus to pass a full ban. So reiterating the concerns just makes people mad at each other without helping anything.

I've seen the sentiment a few times "I would rather the Rust project lose people than start using LLMs". I don't agree with it but I can understand it. I ask you this though: Would you rather have no Rust project than a Rust project that uses LLMs? Would you rather have no policy at all?

@rust-lang rust-lang unlocked this conversation May 16, 2026
@dballesg

"people with ethical concerns will not be satisfied by anything less than a full ban"

Obvious. Those ethical concerns come from people who have strong ethical values and share deep concerns about the inequality created and amplified by these "technologies" worldwide.

Not talking about the elephant in the room doesn't make it disappear.

I would prefer Rust to disappear rather than allow non-working, non-functional code, which will plague it with thousands of bugs, to be committed into it.

BTW, wasn't one of the reasons to create Rust to have a safe, almost bugless programming language? 🤔

Whoever says LLMs generate working code is lying, and they know it. So: full ban or nothing.

@anotak

anotak commented May 16, 2026

Hi, longtime rust user here, not frequently interacting with the development but i have read from time to time over the years. I read the LLM policy and the feedback guidelines.

I think the very fact that it was felt necessary to have the moderation policy in place for this discussion should give serious pause on allowing any LLM contributions at all, even with the caveats in the policy. Putting aside all the actual banned topics, the fact that this comes with so much baggage creates significant social friction on its own. That friction will harm the larger Rust community. The fact that folks who have never interacted with Rust development are here and upset should indicate how deeply serious this is for the stability of the community. I think that friction will be much more costly than the benefits of allowing LLM-authored code, even with the caveats in place in the current PR.

The decisions here have a top-down effect on the larger culture of Rust. They should not be viewed in isolation of just the language itself. The language and compiler development team sets a lot of the cultural norms.

@ismay-tue

No comment on this PR may mention the following topics:

Why is that not allowed?

Because we've already talked about it over and over and over and over. Believe me, every point about those that's been raised on this PR and on Fedi has been already discussed at length and in exhausting detail. We came to the conclusion that the people with ethical concerns will not be satisfied by anything less than a full ban, and we do not have a consensus to pass a full ban. So reiterating the concerns just makes people mad at each other without helping anything.

I've seen the sentiment a few times "I would rather the Rust project lose people than start using LLMs". I don't agree with it but I can understand it. I ask you this though: Would you rather have no Rust project than a Rust project that uses LLMs? Would you rather have no policy at all?

Thank you for explaining, because I was genuinely interested. So if I understand you correctly, people had strong opposing views wrt. LLMs and the debate got rather intense. That can be difficult.

I would personally recommend not shying away from including all aspects of this debate in your decision-making process, though. I think the only way you'll get people to understand each other, and be able to work with each other, is to make everyone involved feel (and be) heard. That is a difficult process to get right.

If such a process is causing conflict, then I would recommend involving people who are experienced at guiding such processes. What you are dealing with is not just an engineering problem, but a social process as well. So rather than whittling it down to a pure engineering issue, engage with the social aspect too. If that is not going well, then please involve people experienced in doing so, because otherwise this debate will keep affecting your community.

I'll bow out of this issue, but I hope that helps, and I sincerely hope that you'll be able to continue this process in a productive, inclusive manner. I hope you didn't mind me commenting, but I've personally seen a lot of organisations affected by engineers who are uncomfortable with tackling difficult social dynamics. That is understandable, but know that there are entire professions dedicated to dealing with these types of situations in a productive, professional manner (e.g. organisational psychologists).

@jyn514
Member Author

jyn514 commented May 16, 2026

I would personally recommend to not shy away from including all aspects of this debate in your decision making process though. I think that the only way you'll get people to understand eachother, and be able to work with eachother, is to make everyone involved feel (and be) heard.

The problem here is who counts as "everyone". I think everyone in the project does feel at least heard at this point, even if they don't like the policy or don't feel represented. But coming to a consensus between everyone in the world is simply not possible. At some point you have to pick and choose who gets a say.

We chose (we have chosen, we will continue to choose) people in the project as the ones who get a say. We want to hear input from the outside community to make sure we aren't isolating ourselves, but that's not the same as letting them make decisions for us. @alice-i-cecile says this very well in this comment.

The moderation guidelines were written with project members in mind, and I checked with anti-AI people like @jdonszelmann beforehand that they were ok with introducing those restrictions. Because it was discussed beforehand, I didn't think to include a rationale for it. I have now done so, you can read it here.

@TheLastZombie

Would you rather have no Rust project than a Rust project that uses LLMs?

I mean, I'd kinda prefer the former. If the only way for a project to exist was by using an inherently harmful and fascist technology, it's worth taking a step back and asking whether the pros really still outweigh the (innumerable) cons.

@nnethercote

Some thoughts I have had while reading these comments.

  • The Rust project currently has almost no restrictions on LLM use at all.
  • This policy would put significant restrictions on LLM use.
  • The Rust project has members with a wide range of opinions on LLM usage, from very negative to very positive.
  • The Rust project has a decision making process that is heavy on consensus. There is no benevolent dictator.
  • "Politics is the art of the possible."

Comment thread src/policies/llm-usage.md
Comment on lines +198 to +201
### The meaning of "originally created"

This document uses the phrase "originally created" to mean "text that was generated by an LLM (and then possibly edited by a human)".
No amount of editing can change how it was originally created; how it was generated sets the initial style and it's very hard to change once it's set.
Contributor

@traviscross traviscross May 16, 2026


I notice that the policy does not specifically mention LLM-backed autocompletion anywhere. I suspect this may lead to confusion later. Editor autocompletions are now willing to fill in entire functions and files, but people seem to mentally bucket this in a different category.

Given the intent of the policy, I'd expect it might want to say something along the lines of:

This includes use of editor autocompletion when that feature is implemented using an LLM (check the documentation). Once you accept a nontrivial autocompletion, the work is "originally created" by an LLM and no amount of editing (other than deleting it entirely) can remove that.

I'm not sure where you'd want to put this.



Labels

I-council-nominated Nominated for discussion during a council meeting. needs-fcp This change is insta-stable, so needs a completed FCP to proceed. S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-bootstrap Team: Bootstrap T-compiler Team: Compiler T-libs Team: Library / libs T-rustdoc Team: rustdoc T-types Team: Types
