49 commits
772edeb
Add an LLM policy for `rust-lang/rust`
jyn514 Apr 17, 2026
815da6e
address some of jieyouxu's comments
jyn514 Apr 17, 2026
17a35f4
revert extraneous change
jyn514 Apr 17, 2026
61e5e2c
address some more review comments
jyn514 Apr 18, 2026
8ee5ed4
more review comments
jyn514 Apr 18, 2026
7cd8c17
more wording
jyn514 Apr 18, 2026
2db7465
rewrite "trivial changes" section
jyn514 Apr 21, 2026
9b2b3c2
rewrite intro to 'Allowed with caveats'
jyn514 Apr 21, 2026
e3b1394
Be more specific in "Moderation policy"
jyn514 Apr 21, 2026
e3f2aec
Add explicit conditions for modification or removal
jyn514 Apr 21, 2026
864428f
mention that the policy is intentionally conservative
jyn514 Apr 21, 2026
593d538
extend "Penalties" section with sentencing guidelines
jyn514 Apr 21, 2026
75050a2
be more clear where the CoC is invoked
jyn514 Apr 23, 2026
b6a8662
minor edits; add "Motivation and guiding principles" section
jyn514 Apr 28, 2026
9a944f7
Relax and clarify moderation guidelines
jyn514 Apr 28, 2026
14956c3
Carve out a space for experimentation
jyn514 Apr 28, 2026
8fe7281
fix typo
jyn514 Apr 28, 2026
791e46f
recommend adversarial review from another LLM
jyn514 Apr 28, 2026
8520038
markdown formatting
jyn514 Apr 28, 2026
b14e8ca
more markdown formatting
jyn514 Apr 28, 2026
69b6dc1
Note that explicitly marking LLM content is ok
jyn514 May 13, 2026
d682475
Exempt t-security-response from a few requirements
jyn514 May 13, 2026
ea4e504
Make "solicited" even stricter
jyn514 May 13, 2026
7eeecbb
Carve out a space for experimentation
jyn514 May 13, 2026
cd9aecd
Revert "Exempt t-security-response from a few requirements"
jyn514 May 13, 2026
ee4f26c
address a few of TC's concerns
jyn514 May 14, 2026
7956574
address a few of Jack's concerns
jyn514 May 14, 2026
b88855a
remove the 'additional scrutiny' examples
jyn514 May 14, 2026
4305e14
move 'using an llm to discover bugs' to the caveats section, without …
jyn514 May 14, 2026
ab6f8a4
move LLM-authored code to its own section; add a zulip stream as policy
jyn514 May 14, 2026
d9d8238
add a section about staying on-topic
jyn514 May 14, 2026
83b9363
wording
jyn514 May 14, 2026
9efffad
relax moderation policy guidelines
jyn514 May 14, 2026
24f236c
ban llms from writing safety comments
jyn514 May 15, 2026
f85aac6
Group some "personal use" bullets together
jyn514 May 15, 2026
adfc5e2
remove "by a team" phrase
jyn514 May 15, 2026
014cf85
clarify wording on harrassment policy
jyn514 May 15, 2026
db8fdae
t-lang didn't get a vote, so narrow the policy not to apply to them
jyn514 May 15, 2026
196cf63
clarify wording on modification policy
jyn514 May 15, 2026
6374d57
clarify wording on "be honest" policy
jyn514 May 15, 2026
8a1ce25
s/authored/created/g
jyn514 May 15, 2026
742d9f4
spruce up "spirit of the law" section
jyn514 May 15, 2026
235432b
wording
jyn514 May 16, 2026
9396306
further clarify moderation section
jyn514 May 16, 2026
33b1407
Add a "scope" section
jyn514 May 16, 2026
08a6b17
add back "better, not faster" quote
jyn514 May 16, 2026
3740dc5
wording
jyn514 May 16, 2026
fb37c69
delete confusing review bot sentence
jyn514 May 16, 2026
8444f54
add link to CoC
jyn514 May 16, 2026
1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -87,6 +87,7 @@
- [Project groups](./governance/project-groups.md)
- [Policies](./policies/index.md)
- [Crate ownership policy](./policies/crate-ownership.md)
- [LLM usage policy](./policies/llm-usage.md)
- [Infrastructure](./infra/index.md)
- [Other Installation Methods](./infra/other-installation-methods.md)
- [Archive of Rust Stable Standalone Installers](./infra/archive-stable-version-installers.md)
2 changes: 2 additions & 0 deletions src/how-to-start-contributing.md
@@ -129,6 +129,8 @@ To achieve this goal, we want to build trust and respect of each other's time an
- Please respect the reviewers' time: allow some days between reviews, only ask for reviews when your code compiles and tests pass, or give an explanation for why you are asking for a review at that stage (you can keep them in draft state until they're ready for review)
- Try to keep comments concise, don't worry about a perfect written communication. Strive for clarity and being to the point

See also our [LLM usage policy](./policies/llm-usage.md).

[^1]: Free-Open Source Project, see: https://en.wikipedia.org/wiki/Free_and_open-source_software

### Different kinds of contributions
224 changes: 224 additions & 0 deletions src/policies/llm-usage.md
oli-obk marked this conversation as resolved.

@clarfonthey (May 15, 2026):

Threaded to push the discussion into threads.

I concede that getting out a policy soon is an incredible gain and find it outright disrespectful for people to stonewall the discussion on pedantic grounds. More examples show up every day of where we really need a policy to decide what to do at least on rust-lang/rust and the longer we spend bickering, the more frustrating things get.

On that note, I am tired of people pretending that the omission of ethics from the discussion is anything but a massive concession here to please people who are so childish they refuse to discuss the absolute atrocities committed by the people who make the tools they love.

Sorry, but grow up.

If you think that the benefits of these tools is justified, own it. Let us know you've used them so we can review them accurately. If you don't want to admit that you used them because you're worried people might judge you for making a potentially negative ethical decision, then are you conceding that the discussion should include ethics? Because you can't simultaneously request that ethics not be used to block your use of these tools, while requesting that people not judge you for using them because of ethics. Either ethics is on the table and we fully account for that, or ethics are not on the table and you have to deal with the consequences of that. You can't have both.

As I've alluded to in other policy discussions, there are only two valid solutions to people being judged for LLM usage:

  1. Don't use LLMs, so they don't get judged
  2. Use LLMs, and assert that the judgements are invalid

Don't just hide and pretend that people aren't allowed to have opinions of you. Because while we certainly can ask people to avoid discussing it in threads, we absolutely cannot prevent people from having opinions. There are no thought crimes in the rust-lang org. People can and will have feelings about what you do and it's even in our fucking COC:

> Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.

You can't hide from opinions. So stop trying to influence policy because you're worried about others' opinions of you; own those opinions, and either call them out as wrong or accept them as right and change your actions.

I'm tired of people dancing around the issue of disclosure with vague concepts like "but people might dislike me uwu," grow the fuck up. We ask you to use these tools because they have unprecedented effects on software development and it is so much easier to just ask everyone to state whether they're used than come up with some vague guidelines for why it's okay to not mention it that people will have no idea how to follow. If you're worried about the judgements from people finding out that you're using it, that's a you problem. This isn't some vague "overstepping;" just like how it's not overstepping to ask people what version of Rust they're using when filing bugs, it's not overstepping to ask people to just tell us when they're using an LLM in an explicitly numerated list of cases.

As always, various shortcomings of the policy can and should be improved. For example, clarifying where submodules stand relative to this policy is important and good. Nitpicking about when you have to disclose stuff is not good. Because I'm assuming that everyone discussing here is competent, I'm assuming that you're not stupid and know exactly what you're arguing when you ask to not disclose LLM usage. I'm asking you to either own what you're doing or give it up rather than weaken the effectiveness of the policy.

Similarly, for the folks saying that the omission of ethics is bad for this policy, I ask you to kindly leave that out from this discussion and take it into my RFC instead.

Thanks.

Reply:

I understand you're frustrated, but I don't think this kind of language helps anyone.

About your point: you explicitly state that people are allowed to have their own opinion, but apparently they are not allowed, per you, to have an opinion on whether the discussion of ethics is relevant here.

And for the record, I don't use LLMs and I dislike them for many reasons. But I also think that setting a policy on the basis of the ethical consequences of them is not what we should do.

Reply:

The missing link to said RFC: rust-lang/rfcs#3959

Reply:

I'm not the one who decided to omit ethics from the discussion here; I'm respecting the opinions of the author on that, and conceding that it's fair to get a simpler policy out sooner in this particular case. There are allowed to be concurrent policy decisions and this policy's opinion is that focusing entirely on pragmatic labelling of LLM contributions, alongside calling out certain problematic usages, is the best way to get a policy out sooner.

Is that the best choice? No idea! But I respect the author and their wishes and have an explicit alternative that will almost certainly take longer to figure out. So, instead of bothering jyn about it, I'm inviting the discussion to happen there instead.

@@ -0,0 +1,224 @@
## LLM Usage Policy

For additional information about the policy itself, see [the appendix](#appendix).

### Overview
nikomatsakis marked this conversation as resolved.

Using LLMs while working on `rust-lang/rust` is conditionally allowed, when done with care.
LLMs are not a substitute for thought,
and we do not allow them to be used in ways that risk losing our shared social and technical understanding of the project,
nor in ways that hurt our goals of creating a strong community.

The policy's guidelines are roughly as follows:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.
jyn514 marked this conversation as resolved.

> LLMs work best when used as a tool to write *better*, not *faster*.

> We carve out a space for "experimentation" to inform future revisions to this policy.

### Rules
#### Legend
- ✅ Allowed
- ❌ Banned
- ⚠️ Allowed with caveats. Must disclose that an LLM was used.
jyn514 marked this conversation as resolved.
- ℹ️ Adds detail to the policy. These bullets are normative.

#### ✅ Allowed
The following are allowed.
- Any use of an LLM where you are the only one who sees the output. For example:
- Asking an LLM questions about an existing codebase.
- Asking an LLM to summarize comments on an issue or PR.
- ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.
- Asking an LLM to privately review your code or writing.
- ℹ️ This does not apply to public comments. See "review bots" under ⚠️ below.
- Writing dev-tools for your own personal use using an LLM.
- Using an LLM to generate possible solutions to an issue, learning from them, and then writing something from scratch in your own style.
- Using an LLM in the creation of experimental code changes that are not meant to be reviewed and will never be merged but must live as draft PRs on `rust-lang/rust` for tooling reasons, such as to run crater or perf.

traviscross marked this conversation as resolved.
traviscross marked this conversation as resolved.
jyn514 marked this conversation as resolved.
#### ❌ Banned
The following are banned.
- Comments from a personal user account that are originally created by an LLM.
- ℹ️ This also applies to issue bodies and PR descriptions.
traviscross marked this conversation as resolved.
- ℹ️ This does not apply if the LLM content is clearly quoted and marked; in that case you may post it.
However, the comment must stand on its own even without the LLM content; it's not a substitute for your own words.
- ℹ️ See also "machine-translation" in ⚠️ below.
- ℹ️ See also "Scope" in the appendix below.
- Documentation that is originally created by an LLM.
- ℹ️ This includes non-trivial source comments, such as doc-comments, safety comments, or multiple paragraphs of non-doc-comments.
- ℹ️ This includes compiler diagnostics.
jyn514 marked this conversation as resolved.
LLMs are conditionally allowed to assist with the *logic* surrounding a diagnostic (see "code changes" under ⚠️ below),
but they must not be used to create the message itself.
- Treating an LLM review as a sufficient condition to merge or reject a change.
LLM reviews, if enabled, **must** be advisory-only.
Teams can have a policy that code can be merged without review, and they can have a policy that code must be reviewed by at least one person,
but they may not have a policy that an LLM review substitutes for a human review.
- ℹ️ See "review bots" in ⚠️ below.
- ℹ️ An LLM review does not substitute for self-review. Authors are expected to review their own code before posting and after each change.

#### ⚠️ Allowed with caveats
The following are decided on a case-by-case basis.
In general, new contributors will be scrutinized more heavily than existing contributors,
since they haven't yet established trust with their reviewers.

- Using machine-translation (e.g. Google Translate) from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- "Trivial" code changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html).
- ℹ️ Be cautious about PRs that consist solely of trivial changes.
See also [the compiler team's typo fix policy](https://rustc-dev-guide.rust-lang.org/contributing.html#writing-documentation:~:text=Please%20notice%20that%20we%20don%E2%80%99t%20accept%20typography%2Fspellcheck%20fixes%20to%20internal%20documentation).
- Using an LLM to discover bugs, as long as you personally verify the bug.
Please refer to [our guidelines for fuzzers](https://rustc-dev-guide.rust-lang.org/fuzzing.html#guidelines).
- ℹ️ This also includes reviewers who use LLMs to discover flaws in unmerged code.
- ℹ️ See also "Comments from a personal user account" under ❌ above.
- Using an LLM as a "review bot" for PRs.
@kennytm (Member, Apr 19, 2026):

Maybe I'm OOTL but I find this section situationally strange — where did the "review bot" come from?

IME AI-powered review bots that directly participates in PR discussions (esp the "app" ones) are configured by repository owner, but AFAIK r-l/r (which this policy applies solely to) did not have any such bots. I highly doubt a contributor will bring in their own review bot in public. So practically this has to be either

  • someone requested a review from Copilot, which may be we can opt-out?
  • the reviewer outsourced the review work to a coding agent, which is already covered in the sections
  • at least one team actually considered enabling such review bots in the future? as this is linked previously in that "Teams can have a policy that code can be merged without review" part, but I don't think this will ever happen given the stance of this policy

Reply (Member):

> I highly doubt a contributor will bring in their own review bot in public.

I wish it worked like that :( People can just trigger GitHub copilot, or I suppose any other review bot, and let it comment on a r-l/r PR. Some people don't even do it willingly, but GH does it automatically for them, as GH copilot has a tendency to re-enable itself even if you sometimes disable it.

It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

@xtqqczze (Apr 19, 2026):

I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Reply (Member):

> It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

Yeah currently disabling review is a personal/license-owner setting, it is not possible to configure from the repository PoV 😞 but I think this is something that we may bring up to GitHub.

It may be possible to use content exclusion to blind Copilot, but I'm not sure if this hack is going to produce any overreaching effects (e.g. affecting private IDE usage too).

@apiraino (Contributor, Apr 20, 2026):

> someone requested a review from Copilot, which may be we can opt-out?

I think this is exactly the point of pointing that out in our policy. Some people trigger a "[at]copilot review" in our repos without asking us for consent. This is rude behaviour and we don't want that.

And, yes, as you point out opting out of this "trigger" is currently only a project-wide setting, not at a repository level so we are looking with GitHub if they could make this setting more fine-grained (here on Zulip a discussion with the Infra team)

Reply (Member, Author):

@clarfonthey I understand you are frustrated but it doesn't help to take it out on the people we're working with. Can I ask you to take a break from commenting on this RFC for a bit? Feel free to DM me with any concerns you have about the policy itself.

Reply:

yeah, you're right; I deleted the comment

Reply:

> I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Unsolicited review bots are becoming an increasing problem; for example: https://web.archive.org/web/20260426133344/https://github.com/rust-lang/rust-clippy/issues/16893#issuecomment-4321880160

Reply:

Thank you for flagging xtqqczze - the same bot has commented in 6+ issues on the rust-clippy repo and in my case was giving unsolicited advice in a completely derailing direction (solving a specific case I obviously already worked around rather than the general case rust-lang/rust-clippy#16901 (comment))

Reply (Member):

@xtqqczze both rust-lang/rust-clippy#16893 and rust-lang/rust-clippy#16901 are issues not PRs, and that @QEEK-AI account commented spontaneously without any summoning. So I don't think these instances fall under this "Review Bot" rule (which is still "⚠️ Allowed with caveats"). At the very least these are "Comments […] authored by an LLM" which is "❌ Banned", and they are also outright "spam" that the current CoC can already handle.

- ℹ️ Review bots that post without being approved by a maintainer will be banned.
- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM.
You **must not** post (or allow a tool to post) LLM reviews verbatim on your personal account unless clearly quoted with your own personal interpretation of the bot's analysis.
- ℹ️ Review bot accounts must be blockable by individual users via the standard GitHub user-blocking mechanism. (Note that some GitHub "app" accounts post comments that look like users but cannot be blocked.)
- ℹ️ If a more reliable tool, such as a linter or formatter, already exists for the language you're writing, we strongly suggest using that tool instead of or in addition to the LLM.
- ℹ️ Configure LLM review tools to reduce false positives and excessive focus on trivialities, as these are common, exhausting failure modes.
- ℹ️ LLM comments **must not** be blocking; reviewers must indicate which comments they want addressed.
- In other words, reviewers must explicitly endorse an LLM comment before blocking a PR. They are responsible for their own analysis of the LLM's comment and cannot treat it as a CI failure.
- ℹ️ This does not apply to private use of an LLM for reviews; see ✅ above.

All uses under "⚠️ Allowed with caveats" **must** disclose that an LLM was used.
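
The policy does not mandate a particular disclosure format. As a purely illustrative sketch (the wording, placement, and use case are hypothetical, not prescribed by this policy), a disclosure in a PR or issue description might read:

```markdown
<!-- hypothetical disclosure; this policy does not prescribe a format -->
Disclosure: an LLM was used to discover this bug. I reproduced and verified
the failure manually; the analysis and suggested fix below are my own.
```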

#### Experiment: LLM-created code changes
Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally created by an LLM are allowed, with disclosure.
1. "Solicited" means that a reviewer has communicated *ahead of time* that they are willing to review an LLM-created PR.
- ℹ️ New contributors cannot use an LLM unless they first talk with a reviewer.
This must be the *same* reviewer who will be assigned to the PR.
2. "Non-critical" means that it is extremely unlikely for the PR to cause a [soundness](https://jacko.io/safety_and_soundness.html) regression.
- ℹ️ Examples:
- Changes to internal tooling like `tidy`, `x setup`, and `linkchecker` are probably ok.
- Changes that have a strong soundness impact, like the trait system, MIR building, or the query system are probably not ok.
3. "High-quality" means that it is held to at least the same standard as other code changes.
Everyone reads code, not just the author and reviewer;
we are not interested in "vibe-coded" PRs that degrade the quality of the codebase.
4. "Well-tested" means that you have covered all edge-cases that either you or the reviewer can think of.
- ℹ️ LLM-created PRs will be held to a higher standard than human-created PRs, because LLMs make it easier to write tests.
- ℹ️ If there is no existing test suite for a section of code, you must either write a new test suite or close the PR.
There are no exceptions for "writing the tests seems hard".
5. "Well-reviewed" means the author and reviewer both commit to fully understanding the code.
- ℹ️ All review requirements in [our existing review policy](../compiler/reviews.md#basic-reviewing-requirements) still apply.
- ℹ️ A review from a project member does not substitute for self-review.
Authors are expected to review their own code before posting and after each change.
- ℹ️ We recommend, but do not require, using a second LLM for adversarial local review before publishing your changes.

LLM-created PRs must be tagged with a new `ai-assisted` label.
All such PRs will be posted to a new (private) Zulip channel, which will be accessible to all members of the `rust-lang` organization.
The goal of the channel is *not* to act as an additional gate-keeper on LLM-created PRs.
Instead, it's to collect information about *whether this experiment is working*:
Are people doing interesting and useful things with LLMs? Are they learning? Are they making repeat contributions?
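
The mechanics of applying the label are not specified here. If it is wired up like other labels on `rust-lang/rust`, the configuration might look roughly like the following hypothetical `triagebot.toml` fragment (the actual entry, if any, would be decided by the infra team):

```toml
# Hypothetical triagebot.toml fragment (not part of this policy):
# allow anyone to apply or remove the new label
# via `@rustbot label ai-assisted`.
[relabel]
allow-unauthenticated = [
    "ai-assisted",
]
```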

Because the new channel is private, it will have higher-than-normal standards for what counts as on-topic.
For example, the following are on-topic:
- Whether a PR meets the criteria for the experiment exception
- Whether a PR follows the policy in general

And the following are off-topic:
- Technical and design discussions. These should be posted directly on the PR or in a public Zulip channel.
- Discussions about effort, communication style, or intent
- General discussions about the LLM policy
## Appendix
### Scope

This policy only applies to `rust-lang/rust`, and only to the teams that have ratified it: compiler, libs, types, rustdoc, bootstrap, and their subteams.
The following are not in scope and are free to set their own policies:
- Other repositories in `rust-lang`
- Submodules, subtrees, and crates.io dependencies
- Teams that have not ratified the policy, such as lang and edition

For example, the following do not fall under the policy:
- Tracking issues for T-lang
- T-lang proposals
- Stabilization reports
- Language documentation
- The style guide
- Names of compiler lints. This only applies to the names themselves; the diagnostic messages are still covered under this policy.
- Direct quotes from any of the above in documentation or diagnostics.

### Motivation and guiding principles

There is not a consensus within the Rust project—and likely never will be—about when/how/where it is acceptable to use AI-based tools.
Many members of the Rust project and community find value in AI;
many others feel that its negative impact on society and the climate is severe enough that no use is acceptable.
Still others are working out their opinion.

Despite these differences, there are many common goals we all share:

- Building a community of deep experts in our collective projects.
- Building an inclusive community where all feel welcome and respected.

To achieve those goals, this policy is designed with the following points in mind:

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- LLMs are a new technology, and we are still learning how to use, moderate, and improve them.
Since we're still learning, we have chosen an intentionally conservative policy that lets us maintain the standard of quality that Rust is known for,
while leaving space open to experiment with LLMs to inform future policies.


### Moderation policy
#### It's not your job to play detective
["The optimal amount of fraud is not zero"](https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/).
Don't try to be the police for whether someone has used an LLM.
If it's clear they've broken the rules, point them to this policy;
if it's borderline, [report it to the mods](https://rust-lang.org/policies/code-of-conduct/) and move on.
You are not required to "actively look" for whether an LLM was involved.

Reporting to moderation is not intended to be a penalty.
The mod team is interested in seeing non-violations as well as violations.
As always, the mod team is free to exercise their own judgement and discretion.

#### Be honest
Conversely, lying about whether you've used an LLM, or attempting to hide the extent of the use, is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
If you are not sure where something you would like to do falls in this policy, please talk to the [moderation team](mailto:rust-mods@rust-lang.org).
Don't try to hide it.

#### Penalties
The policies marked with a 🔨 follow the same guidelines as the code of conduct:
Violations will first result in a warning, and repeated violations may result in a ban.
- 🔨 Violations of the "Be honest" section

Other violations are left up to the discretion of reviewers and moderators.
For minor violations we recommend telling the author that we can't review the PR until it complies with the policy, with pointers to exactly what they need to do.
For major violations or extractive PRs, we recommend closing the PR or issue.

It is **not** ok to harass a contributor for using an LLM.
All contributors must be treated with respect.
The code of conduct applies to *all* conversations in the Rust project.

### Responsibility

Your contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally created by an LLM. See "review bots" under ⚠️ above.

### The meaning of "originally created"

This document uses the phrase "originally created" to mean "text that was generated by an LLM (and then possibly edited by a human)".
No amount of editing can change how it was originally created; generation sets the initial style, and that style is very hard to change once set.
Comment on lines +198 to +201

@traviscross (Contributor, May 16, 2026):

I notice that the policy does not specifically mention LLM-backed autocompletion anywhere. I suspect this may lead to confusion later. Editors are now willing to fill in entire functions and files (and safety comments, other documentation, etc.), but people seem to mentally bucket this in a different category.

Given the intent of the policy, I'd expect it might want to say something along the lines of:

> This includes use of editor autocompletion when that feature is implemented using an LLM (check the documentation). Once you accept a nontrivial autocompletion, the work is "originally created" by an LLM and no amount of editing (other than deleting it entirely) can remove that.

I'm not sure where you'd want to put this.



For more background about analogous reasoning, see ["What Colour are your bits?"](https://ansuz.sooke.bc.ca/entry/23)

### Non-exhaustive policy
jyn514 marked this conversation as resolved.

This policy does not aim to be exhaustive.
If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
- Using an LLM for your own personal use is likely allowed ✅
- Showing LLM output to another human without solicitation is likely banned ❌
- Making a decision based on LLM output requires disclosure ⚠️

### Conditions for modification or dissolution
This policy is not set in stone, and we can evolve it as we gain more experience working with LLMs.

Minor changes, such as typo fixes, only require a normal PR approval.
Major changes, such as adding a new rule or cancelling an existing rule, require:
- A simple majority of members of teams using rust-lang/rust.
- No outstanding concerns from those members.
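
The voting condition for major changes can be made concrete with a small sketch (illustrative only; the function name and member counts are hypothetical, and the real process is carried out by people, not code):

```python
# Illustrative sketch of the "major change" condition above:
# a simple majority of members, and no outstanding concerns.
# All numbers in the examples are hypothetical.

def major_change_passes(in_favor: int, total_members: int,
                        outstanding_concerns: int) -> bool:
    """True if strictly more than half the members approve and
    no concerns remain outstanding."""
    return in_favor * 2 > total_members and outstanding_concerns == 0

assert major_change_passes(40, 70, 0)        # clear majority, no concerns
assert not major_change_passes(35, 70, 0)    # exactly half is not a majority
assert not major_change_passes(60, 70, 1)    # an outstanding concern blocks
```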

This policy can be dissolved in a few ways:

- An accepted FCP by teams using rust-lang/rust.
Comment on lines +218 to +223

@traviscross (Contributor, May 16, 2026):

Suggested change:

```diff
-- A simple majority of members of teams using rust-lang/rust.
-- No outstanding concerns from those members.
-This policy can be dissolved in a few ways:
-- An accepted FCP by teams using rust-lang/rust.
+- A simple majority of members of teams using rust-lang/rust and that have ratified the policy.
+- No outstanding concerns from those members.
+This policy can be dissolved in a few ways:
+- An accepted FCP by teams using rust-lang/rust and that have ratified the policy.
```

Something like this might help in being more clear about which teams need to be involved in these later actions.


- An objective concern raised about active harm the policy is having on the reputation of Rust, with evidence, as decided by a leadership council FCP.
jyn514 marked this conversation as resolved.