|
530 | 530 | "href": "llm_use_guidelines.html", |
531 | 531 | "title": "Appendix H — LLM use guidelines for research trainees", |
532 | 532 | "section": "", |
533 | | - "text": "H.1 Core principle\nEnhancing learning without compromising training potential.\nA brief, meta compilation of LLM guidelines using an LLM!\nYour primary goal in training is developing independent scientific thinking, not maximizing efficiency. LLMs are tools that should amplify your capabilities, not replace the intellectual work that builds expertise.", |
| 533 | + "text": "H.1 Core principle\nEnhancing learning without compromising training potential.\nA brief, meta compilation of LLM guidelines using an LLM!\nYour primary goal in training is to develop independent scientific thinking, not to maximize efficiency. LLMs are tools that should amplify your capabilities, not replace the intellectual work that builds expertise.", |
534 | 534 | "crumbs": [ |
535 | 535 | "Appendices", |
536 | 536 | "<span class='chapter-number'>H</span> <span class='chapter-title'>LLM use guidelines for research trainees</span>" |
|
541 | 541 | "href": "llm_use_guidelines.html#strategic-uses-enhance-learning", |
542 | 542 | "title": "Appendix H — LLM use guidelines for research trainees", |
543 | 543 | "section": "H.2 ✓ Strategic uses (enhance learning)", |
544 | | - "text": "H.2 ✓ Strategic uses (enhance learning)\n\nH.2.1 Documentation & organization\n\nClean up (not draft) READMEs, code comments, inline documentation\nOrganize project directories and file structures\nCreate documentation templates for GitHub repos, datasets\nGenerate boilerplate code structure (after understanding fundamentals)\n\n\n\nH.2.2 Learning & skill development\n\nAfter independent attempts: Get explanations of complex concepts\nGenerate analogies to understand difficult topics\nBrainstorm approaches to problems (but verify with literature/experts)\nUse tools like Perplexity to generate reading lists (always check for predatory journals, hallucinated citations)\nAsk “what topics do I need to know to understand this paper/method?”\n\n\n\nH.2.3 Critique & gap analysis\n\nGet feedback on drafts you’ve already written (after ≥2 revision rounds w/ colleagues)\nIdentify logical gaps, clarity issues, or missing considerations\nCheck for completeness in project proposals or documentation\nRequest alternative perspectives on your interpretations\n\n\n\nH.2.4 Code assistance (after mastery)\n\nDebug assistance when you’ve already diagnosed the problem area\nSyntax help for languages you already understand\nCode refactoring suggestions (when you understand the tradeoffs)\nStandard visualization templates (after learning plotting fundamentals)\n\n\n\nH.2.5 Communication practice\n\nVoice mode for talk practice: Deliver presentations, get feedback on flow, narrative, pacing, clarity\nPractice for comprehensive exams or conference talks\nGet suggestions for improving scientific communication style\nPolish grammar and style (like an advanced Grammarly)\n\n\n\nH.2.6 Literature search\n\nGenerate lists of related papers to explore (verify all exist)\nFind connections between research areas\nIdentify key terminology and concepts in new fields", |
| 544 | + "text": "H.2 ✓ Strategic uses (enhance learning)\n\nH.2.1 Documentation & organization\n\nClean up (not draft) READMEs, code comments, and inline documentation\nOrganize project directories and file structures\nCreate documentation templates for GitHub repos, datasets\nGenerate boilerplate code structure (after understanding fundamentals)\n\n\n\nH.2.2 Learning & skill development\n\nAfter independent attempts: Get explanations of complex concepts\nGenerate analogies to understand difficult topics\nBrainstorm approaches to problems (but verify with literature/experts)\nUse tools like Perplexity to generate reading lists (always check for predatory journals, hallucinated citations)\nAsk “what topics do I need to know to understand this paper/method?”\n\n\n\nH.2.3 Critique & gap analysis\n\nGet feedback on drafts you’ve already written (after ≥2 revision rounds with colleagues)\nIdentify logical gaps, clarity issues, or missing considerations\nCheck for completeness in project proposals or documentation\nRequest alternative perspectives on your interpretations\n\n\n\nH.2.4 Code assistance (after mastery)\n\nDebug assistance when you’ve already diagnosed the problem area\nSyntax help for languages you already understand\nCode refactoring suggestions (when you understand the tradeoffs)\nStandard visualization templates (after learning plotting fundamentals)\n\n\n\nH.2.5 Communication practice\n\nVoice mode for talk practice: Deliver presentations, get feedback on flow, narrative, pacing, clarity\nPractice for comprehensive exams or conference talks\nGet suggestions for improving scientific communication style\nPolish grammar and style (like an advanced Grammarly)\n\n\n\nH.2.6 Literature search\n\nGenerate lists of related papers to explore (verify all exist)\nFind connections between research areas\nIdentify key terminology and concepts in new fields", |
545 | 545 | "crumbs": [ |
546 | 546 | "Appendices", |
547 | 547 | "<span class='chapter-number'>H</span> <span class='chapter-title'>LLM use guidelines for research trainees</span>" |
|
552 | 552 | "href": "llm_use_guidelines.html#avoid-compromises-training", |
553 | 553 | "title": "Appendix H — LLM use guidelines for research trainees", |
554 | 554 | "section": "H.3 ✗ Avoid (compromises training)", |
555 | | - "text": "H.3 ✗ Avoid (compromises training)\n\nH.3.1 Writing & thinking\n\n❌ Having LLMs write any first draft (manuscripts, proposals, abstracts, reports)\n❌ Generating content from bullet points without writing yourself\n❌ Summarizing your own results or data interpretations\n❌ Writing discussion/conclusion sections\n❌ Any writing task you’ve done <10-20 times independently\n\n\n\nH.3.2 Code & analysis\n\n❌ Generating analysis code for methods you don’t understand\n❌ Writing entire scripts/pipelines without knowing each component\n❌ Using AI for statistical approaches you can’t verify\n❌ Debugging without first attempting to understand the error yourself\n❌ Any coding task you’ve done <5-10 times independently\n\n\n\nH.3.3 Data & results\n\n❌ Uploading raw research data to public LLM systems\n❌ Having AI analyze, visualize, or interpret your experimental/computational data\n❌ Using AI for any task involving sensitive, unpublished, or controlled-access data\n\n\n\nH.3.4 Core scientific skills\n\n❌ Bypassing reading original papers (especially in first 1-3 years)\n❌ Using AI instead of asking labmates/mentors for help\n❌ Generating hypotheses or research questions\n❌ Tasks where discussion with colleagues provides more learning value", |
| 555 | + "text": "H.3 ✗ Avoid (compromises training)\n\nH.3.1 Writing & thinking\n\n❌ Having LLMs write any first draft (manuscripts, proposals, abstracts, reports)\n❌ Generating content from bullet points without writing yourself\n❌ Summarizing your own results or data interpretations\n❌ Writing discussion/conclusion sections\n❌ Any writing task you’ve done <10-20 times independently\n\n\n\nH.3.2 Code & analysis\n\n❌ Generating analysis code for methods you don’t understand\n❌ Writing entire scripts/pipelines without knowing each component\n❌ Using AI for statistical approaches you can’t verify\n❌ Debugging without first attempting to understand the error yourself\n❌ Any coding task you’ve done <5-10 times independently\n\n\n\nH.3.3 Data & results\n\n❌ Uploading raw research data to public LLM systems\n❌ Having AI analyze, visualize, or interpret your experimental/computational data\n❌ Using AI for any task involving sensitive, unpublished, or controlled-access data\n\n\n\nH.3.4 Core scientific skills\n\n❌ Bypassing reading original papers (especially in the first 1-3 years)\n❌ Using AI instead of asking labmates/mentors for help\n❌ Generating hypotheses or research questions\n❌ Tasks where discussion with colleagues provides more learning value", |
556 | 556 | "crumbs": [ |
557 | 557 | "Appendices", |
558 | 558 | "<span class='chapter-number'>H</span> <span class='chapter-title'>LLM use guidelines for research trainees</span>" |
|
563 | 563 | "href": "llm_use_guidelines.html#critical-requirements", |
564 | 564 | "title": "Appendix H — LLM use guidelines for research trainees", |
565 | 565 | "section": "H.4 Critical requirements", |
566 | | - "text": "H.4 Critical requirements\n\nH.4.1 1. Accountability & verification\n\nYou are fully responsible for ALL AI-generated content\nVerification requires expertise - if you can’t verify output correctness, don’t use AI for that task\nFor code: Understand every line, test thoroughly\nFor writing: Fact-check every claim, verify every citation exists\nFor analysis: Verify statistical approaches, check assumptions\n\n\n\nH.4.2 2. Documentation\nWhen you use AI, document:\n\nTool name and version\nDate of use\nPrompts used\nOutput generated\nHow you verified/modified it\nErrors found and corrected\n\n\n\nH.4.3 3. Communication\n\nInform your PI within 1 week of AI use for research tasks\nBe transparent with collaborators before/during work\nNever present AI outputs as your own understanding\n\n\n\nH.4.4 4. Protect sensitive information\n☠️ Never input into public AI systems:\n\nUnpublished results or data\nProprietary datasets or code\nNovel research ideas or hypotheses\nPatient data or controlled-access information\nGrant proposals or manuscript drafts in development", |
| 566 | + "text": "H.4 Critical requirements\n\nH.4.1 1. Accountability & verification\n\nYou are fully responsible for ALL AI-generated content\nVerification requires expertise - if you can’t verify output correctness, don’t use AI for that task\nFor code: Understand every line, test thoroughly\nFor writing: Fact-check every claim, verify every citation exists\nFor analysis: Verify statistical approaches, check assumptions\n\n\n\nH.4.2 2. Documentation\nWhen you use AI, document:\n\nTool name and version\nDate of use\nPrompts used\nOutput generated\nHow you verified/modified the LLM output\nErrors found and corrected\n\n\n\nH.4.3 3. Communication\n\nInform your PI within 1 week of AI use for research tasks\nBe transparent with collaborators before/during work\nNever present AI outputs as your own understanding\n\n\n\nH.4.4 4. Protect sensitive information\n☠️ Never input into public AI systems:\n\nUnpublished results or data\nProprietary datasets or code\nNovel research ideas or hypotheses\nPatient data or controlled-access information\nGrant proposals or manuscript drafts in development", |
567 | 567 | "crumbs": [ |
568 | 568 | "Appendices", |
569 | 569 | "<span class='chapter-number'>H</span> <span class='chapter-title'>LLM use guidelines for research trainees</span>" |
|
596 | 596 | "href": "llm_use_guidelines.html#remember", |
597 | 597 | "title": "Appendix H — LLM use guidelines for research trainees", |
598 | 598 | "section": "H.7 Remember", |
599 | | - "text": "H.7 Remember\n\nSpeed ≠ learning. Efficiency now can mean skill gaps later\nAI outputs look polished but may be wrong. Confidence ≠ correctness\nWhat distinguishes you: Deep thinking, original perspectives, robust foundational skills\nAI often introduces subtle errors that a human would not: these can lead to profound consequences\nHallucinations are real: Always verify citations, facts, and technical claims\nWhen in doubt, ask your PI first\nCheck LLM’s Settings → Privacy/Data controls` and turn off chat history, memory, personalization, and training‑related data usage for maximum privacy and no data retention.\n\n\n\nThe goal isn’t to avoid AI entirely — it’s to use it strategically so you emerge from training as an independent scientist with distinctive capabilities, not someone dependent on tools they can’t verify or correct. You will have ample opportunities as a PI or a scientist in an industry, to learn and use LLMs quickly and efficiently — there’s no need to rush into it now at the cost of your successful training.", |
| 599 | + "text": "H.7 Remember\n\nSpeed ≠ learning. Efficiency now can mean skill gaps later\nAI outputs look polished but may be wrong. Confidence ≠ correctness\nWhat distinguishes you: Deep thinking, original perspectives, robust foundational skills\nAI often introduces subtle errors that a human would not: these can lead to profound consequences\nHallucinations are real: Always verify citations, facts, and technical claims\nWhen in doubt, ask your PI first\nCheck the LLM’s Settings → Privacy/Data controls and turn off chat history, memory, personalization, and training‑related data usage for maximum privacy and minimal data retention.\n\n\nThe goal isn’t to avoid AI entirely — it’s to use it strategically so you emerge from training as an independent scientist with distinctive capabilities, not someone dependent on tools they can’t verify or correct. You will have ample opportunities as a PI or an industry scientist to learn and use LLMs quickly and efficiently — there’s no need to rush into it now at the cost of your successful training.", |
600 | 600 | "crumbs": [ |
601 | 601 | "Appendices", |
602 | 602 | "<span class='chapter-number'>H</span> <span class='chapter-title'>LLM use guidelines for research trainees</span>" |
|