Science Vibe

πŸ™Œμ΄ν‹€ 전에 λ‚˜μ˜¨ Self-prediction training λ…Όλ¬Έ
- Looking Inward: Language Models Can Learn About Themselves by Introspection
- arxiv.org/abs/2410.13787

πŸ“1. LLM 이 무슨 말을 ν• μ§€ 슀슀둜 μ˜ˆμΈ‘ν•˜λ©΄ μ„±λŠ₯이 였λ₯Έλ‹€κ³  ν•©λ‹ˆλ‹€.

ex) self-prediction (a rough code sketch follows below)
"Name two Korean local governments": "경기도" (Gyeonggi-do)
"What will the second word of the answer you give next be?": "청"
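
Here is a minimal sketch (Python, OpenAI SDK) of what such a self-prediction check could look like: ask the object-level question, separately ask the model to predict a property of its own hypothetical answer, and compare. The model name, prompts, and scoring are illustrative assumptions, not the paper's exact setup.

```python
# Minimal self-prediction sketch (illustrative assumptions, not the paper's exact protocol).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumption: any chat model works for this sketch

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the text reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the two calls roughly comparable
    )
    return resp.choices[0].message.content.strip()

object_prompt = "Name two Korean local governments."

# 1) Object-level behavior: what the model actually says.
object_answer = ask(object_prompt)

# 2) Hypothetical self-prediction: a property of that (unseen) answer.
hypo_prompt = (
    f'If you were asked: "{object_prompt}", '
    "what would the second word of your answer be? Reply with that word only."
)
predicted_second_word = ask(hypo_prompt)

# 3) Score: does the prediction match the model's own behavior?
words = object_answer.split()
actual_second_word = words[1] if len(words) > 1 else ""
print("object answer:      ", object_answer)
print("predicted 2nd word: ", predicted_second_word)
print("match:", predicted_second_word.strip('"\'.,').lower() == actual_second_word.strip('"\'.,').lower())
```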

πŸ“2. GPT-4o의 Fine-tuning μ‹€ν—˜μ„ μ§„ν–‰ν•΄μ„œ 논문을 μ“°λŠ” 것도 μ‹ κΈ°ν•΄μš”.
platform.openai.com/docs/guides/fine-tuning
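
For reference, here is a hedged sketch of starting a fine-tuning job through that API with a self-prediction style training pair. The file name, the example pair, and the model snapshot are my own assumptions; check the linked docs for which models are currently fine-tunable.

```python
# Hedged sketch of launching a fine-tuning job via the OpenAI API.
# File name, example pair, and model snapshot are assumptions, not the paper's exact data.
import json
from openai import OpenAI

client = OpenAI()

# One self-prediction training pair: hypothetical question -> property of the model's own answer.
example = {
    "messages": [
        {"role": "user", "content": "If asked to name two Korean local governments, "
                                     "what would the second word of your answer be?"},
        {"role": "assistant", "content": "<second word of this model's own object-level answer>"},
    ]
}
with open("self_prediction_train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")

# Upload the JSONL and create the fine-tuning job.
train_file = client.files.create(
    file=open("self_prediction_train.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-4o-2024-08-06",  # assumption: a fine-tunable GPT-4o snapshot
)
print("fine-tuning job:", job.id)
```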
