Reinforcement Learning from Human Feedback (RLHF)
A method for training AI models that uses human feedback on their outputs, typically judgments of which responses are better, to steer how the model behaves in the future.
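The sketch below illustrates the two core stages in miniature, under simplifying assumptions: responses are stand-in numeric feature vectors rather than real model outputs, and all names (reward_w, candidates, and so on) are illustrative, not from any particular library. Stage 1 fits a reward model to human preference pairs with a Bradley-Terry style logistic loss; stage 2 nudges a toy policy toward higher-reward responses with a REINFORCE update.

```python
# Minimal two-stage RLHF sketch, assuming toy numeric "responses"
# (feature vectors) in place of real model outputs. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# --- Stage 1: fit a reward model from human preference pairs. ---
# Each pair holds (features of preferred response, features of rejected one);
# here a hidden "true preference" vector stands in for the human labeler.
true_w = rng.normal(size=dim)
pairs = []
for _ in range(200):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if true_w @ a > true_w @ b else (b, a))

reward_w = np.zeros(dim)
lr = 0.1
for _ in range(100):
    for chosen, rejected in pairs:
        # Gradient ascent on the Bradley-Terry log-likelihood:
        # maximize log sigmoid(r(chosen) - r(rejected)).
        margin = reward_w @ (chosen - rejected)
        grad = (1 - 1 / (1 + np.exp(-margin))) * (chosen - rejected)
        reward_w += lr * grad

# --- Stage 2: improve a policy against the learned reward. ---
# Toy policy: a softmax over a fixed set of candidate responses.
candidates = rng.normal(size=(8, dim))
logits = np.zeros(8)
for _ in range(500):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    i = rng.choice(8, p=probs)          # sample a response from the policy
    r = reward_w @ candidates[i]        # score it with the learned reward model
    # REINFORCE update: raise the probability of high-reward responses.
    grad = -probs
    grad[i] += 1.0
    logits += 0.05 * r * grad

best = candidates[np.argmax(logits)]
print("policy's favorite response scores", round(float(true_w @ best), 3))
```

In practice both stages operate on a large language model rather than feature vectors, and the policy update is usually a constrained method such as PPO with a penalty for drifting from the original model, but the feedback-to-reward-to-policy flow is the same.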