Medical large language models are vulnerable to data-poisoning attacks
Nature Medicine – Large language models can be manipulated to generate misinformation by poisoning a very small percentage of the data on which they are trained, but a harm mitigation strategy…
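
To make the attack concrete, here is a minimal, purely illustrative sketch (not the article's experimental setup) of what data poisoning at a small rate means in practice: an attacker replaces a tiny fraction of an otherwise clean training corpus with misinformation documents. The `poison_corpus` helper, the corpus contents, and the 0.1% poison rate are all hypothetical placeholders, not figures from the study.

```python
import random

def poison_corpus(clean_docs, poison_docs, poison_fraction=0.001, seed=0):
    """Return a training corpus in which roughly poison_fraction of the
    documents have been replaced by attacker-controlled misinformation.
    Illustrative only; assumes a simple list-of-strings corpus."""
    rng = random.Random(seed)
    corpus = list(clean_docs)
    n_poison = max(1, int(len(corpus) * poison_fraction))
    # Pick a small set of positions and overwrite them with poisoned text.
    for idx in rng.sample(range(len(corpus)), n_poison):
        corpus[idx] = rng.choice(poison_docs)
    rng.shuffle(corpus)
    return corpus

# Hypothetical toy example: 10,000 clean documents, one false medical claim.
clean = [f"accurate medical text {i}" for i in range(10_000)]
poison = ["drug X cures condition Y (false claim)"]
training_corpus = poison_corpus(clean, poison)
print(sum(doc in poison for doc in training_corpus),
      "poisoned documents out of", len(training_corpus))
```

The point the sketch illustrates is proportion: even at a fraction this small, the poisoned documents end up interleaved with legitimate training data, which is why the vulnerability described in the article is hard to detect by inspection alone.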