A bunch of scientific papers are probably better data than a bunch of Reddit posts and it’s still not good enough.
Consider the task we’re asking the AI to do. If you want a human to correctly answer questions across a wide array of scientific fields, you can’t just hand them all the science papers and expect them to understand them. Even if we restrict it to a single narrow field of research, we expect that person to have an insane level of education. We’re talking 12 years of primary education, 4 years as an undergraduate, and 4 more years doing their PhD, and that’s at the low end. During all that time the human is constantly ingesting data through their senses and getting constant training in the form of feedback.
All the scientific papers in the world don’t even come close to an education like that, when it comes to data quality.
It certainly seems like it should be easy to do. Try an example: how would you go about defining safe vs. unsafe outcomes for knife handling? Since we can’t guess what the user will ask about ahead of time, the definition needs to apply in every situation that involves knives: eating, cooking, wood carving, box cutting, self defense, surgery, juggling, and any number of activities I haven’t thought of yet.
Since we don’t know who will ask, we also need to be correct for every type of user. The instructions should be safe for toddlers, adults, the elderly, knife experts, and people who have never held a knife before. We also need to consider every type of knife: folding knives, serrated knives, sharp knives, dull knives, long, short, etc.
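To make the combinatorial problem concrete, here’s a toy sketch (the category lists are just the examples above; a real taxonomy would be far larger) that counts how many distinct cases a rule-writer would need to get right:

```python
from itertools import product

# Hypothetical category lists, taken straight from the examples above.
activities = ["eating", "cooking", "wood carving", "box cutting",
              "self defense", "surgery", "juggling"]
users = ["toddler", "adult", "elderly person", "knife expert", "first-timer"]
knives = ["folding", "serrated", "sharp", "dull", "long", "short"]

# Every (activity, user, knife) triple potentially needs its own ruling.
cases = list(product(activities, users, knives))
print(len(cases))  # 7 * 5 * 6 = 210 combinations, before edge cases
```

Even with these stubby lists that’s 210 separate judgments, and each new category multiplies the total rather than adding to it.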
When we try those sorts of safety rules with humans (e.g., many venues have a sign instructing people to “be kind” or “don’t be stupid”), they mostly work until we inevitably run into the people who argue about what that means.