The New York Times blocks OpenAI’s web crawler::The New York Times has officially blocked GPTBot, OpenAI’s web crawler. The outlet’s robots.txt file specifically disallows GPTBot, preventing OpenAI from scraping content from its website to train AI models.
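For anyone curious how this works mechanically: robots.txt is just a plain-text file of per-crawler rules, and well-behaved bots check it before fetching pages. A minimal sketch with Python’s standard `urllib.robotparser`, using a hypothetical two-line rule of the kind described in the article (the real file at nytimes.com/robots.txt contains many more directives):

```python
from urllib import robotparser

# Hypothetical excerpt mirroring the rule described in the article;
# the actual NYT robots.txt is longer and blocks other bots too.
rules = """\
User-agent: GPTBot
Disallow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# GPTBot is disallowed everywhere; agents with no matching entry default to allowed.
print(rp.can_fetch("GPTBot", "https://www.nytimes.com/section/technology"))
print(rp.can_fetch("SomeOtherBot", "https://www.nytimes.com/section/technology"))
```

Note this is purely advisory: nothing technically stops a crawler that ignores the file, which is why OpenAI publishing GPTBot’s user-agent string matters at all.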
This goes against everything that the NYT preaches in terms of saying that the press is under attack and needs to be protected. AI consumption of news content makes the news more accessible. Their paid articles don’t overlap with what ChatGPT is doing. This is really a bunch of old people getting butt hurt about tech they don’t fully understand.
While I am no fan of the NYT and other news sites’ pricing models, I don’t think that this goes against “protecting the press”. Journalists do a job. They research, compile, draft, and write articles in their own voice (or the voice of the news outlet). They are paid for this work. OpenAI wants to scrape the words off news sites so that its language model can regurgitate them for free.
This is the AI Art thing all over again. Creators should be paid for their work.
Maybe you are not thinking about the capabilities of AI fully. There are models that are enriched with recent data, so you can ask them about recent events. Also, I do ask it about historical information, so it is nice to have that available.
Please don’t tell me you get your news from LLMs.
If journalists and their platforms do not get paid, their articles won’t get written. So no, the free absorption of professional articles into an LLM that uses the article to answer a Pokémon question online in 6 months is not making “news” more “accessible”.
It’s more so an archive of historical knowledge. Thinking it just answers Pokémon questions is shortsighted.