We have clearly become much more sensitive in our perceptions in recent years. Humor always leaves room for different assessments, without any one of them being the right one; real humor allows for opposing or differing views. Some, it seems, can no longer stand the fun. "Those who take themselves too seriously need not be taken seriously," as the Dutch singer Bruce Low aptly put it.
Such AI errors are normal. ChatGPT makes them all the time, even in the Bing context, and often invents facts and sources. This effect is called "AI hallucination". The tools have no factual knowledge; instead, they calculate the probability of one word following another, based on immense amounts of data. That essentially makes them highly bred parrots: they repeat what is on the web. How true the output is therefore depends on the data set, not on verified facts. Now that we all have access, this is becoming a problem.
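To make the "parrot" point concrete, here is a minimal sketch in plain Python. It is not how ChatGPT actually works internally; it is a toy bigram model on an invented corpus that does nothing but count which words follow which, then turn the counts into probabilities:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "immense amounts of data" a real model sees.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model:
# a drastically simplified stand-in for a large language model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word following `word`, estimated from the corpus."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Nothing in this sketch checks whether "the cat ate the fish" is true; the model only knows what the corpus happens to say, which is exactly why the truth content depends on the data set.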
Firstly, as the Microsoft blog mentions, ten billion searches are made worldwide every day. If even a small percentage of answers contains errors, that is already an enormous number: one tenth of a percent of ten billion is ten million wrong answers per day. Moreover, AI not only produces trivial errors, it also reproduces societal ones. We literally have ourselves to blame for this: the data used to train the tools is man-made, so AI has to work with a wild mixture of good and bad. Alongside some positive aspects, the result includes sexism, racism, and discrimination against socially disadvantaged people. If this is perpetuated automatically, it cements norms that we have long wanted to overcome.
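A small continuation of the bigram sketch above shows the mechanism. The sentences and the gender skew below are invented for illustration (real biases in web text are subtler), but the point stands: a counting-based model mirrors whatever associations the data contains, stereotypes included.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training sentences: the imbalance here is
# invented for illustration, mimicking patterns documented in real web text.
sentences = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the doctor said he had left",
    "the nurse said she would call",
]

# Count which pronoun follows each profession ("doctor said" -> "he", etc.).
pronoun_after = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i in range(len(words) - 2):
        if words[i] in ("doctor", "nurse") and words[i + 1] == "said":
            pronoun_after[words[i]][words[i + 2]] += 1

for role, counts in pronoun_after.items():
    total = sum(counts.values())
    print(role, {p: c / total for p, c in counts.items()})
# doctor {'he': 1.0}
# nurse {'she': 1.0}
```

Nothing in the counting code knows or cares that the association encodes a stereotype; it simply repeats the skew in the training data, which is how man-made bias gets cemented automatically.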