LONDON – Following revelations that a second racial slur was discreetly edited out of the BAFTA Film Awards broadcast, the British Broadcasting Corporation (BBC) today unveiled 'Slur-B-Gone 3000,' an advanced artificial intelligence system designed to identify and neutralize potentially offensive language in real time, or even before it is uttered.

The new AI, developed by the BBC's newly formed 'Department of Pre-Emptive Linguistic Sanitization,' will reportedly scan live feeds for a rapidly expanding lexicon of forbidden phrases, replacing them with 'culturally neutral tones' or, if necessary, brief moments of 'enlightened silence.' Chief Content Officer Kate Phillips, speaking from behind a newly installed soundproof screen, stated, 'We are committed to delivering content that is not just compliant, but proactively inoffensive. Sometimes, the most powerful statement is no statement at all.'

Critics, however, questioned the efficacy of such a system. Dr. Penelope Wiffle, Head of Obtuse Semiotics at the University of Greater Brentford-Upon-Thames, commented, 'While noble in intent, this technology risks creating a broadcast landscape so sterile that viewers will mistake it for a particularly dull episode of "Antiques Roadshow" in which nothing is found and everyone speaks in hushed tones about the weather.'

Sources within the BBC, speaking anonymously from a designated 'safe word' zone, indicated that early tests of Slur-B-Gone 3000 accidentally censored an entire acceptance speech after the AI mistook the word 'brilliant' for a derogatory term in an obscure dialect of Aramaic. The speaker's subsequent 'thank you' was reportedly replaced with the sound of a gentle harp.