Microsoft launched Azure AI Content Safety, a new AI service that allegedly fosters safer online spaces. It will seek out “inappropriate text and images” across the Internet.
Do you want Microsoft to determine what you are allowed to say? They will. If this sounds like Maoist censorship, that’s because it is: a burgeoning CCP-style social credit system.
Just a few months ago, Microsoft laid off the ethics and society team within its larger AI organization. The move left the company without a dedicated team to ensure its AI principles are closely tied to product design.
The service detects allegedly hateful, violent, sexual, and self-harm content in images and text. It assigns severity scores to flagged material so businesses can limit and prioritize what their content moderators review.
Microsoft claims it handles nuance and context.
We can imagine. Oh, by the way, you don’t have free speech.
The announcement included the following:
Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google’s Counter Abuse Technology Team and Jigsaw. It succeeds Microsoft’s own Content Moderator tool.
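To make the severity-score mechanism concrete, here is a minimal sketch of what a business’s call to the service might look like. It assumes the documented REST endpoint path (/contentsafety/text:analyze), the 2023-10-01 api-version, and a categoriesAnalysis response field; exact names may differ across preview versions of the API.

```python
# A minimal sketch of calling Azure AI Content Safety's text-analysis
# REST endpoint. The api-version and response field names below are
# assumptions based on the service's documented GA REST API.
import os
import requests

# e.g. https://<resource>.cognitiveservices.azure.com
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
KEY = os.environ["CONTENT_SAFETY_KEY"]

def analyze_text(text: str) -> dict:
    """Score a string for hate, sexual, self-harm, and violence
    content, returning a per-category severity from the service."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # assumed GA version
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = analyze_text("Some user-submitted text to moderate.")
    # Each entry pairs a category with a severity score; a business
    # compares these against its own thresholds to decide what to
    # block outright and what to route to human moderators.
    for item in result.get("categoriesAnalysis", []):
        print(item["category"], item["severity"])
```

In other words, the thresholds that decide what gets suppressed are set by whoever deploys the service, with Microsoft’s model supplying the scores.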