Microsoft Bans U.S. Police Departments from Using Enterprise AI Tool
Microsoft has prohibited police departments in the United States from using generative AI services through its Azure OpenAI Service.
The change, introduced in updated terms of service on Wednesday, aims to address the growing ethical concerns surrounding AI in law enforcement.
The revised terms expressly provide that such integrations cannot be used “by or for” police agencies in the United States.
The restriction covers text- and speech-analysis models as well, underlining Microsoft’s emphasis on responsible AI use.
In addition, a separate clause specifically forbids the use of real-time facial recognition on mobile cameras, including body cameras and dashcams, in uncontrolled environments.
The changes respond to concerns raised by critics that generative AI can fabricate information and can reflect racial biases present in its training data.
This measured approach aligns with Microsoft’s broader AI strategy for law enforcement and defense.
Recent partnerships indicate a change of stance for both OpenAI and Microsoft, which are exploring AI applications in military technology.
Microsoft’s move to bar U.S. police departments from using Azure OpenAI Service in these scenarios reflects the company’s deliberate, targeted approach to the ethical concerns of AI deployment.