To improve AI ethics, Microsoft restricts access to facial recognition capabilities
Microsoft has announced that it is updating its AI ethics guidelines and will no longer permit businesses to use its facial recognition technology to infer attributes such as a person’s gender or age.
As part of its new “responsible AI standard”, Microsoft says it wants to “put people and their goals at the center of system design decisions”. The company says some features will be updated and others withdrawn from sale, and that its high-level principles will translate into genuine changes in practice.
For instance, organizations such as Uber use Microsoft’s Azure Face service, a facial recognition offering, as part of their identity verification procedures. Now, any company that wants to use the service’s facial recognition features must actively apply for access and demonstrate how it will incorporate them into its products, that the features benefit users and society, and that they adhere to Microsoft’s AI ethics standards.
According to Microsoft, some of Azure Face’s most contentious capabilities will be withdrawn even from firms that have been granted access: the company is retiring the facial analysis technology that attempts to infer emotional states and to predict attributes such as gender or age.
Microsoft product manager Sarah Bird said, “We have worked with internal and external researchers to understand the limitations and possible benefits of this technology and navigate the tradeoffs. These efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, particularly in the case of emotion classification.”
Microsoft isn’t doing away with emotion recognition entirely; the company will continue to use it in its own accessibility products, such as Seeing AI, which audibly describes the world for people with visual impairments.
Similar restrictions apply to the company’s Custom Neural Voice technology, which enables the production of synthetic voices that closely mimic their original sources. According to Natasha Crampton, the company’s chief responsible AI officer, “it’s simple to conceive how this may be used to wrongly imitate speakers and trick listeners.”
Earlier this year, Microsoft began watermarking its synthetic voices, introducing slight, inaudible variations into the output so the company can tell when a recording was generated with its technology. As neural text-to-speech improves and synthetic speech becomes harder to distinguish from human voices, the risk of harmful deepfakes grows.
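Microsoft has not published the details of its watermarking scheme. As a purely illustrative sketch of the general idea, the toy example below embeds a low-amplitude pseudo-random sequence (keyed by a secret seed) into an audio signal and later detects it by correlation; the function names, parameters, and signal values are all hypothetical, not Microsoft’s implementation.

```python
import numpy as np

def embed_watermark(audio, key, strength=0.002):
    """Add a low-amplitude +/-1 pseudo-random sequence derived from `key`.

    At this strength the perturbation is far below audible levels for
    typical speech amplitudes (a toy stand-in for a real watermark).
    """
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * mark

def detect_watermark(audio, key, threshold=0.001):
    """Correlate the signal with the keyed sequence.

    If the watermark is present, the correlation concentrates near
    `strength`; otherwise it hovers near zero.
    """
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.mean(audio * mark))
    return score > threshold

# Demo on a synthetic stand-in for ten seconds of 16 kHz audio.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, size=160_000)
marked = embed_watermark(signal, key=42)

print(detect_watermark(marked, key=42))  # watermarked, correct key -> True
print(detect_watermark(signal, key=42))  # clean signal -> False
```

Only a holder of the key can run the detector, which matches the article’s point: Microsoft can recognize its own output without listeners being able to hear any difference.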