Twitter looks at the risks of machine learning algorithms
Twitter has announced that it is investigating whether its machine learning algorithms cause unintended harm to users.
The move comes as social media companies face increasing scrutiny over their role in spreading conspiracy theories and facilitating harassment.
In the coming months, the Responsible Machine Learning program will publish reports analyzing potential racial and gender bias in the platform's image-cropping algorithm.
It will also publish a fairness assessment of its timeline recommendations across racial subgroups, along with an analysis of content recommendations for different political ideologies in seven countries.
After Twitter's image-cropping algorithm was criticized last year for favoring white faces over Black faces, the company claimed that its tests had shown no racial or gender bias.
However, it later announced that it would give users more control over how their images are cropped, acknowledging that automatic cropping carries the potential for harm.
Twitter said the reports' findings could help drive platform-level changes, create new guidelines for designing specific products, and raise awareness of ethical machine learning.
It added that machine learning affects hundreds of millions of tweets every day, and that its systems can sometimes behave differently than intended. These subtle shifts affect users, and the company said it wants to make sure it studies those changes and uses them to build better products.
Twitter's initiative comes amid growing calls to hold social media companies accountable for algorithms that amplify polarization, disinformation, and extremism online.
In particular, Twitter has been criticized for not doing enough to combat harassment.
CEO Jack Dorsey has previously said he hopes for a future in which users can choose which algorithm curates their feed through an interface similar to an app store, rather than relying on Twitter's single algorithm.
Twitter's admission that its algorithms could harm users stands in stark contrast to Facebook's defense of its own.
Facebook said last month that it has no commercial interest in promoting extremist content, and that the platform is not the sole cause of political polarization, since users' personal choices also influence what appears in their news feeds.