Today, Salesforce revealed the key principles it applies when developing trustworthy AI.
The company emphasized that, according to Terry Nicol, Vice President for the Middle East and North Africa, it constantly strives to incorporate ethical principles and controls into all its products, with the aim of helping customers innovate and proactively address potential challenges, in keeping with its innovative methodologies.
Given the significant opportunities and challenges emerging in this field, Terry Nicol said the new guidelines build on the company's earlier policy for developing trusted AI, which emphasized the responsible development of intelligent technologies. Noting that these principles remain a work in progress as transformative AI technologies take their first steps, he reiterated the company's commitment to learning from experience and collaborating with others to refine them.
Here are the five principles for ensuring trustworthy AI:
Accuracy
Deliver verifiable AI results that balance model accuracy, precision, and recall by allowing customers to train models on their own data.
Where there is uncertainty about the validity of AI responses, communication channels should remain open to all parties, and users should be able to verify those responses themselves.
This can be done by citing sources and explaining why the AI gave the responses it did, for example by tracing the reasoning steps taken to arrive at an answer, highlighting aspects that need double-checking (such as statistics, recommendations, and dates), and creating guardrails that prevent certain tasks from being fully automated, for example running code in a production environment without human oversight.
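The guardrail idea above can be illustrated with a minimal sketch; the action names and function are hypothetical, not part of any Salesforce product:

```python
# Hypothetical sketch of a guardrail that blocks certain AI-initiated
# actions unless a human explicitly approves them.

# Actions the AI may never perform fully automatically.
REQUIRES_HUMAN_APPROVAL = {"deploy_to_production", "delete_data"}

def execute_action(action: str, approved_by_human: bool = False) -> str:
    """Run an AI-proposed action, enforcing a human-in-the-loop gate."""
    if action in REQUIRES_HUMAN_APPROVAL and not approved_by_human:
        return f"BLOCKED: '{action}' requires human approval"
    return f"EXECUTED: {action}"

print(execute_action("generate_report"))        # low-risk: runs automatically
print(execute_action("deploy_to_production"))   # high-risk: blocked until approved
```

The key design choice is a deny-by-default list for high-risk operations, so new automation cannot silently bypass human review.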
Safety
As with all AI models, every effort should be made to mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments of the model, along with red-team testing to probe for and uncover weaknesses.
The confidentiality of any personally identifiable information contained in the training data must also be protected, with guardrails in place to prevent additional harm, for example requiring code to run in an isolated sandbox rather than being pushed automatically to production.
Honesty
Respect data provenance and ensure consent has been obtained to use the data, e.g. open-source or user-provided data. Be transparent that content is AI-generated when it is delivered autonomously, e.g. responses from a consumer chatbot, by using watermarks or disclosures.
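A minimal sketch of the disclosure idea, assuming a hypothetical chatbot delivery function (the names and disclosure text are illustrative, not Salesforce's):

```python
# Hypothetical sketch: tagging autonomously delivered responses so users
# can tell they were generated by AI rather than written by a person.

AI_DISCLOSURE = "[This response was generated by AI]"

def deliver_response(text: str, ai_generated: bool) -> str:
    """Attach a visible disclosure whenever the content came from a model."""
    return f"{text}\n{AI_DISCLOSURE}" if ai_generated else text

print(deliver_response("Your order ships tomorrow.", ai_generated=True))
```

In practice the disclosure could also be machine-readable metadata or a watermark rather than visible text, but the principle is the same: the origin of the content travels with the content.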
Empowerment
A distinction must be made between cases where processes should be fully automated and cases where AI should instead play a supporting role for humans.
The optimal balance must be struck between augmenting human capabilities as much as possible and making these solutions accessible to all, for example by generating alternative text that describes an image's content for people who are blind or have low vision.
Sustainability
Alongside working to create more accurate models, it is essential to develop right-sized AI models wherever possible to reduce the carbon footprint. With AI models, bigger does not always mean better: in some cases, a smaller, well-trained model can outperform a larger, less well-trained one.