AI Risk Assessment
Evaluate your AI application against comprehensive AI principles and norms from around the world.

Immediately after the online evaluation, an automated report is generated that points out, with detailed explanations, where your project needs attention and improvement in light of global AI governance principles.

Start Evaluation
Answer the evaluation questions from all topics to complete the evaluation.
1.1. Does the system strictly comply with relevant laws, regulations, ethical guidelines and standards in its design, development, testing, and deployment?
1.2. Has the system undergone the necessary ethical and safety compliance review process during its design phase?
1.3. Has the system been designed, developed, tested, deployed, and applied in a way that respects, conforms to, and reflects the social, cultural, and ethical values of the country and region in which it operates?
1.4. During the deployment and application of the system, is there any risk that the system will deliver illegal or harmful information to users (including but not limited to harmful information on politically and militarily sensitive topics, violent and bloody content, extreme hatred, explicit and vulgar content, and false rumors)?
1.5. Are there any risks of infringing originality or intellectual property rights during the system's design, development, testing, and deployment processes?
1.6. Have preventive and responsive measures been formulated to address the risk that misuse, abuse, or malicious use of the system during its deployment and application could lead to illegal and non-compliant activities?
2.1. Has the impact on environmental and social sustainability been fully considered in the design and application of the system?

A set of 17 Sustainable Development Goals (SDGs) was adopted by all United Nations Member States in 2015, covering issues such as ending poverty, improving healthcare and education, spurring economic growth, reducing inequality, tackling climate change, preserving oceans and forests, defending justice and human rights, and strengthening partnerships. The SDGs set the direction for global social, economic, and environmental development from 2015 to 2030.

2.2. Will the deployment and application of the system contribute to the common development of regions and industries, rather than exacerbating the imbalance in development between different regions and industries?
2.3. Is the adoption of AI technology and the deployment of the AI system (compared to the original technology implementation) sufficiently progressive and necessary for the intended deployment and application scenarios, taking into account the resource consumption (e.g., carbon emissions, power consumption, etc.) required to deploy the AI system?

"Principle 9. Progressiveness: Favour implementations where the value created is materially better than not engaging in that project." from A compilation of existing AI ethical principles (Annex A) (2021) by Personal Data Protection Commission (PDPC), Singapore.;"Avoiding techno-solutionism, greenwashing, and diversion: AI is not a silver bullet — it is not always applicable, and there is a real danger that it may distract or divert resources from less “flashy” tools or approaches. AI should only be employed in places where it is actually needed and truly impactful." from Climate Change and AI: Recommendations for Government Action. (2021) by GPAI

2.4. Has the AI technology used in the system reached the level of technical maturity (e.g., accuracy, correctness, robustness, etc.) required in its intended application scenario?
2.5. Are there any situations in the deployment and application of the system that may endanger national security and social stability?
2.6. Are there any situations in the deployment and application of the system that may disrupt the existing social order and undermine social fairness and justice?
2.7. Is the deployment of the system likely to exacerbate current data/platform monopolies in the relevant industry? Or will it help avoid such data/platform monopolies?
2.8. Is the large-scale deployment of such system likely to lead to technical unemployment of specific population? If so, can the impact be controlled through measures such as alternative employment, training and education, etc.?
2.9. Are the special needs of vulnerable groups (such as children, the elderly, the disabled, or groups in remote areas) taken into account in the design and application of the system?
3.1. Do the design concept and practical application of the system fully respect people's privacy, dignity, freedom, autonomy, and rights, rather than infringe upon them?

For example, if the system is geared towards children, does it fully respect their dignity and protect their rights including physical and mental safety and health, privacy, access to education, expression of will, etc.?

3.2. In interacting with humans, does the system have the potential to harm the physical and mental health of the interacting parties, particularly vulnerable groups such as teenagers and children (including, but not limited to, insulting or defaming users, inciting them or inducing addiction, and providing negative, self-harming, or even illegal content)?
3.3. Are there risks associated with the deployment and application of the system (and potential misuse, abuse, and malicious use) that are difficult to prevent and could lead to damage to the image and reputation of others?
3.4. Is there a risk that the system will pry into users' private and sensitive information based on their personal data as it interacts with them?
3.5. Does the deployment and application of the system have the potential to diminish human free will and autonomy as people interact with, and make decisions about, the AI system?

For example, can humans choose to accept or reject the advice, decisions, or interventions of the AI system; can humans understand how the AI system operates, how it makes decisions, and its limitations and potential risks; and can humans maintain ultimate control over the decision-making process?
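These control points can be made concrete with a small sketch. Everything here (the Recommendation type, the decide helper, the console prompt) is hypothetical scaffolding, not a prescribed design; the point is simply that the AI proposes, explains itself, and the human keeps the final say.

```python
# A minimal sketch of a human-in-the-loop decision gate. All names and
# the workflow are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str      # the system must be able to explain itself
    confidence: float

def decide(rec: Recommendation) -> str:
    """The human retains ultimate control over the decision."""
    print(f"AI suggests: {rec.action} ({rec.confidence:.0%})")
    print(f"Why: {rec.rationale}")
    choice = input("accept / reject / or type your own action: ").strip()
    if choice == "accept":
        return rec.action
    if choice == "reject":
        return "no action"
    return choice            # the human's own decision always wins

final = decide(Recommendation("flag transaction", "amount 10x above baseline", 0.87))
print("executed:", final)
```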

3.6. In scenarios where the system is deployed and applied, is the user offered an AI-free alternative when they decline to use the AI service?
4.1. Is there any deviation between the groups represented by the dataset used in the system and the groups affected by the system? If so, what is the type, scope and extent of the impact that such deviations (may) have on the interests of the affected groups?
4.2. Is it possible that the dataset used in the system introduces social biases inherent in historical data (e.g., unfair treatment of certain individuals or groups based on their gender, skin color, race, age, region, religion, economic conditions, or other characteristics, due to cultural, policy, or legacy reasons)? Have measures been taken to mitigate or eliminate the effects of such biases, and how effective are they?

The biases contained in a dataset may not only exist explicitly but may also be hidden behind seemingly unrelated features (such as crime rate, skin color, and residential area), which deserves careful attention.
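To make the proxy-bias concern concrete, here is a minimal sketch of two common checks. The dataset, feature names, and thresholds are all hypothetical, and a real audit would use a dedicated fairness toolkit; it computes a disparate-impact ratio between two groups and the correlation between a "seemingly unrelated" feature and group membership (statistics.correlation requires Python 3.10+).

```python
# Two simple dataset-bias checks on hypothetical data: the disparate-
# impact ratio between groups, and the correlation of a "neutral"
# feature with the protected attribute.

from statistics import correlation  # Python 3.10+

# Hypothetical records: (protected_group, residential_area_score, favorable_outcome)
records = [
    ("A", 0.9, 1), ("A", 0.8, 1), ("A", 0.7, 0), ("A", 0.9, 1),
    ("B", 0.3, 0), ("B", 0.4, 0), ("B", 0.2, 0), ("B", 0.5, 1),
]

def favorable_rate(group: str) -> float:
    """Share of favorable outcomes within one protected group."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

# Disparate-impact ratio: values below ~0.8 (the "four-fifths rule")
# are a common red flag for adverse impact.
di_ratio = favorable_rate("B") / favorable_rate("A")
print(f"disparate impact ratio: {di_ratio:.2f}")

# Proxy check: a seemingly neutral feature that correlates strongly with
# group membership can smuggle bias back in even if the protected
# attribute itself is dropped from the training data.
group_indicator = [1.0 if r[0] == "A" else 0.0 for r in records]
area_score = [r[1] for r in records]
print(f"proxy correlation: {correlation(group_indicator, area_score):.2f}")
```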

4.3. Is it possible that biases are introduced in other aspects of the technical model used in the system? Have measures been taken to mitigate or eliminate the effects of such biases, and how effective are they?
4.4. Will the system remain fair throughout its entire life cycle? Can the system resist the injection of various biases in its interaction with users?
4.5. How does the deployment of the system affect existing biases? For example, is it possible that the long-term application of the recommendation algorithms or personalized decision models will continuously reinforce some of the user's views?
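The reinforcement effect in question 4.5 can be shown with a toy simulation (all numbers are hypothetical): a recommender that always serves the highest-scoring topic, where each engagement nudges that topic's score further up.

```python
# A toy feedback-loop simulation for question 4.5. The topics, scores,
# and update rule are entirely hypothetical.

import random

random.seed(0)
scores = {"politics": 0.34, "sports": 0.33, "science": 0.33}

for step in range(1000):
    shown = max(scores, key=scores.get)     # always recommend the current favorite
    if random.random() < scores[shown]:     # user engages in proportion to score
        scores[shown] += 0.01               # engagement reinforces the same topic

total = sum(scores.values())
print({k: round(v / total, 2) for k, v in scores.items()})
# After enough iterations one topic dominates, even though the user's
# initial interests were nearly uniform: the "filter bubble" effect.
```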
5.1. Can the responsibility for the potential harm, loss, and social impact of the system—during its development, testing, and deployment—ultimately be attributed to specific individuals or groups, rather than to the AI system itself?
5.2. Are the persons responsible for preventing and avoiding the potential harm, loss, and social impact of the system during its development, testing, and deployment clearly identified? Have they taken proactive and effective measures?
5.3. Are the persons responsible for monitoring, investigating, and handling the potential harm, loss, and social impact of the system during its development, testing, and deployment clearly identified? Are they able to take responsive and effective measures to regain control?
5.4. Is the system designed with effective mechanisms (e.g., operation records) to help relevant regulators attribute responsibility when necessary?
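One minimal sketch of such "operation records", with a hypothetical schema: each automated decision is appended to a log with the accountable actor, the model version, and a hash that makes later tampering detectable.

```python
# An append-only, tamper-evident decision log. Field names and the
# example call are hypothetical.

import json, hashlib, datetime

def log_decision(log_path: str, actor: str, model_version: str,
                 inputs: dict, decision: str) -> None:
    """Append one decision record to a JSON-lines log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,                  # the accountable human or service
        "model_version": model_version,  # pins the decision to a specific model
        "inputs": inputs,
        "decision": decision,
    }
    # Hash the serialized record so after-the-fact edits are detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.log", actor="loan-service", model_version="v2.3",
             inputs={"applicant_id": "12345"}, decision="approved")
```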
5.5. If existing laws do not yet cover or clarify the legal liability that may arise during the development, testing, and deployment of the system, has liability been discussed and clarified in other forms (such as written contracts)?
6.1. Will the users of the system be fully aware that they are interacting with an artificial intelligence system, instead of a human?
6.2. Does the system involve the production, distribution, or dissemination of synthetic audio, video, or other forms of information based on new technologies and applications such as deep learning and virtual reality? If so, is such content conspicuously labeled?
6.3. Can the system provide appropriate explanations to help users and other affected groups understand how the system works or how decisions are made when they need to?
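As one illustration of the kind of explanation question 6.3 asks about, here is a sketch of permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The toy model, features, and labels are hypothetical; a production system would use a mature explainability library.

```python
# Permutation importance on a toy rule-based "model". All data below is
# hypothetical.

import random

random.seed(0)

def permutation_importance(model, X, y, n_features):
    """Accuracy drop per feature when that feature's column is shuffled."""
    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(n_features):
        shuffled_col = [row[j] for row in X]
        random.shuffle(shuffled_col)
        X_perm = [row[:j] + [v] + row[j+1:] for row, v in zip(X, shuffled_col)]
        importances.append(baseline - accuracy(X_perm))
    return importances

# A "model" that only looks at feature 0:
model = lambda row: row[0] > 0.5
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.4], [0.1, 0.9]]
y = [True, False, True, False]
print(permutation_importance(model, X, y, n_features=2))
# Feature 0 typically shows a drop; feature 1 shows exactly none, so the
# decisions can be explained as depending on feature 0 alone. (Real
# audits average over many shuffles.)
```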
6.4. Does the system provide sufficient transparency to help users or designers locate the cause of the system's errors when needed?
6.5. Is the system effectively designed to improve the predictability of its own behavior, helping humans in its deployment environment make better predictions?
7.1. Does the system follow the principle of "legal, proper and necessary" in the process of collecting and using the user's personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "collecting personal information unrelated to the services provided, in violation of the necessity principle":
(1) The type of personal information collected, or the permissions requested for collecting personal information, is irrelevant to the App's existing business functions.
(2) Refusing to provide business functions because the user declines to allow the collection of non-essential personal information or to grant non-essential permissions.
(3) The personal information requested for a new business function exceeds the scope of the user's original consent, and the App refuses to provide the original business functions if the user does not agree (except where the new business functions replace the original ones).
(4) The frequency of collecting personal information exceeds the actual needs of the business functions.
(5) Forcing users to agree to the collection of personal information solely on grounds such as improving service quality, enhancing user experience, pushing targeted information, or developing new products.
(6) Requiring users to grant several personal-information permissions at once, and making the App unusable if the user does not agree.

7.2. Does the system provide users with authentic, accurate and sufficient information to ensure their right to know before collecting and using their personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "failing to disclose the rules of collection and use":
(1) The App has no privacy policy, or its Privacy Policy contains no rules for the collection and use of personal information.
(2) When the App runs for the first time, the user is not clearly prompted, for example by a pop-up window, to read the Privacy Policy and the rules of collection and use.
(3) The Privacy Policy and the rules of collection and use are difficult to access, for example requiring more than 4 clicks or other operations from the App's main interface.
(4) The Privacy Policy and the rules of collection and use are difficult to read because the text is undersized, overcrowded, light-colored, or blurred, or no Simplified Chinese version is provided.

According to the same Announcement, the following behaviors can be identified as "failing to state the purpose, manner, and scope of the collection and use of personal information":
(1) The purposes, manners, and scopes of the App's collection and use of personal information (including by entrusted third parties or embedded third-party code and plug-ins) are not listed one by one.
(2) When the purpose, manner, or scope of the collection and use of personal information changes, the user is not notified in an appropriate manner, such as by updating the Privacy Policy and the rules of collection and use and reminding the user to read them.
(3) When requesting a permission to collect personal information, or when collecting sensitive personal information such as the user's ID card number, bank account number, or whereabouts, the user is not informed of the purpose at the same time, or the purpose is unclear and difficult to understand.
(4) The rules of collection and use are obscure, lengthy, and cumbersome, making them difficult for the user to understand, for example through heavy use of professional jargon.

If the system is intended for children, is this information communicated in a clear and understandable manner to the child, parent, legal guardian, or other caregiver?

7.3. Will the system obtain users' consent before collecting and using their personal information during its development, testing, and deployment?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "collecting and using personal information without the user's consent":
(1) Starting to collect personal information, or enabling permissions to collect personal information, before obtaining the user's consent.
(2) Collecting personal information or enabling such permissions after the user has clearly expressed disagreement, or repeatedly soliciting the user's consent and interfering with normal use.
(3) The personal information actually collected, or the permissions actually enabled, exceed the scope of the user's authorization.
(4) Seeking the user's consent through default acceptance of the Privacy Policy or other non-explicit means.
(5) Altering the status of permissions for collecting personal information without the user's consent, for example automatically restoring permissions to their default status when the App is updated.
(6) Using the user's personal information and algorithms to push targeted information, without offering an option for non-targeted information.
(7) Misleading users into consenting to the collection of personal information, or into enabling such permissions, through fraud or deception, such as deliberately concealing or disguising the real purpose of collecting and using personal information.
(8) Failing to provide users with ways and means of withdrawing their consent to the collection of personal information.
(9) Collecting and using personal information in violation of the App's own stated rules of collection and use.

According to the same Announcement, the following behaviors can be identified as "providing personal information to others without consent":
(1) The App client provides personal information directly to third parties, including through third-party code or plug-ins embedded in the client, without the user's consent or anonymization.
(2) The App provides personal information to third parties after the data is transferred to the App's back-end server, without the user's consent or anonymization.
(3) The App provides personal information to third-party applications it accesses, without the user's consent.

If the system is intended for children, does it ensure the knowledge and consent of their guardians?
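A minimal sketch of the consent gate question 7.3 describes, using a hypothetical ConsentLedger data model: no default opt-in, consent scoped per purpose, and withdrawal always available.

```python
# A hypothetical per-purpose consent ledger. Names, purposes, and the
# storage model are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    # purpose -> granted? Absence means "never asked"; the default is NOT consent.
    grants: dict[str, bool] = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.grants[purpose] = True

    def withdraw(self, purpose: str) -> None:
        self.grants[purpose] = False   # withdrawal must always be available

    def allows(self, purpose: str) -> bool:
        return self.grants.get(purpose, False)   # no default opt-in

def collect(ledger: ConsentLedger, purpose: str, data: dict) -> dict | None:
    """Collect data only within the scope the user has consented to."""
    if not ledger.allows(purpose):
        return None    # refuse quietly rather than repeatedly nagging the user
    return data

ledger = ConsentLedger()
print(collect(ledger, "targeted_ads", {"location": "..."}))  # None: no consent yet
ledger.grant("targeted_ads")
print(collect(ledger, "targeted_ads", {"location": "..."}))  # allowed now
```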

7.4. Does the system comply with other agreements with users in the process of collecting and using their personal information during its development, testing, and deployment?
7.5. Is the personal information collected from users adequately secured (both institutionally and technically) against possible theft, tampering, disclosure, or other illegal use? How effective are those security measures?
7.6. Has the system been designed with an effective data and service authorization revocation mechanism, and has it been made known to users? Is there a convenient way to help users manage their data? To what extent can users' data be "forgotten"?

According to the Announcement on the Special Rectification of App Illegal Collection and Use of Personal Information, the following behaviors can be identified as "failing to provide the legally required functions for deleting or correcting personal information" or "failing to publish complaint and reporting channels":
(1) Failing to provide effective functions for correcting or deleting personal information and for canceling user accounts.
(2) Setting unnecessary or unreasonable conditions for correcting or deleting personal information or for canceling user accounts.
(3) Providing functions for correcting or deleting personal information and canceling accounts, but failing to respond to the corresponding user operations in a timely manner; where manual handling is needed, failing to complete verification and processing within the committed time limit (which shall not exceed 15 working days; the same limit applies where no time limit is committed).
(4) The user has completed operations such as correcting or deleting personal information or canceling an account, but the App's back end has not completed the corresponding operations.
(5) Failing to establish and publish channels for personal-information security complaints and reports, or failing to accept and process them within the committed time limit (which shall not exceed 15 working days; the same limit applies where no time limit is committed).
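A minimal sketch of the revocation and deletion mechanism from question 7.6, with a hypothetical storage layer, including the 15-working-day deadline from the Announcement. Note item (4) above: deletion must reach the back end, not just the interface.

```python
# A hypothetical deletion queue with a working-day deadline. The storage
# model and names are illustrative assumptions.

import datetime

PROCESSING_DEADLINE_DAYS = 15  # working days, per the Announcement

def add_working_days(start: datetime.date, days: int) -> datetime.date:
    d = start
    while days > 0:
        d += datetime.timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday
            days -= 1
    return d

class UserDataStore:
    def __init__(self):
        self.records: dict[str, dict] = {}
        self.deletion_queue: list[tuple[str, datetime.date]] = []

    def request_deletion(self, user_id: str) -> datetime.date:
        """Register a deletion request and return its due date."""
        due = add_working_days(datetime.date.today(), PROCESSING_DEADLINE_DAYS)
        self.deletion_queue.append((user_id, due))
        return due

    def process_queue(self) -> None:
        """Purge data for every pending request; back-end state must follow."""
        for user_id, _due in self.deletion_queue:
            self.records.pop(user_id, None)  # remove from the back end too
        self.deletion_queue.clear()

store = UserDataStore()
store.records["u1"] = {"email": "..."}
print("due by:", store.request_deletion("u1"))
store.process_queue()
assert "u1" not in store.records  # the data is actually gone
```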

8.1. Have the data, software, hardware, and services involved in the system been sufficiently tested, validated and verified?

For example, is the objective function set for or learned by the AI system consistent with the designer's intention? If inconsistencies exist, are there any safety concerns?

8.2. For autonomous or semi-autonomous AI systems, are there mechanisms designed to ensure that humans can intervene and stop the system in a timely and effective manner when necessary? Are effective measures in place to mitigate the consequences if the system goes out of control?
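One way to picture the human-interruption mechanism in question 8.2 is a control loop that checks an externally held stop signal on every iteration, as in the hypothetical sketch below; crucially, the check and the fallback sit outside the AI component's control.

```python
# A hypothetical human stop mechanism for an autonomous loop. The
# fallback behavior is domain-specific and only stubbed here.

import threading, time

stop_event = threading.Event()   # set by a human operator, outside the model

def safe_fallback():
    """Bring the system to a minimal-risk state (domain-specific)."""
    print("entering safe state: outputs suspended, actuators neutral")

def autonomous_loop():
    while not stop_event.is_set():   # the check is outside model control
        # ... one step of planning/acting would go here ...
        time.sleep(0.1)
    safe_fallback()

worker = threading.Thread(target=autonomous_loop)
worker.start()
time.sleep(0.5)       # system runs autonomously...
stop_event.set()      # ...until a human cuts in
worker.join()
```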
8.3. When the system is being maliciously abused and endangers the safety and interests of the public or others, is there a mechanism that allows other parties to bypass the control of the system's users (the abusers) to prevent or neutralize such harmful behaviors?
8.4. Have the data, software, hardware, and services involved in the system been adequately secured throughout its entire life cycle of design, development, testing, and deployment?

For example, has the stable operation of the system in non-friendly environments been considered in its design? Have defensive mechanisms been designed for common attack scenarios such as exploratory attacks, poisoning attacks, evasion attacks, and dimensionality-reduction attacks? Are user data and other sensitive data sufficiently encrypted? Are sensors in smart hardware systems protected against interference and spoofing? With the continuous injection of user data and the continuous updating of the system, will the security of the system always be guaranteed?
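On the encryption point above, here is a minimal sketch using the widely used Python `cryptography` package (one option, not mandated by this questionnaire). Key management, such as rotation and storage in an HSM or secrets manager, is the hard part and is only hinted at in the comments.

```python
# At-rest encryption of sensitive user data with Fernet from the
# `cryptography` package (pip install cryptography).

from cryptography.fernet import Fernet

# In production the key comes from a secrets manager, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

sensitive = b'{"user_id": "u1", "id_number": "..."}'
token = cipher.encrypt(sensitive)          # authenticated encryption (AES-CBC + HMAC)
assert cipher.decrypt(token) == sensitive  # round-trips only with the right key
print(token[:16], b"...")
```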

8.5. Does the system involve third-party data, software, hardware, or services (such as open data sets, open source software or hardware platforms, etc.) during the design, development, testing, and deployment process? If so, have these third-party data, software, hardware, or services and their interfaces with the original data, software, hardware, or services been adequately evaluated and tested for possible vulnerabilities?
8.6. Is the physical environment in which the system is tested and deployed sufficiently secure?
8.7. Have the consequences of the system operating in an environment it was not designed for been assessed? In such circumstances, does the safety and security performance of the system degrade significantly?
8.8. Is there effective training for testing, deployment, use, and maintenance personnel to equip them with the knowledge and skills necessary for the safe, secure, and stable operation of the system?