Artificial Intelligence Ethics Policy

DATA SCIENTISTS NETWORK ARTIFICIAL INTELLIGENCE ETHICS POLICY

INTRODUCTION
Data Scientists Network is an organization with a vision to build a world-class Artificial Intelligence (AI) knowledge, research, and innovation ecosystem that delivers high-impact transformational research, business applications, and AI-first start-ups, supports employability, and serves social-good use cases. We are committed to raising one million AI talents in 10 years and to building AI solutions that enhance the quality of life of two billion people in emerging markets. We are poised to accelerate Nigeria’s socio-economic development through the solution-oriented application of machine learning to social and business problems.

As an AI organization, we understand the need for responsible AI principles that guide the development and deployment of AI in our organization. We are committed to ensuring that AI systems are developed responsibly and in ways that warrant people’s trust; hence this policy.

PURPOSE
The purpose of this Policy is to set out the highest-level principles and expectations of Data Scientists Network and its subsidiaries (collectively, the “Company” or “DSN”) regarding the duties and obligations of all Company personnel to ensure standard ethical practice when developing and deploying any AI product. This Policy aligns with the DSN Code of Conduct and applies to Company Directors, Employees, consultants, and external contractors working for DSN on temporary assignments (collectively, “DSN Personnel”).

WHAT WE USE AI TO DO
At DSN, we use Artificial Intelligence to solve interesting problems for social good. Our work explores contributions in education, healthcare, and a range of other fields. We are optimistic about the emerging technologies of AI and the potential they hold. DSN leads research that focuses on addressing major barriers such as low-resource data and on developing locally relevant solutions, so that a larger group of people is included in existing AI technology.

OUR PRINCIPLES
The principles guiding us are intended to drive our commitment to developing AI technologies that embrace diversity, while creating the discipline to ensure that AI is used responsibly. These are not just theoretical concepts but concrete standards that will actively govern our research and product development and will shape our business decisions. To achieve this, we remain flexible in how we respond to the new methodologies that AI’s evolution brings.
We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

  1. Beneficence: DSN is committed to creating AI technology that is beneficial to humanity. We believe the development of AI should ultimately promote the well-being of all sentient creatures; hence we prioritize human well-being as an outcome in all system designs. Our systems and products will be developed for the common good and the benefit of humanity, while preserving dignity and sustaining the planet. We believe that in order to promote inclusiveness, we must focus on people. The goal of our research will be to develop AI solutions that do not jeopardize the safety of others. This implies we will not participate in or collaborate on projects that may misinform potential users or expose their personal information. We will assess a wide range of societal and economic variables as we consider the possible development and use of AI technology, and we shall only proceed if the overall likely advantages outweigh the foreseeable dangers and downsides.
  2. Fairness: We believe that the development of AI should promote justice and seek to eliminate all types of discrimination. Bearing in mind the risk of bias in AI systems influenced by the representations in their training data, we work towards creating fairness and reducing bias in our developments. Our approach will strive for a fair balance when representing race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. We are aware that distinguishing fair from unfair biases is not always simple and differs across cultures and societies. However, we will be consistent in our commitment to providing objective intelligence and will take affirmative steps to identify and mitigate bias.
  3. Reliability and Safety: At DSN, we develop AI technologies that are explainable. This ensures that whatever we contribute to research or bring to the market is well understood and usable, without casting doubt on our processes and results. We will continue to develop new techniques that are consistent with our own and society’s values. Our AI systems will be designed to be appropriately cautious and will be developed in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.
  4. Accountability: We believe people should be accountable for their AI systems, which is why we are committed to developing and employing mechanisms that identify responsibilities and provide accountability for the use of AI and its outcomes. When we design AI systems, we will ensure they provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control. We will be accountable for how our technologies impact society, and we will also ensure our partners, clients, and customers are accountable for the deployment of our AI systems.
    To develop a more sustainable, ethical AI system, we will ensure that we have governance in place that manages AI development. This may include documenting technical specifications for AI solutions, ensuring compliance, and guarding access to system design and operational information. It is also important to ensure that the quality of data used is reliable and representative, to avoid bias and other societal concerns. The security and privacy of sensitive data will be prioritized in the principles that guide our data gathering, curation, and use. Upon the development and deployment of any AI system, we will define the metrics for evaluation and the benchmark scores that should assess its performance. Deployed AI models are monitored in a variety of situations to determine their level of reliability and lifespan, which can help inform decisions about scaling up or spreading the system to new operational settings.
  5. Transparency: The goal of transparent AI is to be able to correctly explain and communicate the results of an AI model. Our transparency principle states that we will be open and transparent about how and why we use AI, as well as about each system’s limitations. We will provide the public and our customers with accurate model results. This is to safeguard the integrity of our AI approaches and to make clear the applications in which a system performs best, as well as the laws and policies that govern our data, technology, and model reusability.
  6. Respect the Law and Act with Integrity: We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.
  7. Privacy Design: We will incorporate our privacy policies in the development and use of our AI technologies. We will give opportunities for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data. Please read more in the DSN Data Protection and Governance Policy.

HOW WE USE AND DEPLOY OUR AI SYSTEM
We use AI when it is an appropriate means to achieve a defined purpose after evaluating the potential risks. We document and communicate the purpose, limitation(s), and design outcomes. Our AI Systems are tested at a level commensurate with foreseeable risks associated with the use of the AI.

Our AI systems are used in a manner consistent with respect for individual rights and liberties of affected individuals, and use data obtained lawfully and consistent with legal obligations and policy requirements.

We incorporate human judgment and accountability at appropriate stages to address risks across the lifecycle of the AI and inform decisions appropriately. We maintain accountability for iterations, versions, and changes made to the model.

We identify, account for, and mitigate potential undesired bias to the greatest extent practicable without undermining the AI’s efficacy and utility.

We use explainable and understandable methods, to the extent practicable, so that users, overseers, and the public, as appropriate, understand how and why the AI generated its outputs.

We periodically review our systems to ensure the AI continues to further its purpose and to identify issues for resolution, and we identify who will be accountable for the AI and its effects at each stage of its lifecycle, including responsibility for maintaining the records created.

In identifying and addressing risk, we involve appropriate stakeholders, such as consumers, technologists, developers, mission personnel, risk-management professionals, civil liberties and privacy officers, and legal counsel, who utilize our framework collaboratively, each leveraging their respective experiences, perspectives, and professional skills.

At DSN, we have applied AI broadly across the different arms we are committed to: Learning and Community, Research and Social Good Applications, Corporate Support and Solution Deployment, and Start-Ups/Ventures and National Development. The goal is to develop or advance state-of-the-art techniques that improve accessibility for the people who interact with these arms. Some of our impactful work has earned us recognition and promoted inclusion.

OUR FRAMEWORK
To assist with the implementation of these Principles, DSN has also created an AI Ethics Framework to guide personnel who are determining whether and how to procure, design, build, use, protect, consume, and manage AI and other advanced analytics. We also ensure that individuals involved in the design, development, review, deployment, and use of any AI have sufficient training to address the questions and issues raised below.

  1. PURPOSE: UNDERSTANDING GOALS AND RISKS
    To guarantee that we create AI that balances desired results with acceptable risk, we must first determine what goals we are attempting to achieve. As a result, we are dedicated to ensuring that the following issues are addressed:
  • What goal are we aiming for with this AI development, and what components go into it?
  • Is it necessary to use AI to attain this aim, or can we achieve it with less risk using other non-AI related methods?
  • Will AI be successful in accomplishing this aim?
  • What methods are the most suitable/preferred for this use case?
  • Does the efficiency and reliability of the AI in this particular use case justify its use for this purpose?
  • What benefits and risks, including risks to civil liberties and privacy, might exist when this AI is in use? Who will benefit?
  • Who or what will be at risk? What is the scale of each and likelihood of the risks? How can those risks be minimized, and the remaining risks mitigated?
  • Do the likely negative impacts outweigh the likely positive impacts?
  • What performance metrics best suit the AI (e.g., accuracy, precision, and recall), given the risks determined by mission managers, analysts, and consumers, and how will the accuracy of the information be communicated to each of those stakeholders? (A minimal metrics sketch follows this list.)
  • What impacts could false positive and false negative rates have on system performance, mission goals, and affected targets of the analysis?
  • Engagement with AI system developers, users, consumers, and other key stakeholders to ensure a common understanding of the goal of the AI and the related risks of using AI to achieve it.
  • Goals and risks documentation.
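    As a minimal illustration of the metrics questions above, the Python snippet below computes accuracy, precision, recall, and the false positive/negative rates from a confusion matrix. The data, names, and library choice (scikit-learn) are illustrative assumptions, not a prescribed DSN implementation.

        import numpy as np
        from sklearn.metrics import confusion_matrix

        # Illustrative only: y_true / y_pred stand in for a validation set
        # and a candidate model's predictions; they are not DSN data.
        y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 1])
        y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 0, 1])

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

        accuracy = (tp + tn) / (tp + tn + fp + fn)
        precision = tp / (tp + fp)  # how often a positive call is right
        recall = tp / (tp + fn)     # how many real positives are found
        fpr = fp / (fp + tn)        # false positive rate
        fnr = fn / (fn + tp)        # false negative rate

        print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
              f"recall={recall:.2f} FPR={fpr:.2f} FNR={fnr:.2f}")

    Which of these metrics matters most, and what value is acceptable, is a risk decision for the stakeholders named above, not a purely technical one.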
  2. DOCUMENTATION OF PURPOSE, PARAMETERS, LIMITATIONS, AND DESIGN OUTCOMES
    To ensure our AI systems are used properly, it is important to communicate (a) what the AI is for, (b) what it is not for, (c) how it was designed, and (d) its limitations. Documentation assists us not only with proper management of the AI, but also with determining whether the AI is appropriate for new purposes that were not originally envisioned.
    Documentation must be made available to all potential consumers of the AI. The following must be properly documented (a minimal model-card sketch follows this list):
  • Where the data came from, its downstream uses, and its shareability.
  • The downstream uses and shareability of the AI.
  • What rules apply to the data as a whole, and what rules apply to subsets.
  • The potential risks of using the AI and its output data, and the steps taken to minimize those risks.
  • The use cases for which the AI was, and was not, specifically designed.
  • The process for discovering undesired bias, and the conclusions reached.
  • How to verify and validate the model, and the frequency with which these checks should be performed.
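    As one hedged illustration of such documentation, the sketch below records these fields in a simple machine-readable “model card” structure. The field names and values are hypothetical examples, not a prescribed DSN schema.

        # Hypothetical model-card record; every field name and value below
        # is illustrative, not a mandated DSN format.
        model_card = {
            "purpose": "Flag applications for manual review",
            "not_for": ["Fully automated final decisions"],
            "data_provenance": "Consented application records, 2020-2023",
            "data_rules": {
                "whole": "NDPR-compatible use only",
                "subsets": "No re-identification of individuals",
            },
            "known_risks": ["Under-representation of rural applicants"],
            "risk_mitigations": ["Re-weighted training sample"],
            "bias_review": "Subgroup error analysis; see review report",
            "validation": {"method": "Hold-out test", "frequency_days": 90},
        }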
  3. TRANSPARENCY: EXPLAINABILITY AND INTERPRETABILITY
    DSN’s AI transparency principle is to use methods that are explainable and understandable, to the extent practicable, so that users, overseers, and the public, as appropriate, understand how and why the AI generated its outputs. While developing an AI system, the questions below must be answered (a sketch of one explainability technique follows the list):
  • Given the purpose of the AI, what level of explainability or interpretability is required for how the AI made its determination?
  • If a third party created the AI, how would you ensure a level of explainability or interpretability?
  • How are outputs marked to clearly show that they came from an AI?
  • How might you respond to an intelligent consumer asking, “How do you know this?” How will you describe the dataset(s) and tools used to make the output?
  • How was the accuracy or appropriate performance metrics assessed? How were the results independently verified? Have you documented and explained that machine errors may differ from human errors?
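    One widely used, model-agnostic way to approach the explainability questions above is permutation feature importance: shuffle one input feature at a time and measure how much the model’s score degrades. The sketch below uses scikit-learn on synthetic data; the dataset and model are illustrative assumptions, not DSN systems.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        # Synthetic data stands in for a real use case (illustrative only).
        X, y = make_classification(n_samples=500, n_features=5, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # A large drop in score when a feature is shuffled means the model
        # leans heavily on that feature.
        result = permutation_importance(model, X_test, y_test,
                                        n_repeats=10, random_state=0)
        for i, imp in enumerate(result.importances_mean):
            print(f"feature_{i}: importance={imp:.3f}")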
  4. HUMAN JUDGMENT AND ACCOUNTABILITY
    The potential purposes and applications of an AI could range from basic business tasks to highly sensitive intelligence analysis. The assessed risk will determine at which point in the process, and to what degree, a human will be involved in the AI’s operation. As we implement our accountability principles, the following questions must be resolved:
  • Given the purpose of the AI and the potential consequences of its use, at what points, if any, is a human required as part of the decision process?
  • If the AI could result in significant consequences such as an action with the potential to deprive individuals of constitutional rights or the potential to interfere with their free exercise of civil liberties, how will you ensure individual human involvement and accountability in decisions that are assisted through the use of AI?
  • Where and when should humans be engaged? Before the results are used in analysis? Before the outputs are provided for follow-on uses?
  • Who should be the accountable human(s)? Do they know that they are designated as accountable human(s)? What qualifications are required to serve in that role?
  • How is accountability transferred to another human? What are the access controls and training requirements for those operating at different stages in the AI lifecycle?
  • What does the accountable human need to know about the AI to judge its reliability and accuracy? How may introducing an accountable human produce cognitive biases and/or confirmation bias?
  • Who should be engaged for unresolved issues and disputes regarding the AI or its outputs?
  5. MITIGATING UNDESIRED BIAS AND ENSURING OBJECTIVITY
    In conducting analysis, it is required that we perform our functions “with objectivity and with awareness of [our] own assumptions and risks. [We] must employ reasoning techniques and practical mechanisms that reveal and mitigate bias.” In compliance with governing legislation, such as the Nigerian Constitution and the Nigeria Data Protection Regulation (NDPR), certain “biases” will be introduced as we design, develop, and use AI.
    In mitigating bias, we focus on identifying and minimizing undesired bias. “Undesired bias” is bias that could undermine analytic validity and reliability, harm individuals, or impact civil liberties such as freedom of speech, religion, travel, or privacy. Undesired bias may be introduced through the process of data collection, feature extraction, curating/labeling data, model selection and development, and even user training. Taking steps to discover bias throughout the lifecycle of an AI, to mitigate undesired bias, and to document and communicate known biases and how they were addressed is critical to long-term reliance on training data sets, to reusing models, and to trusting outputs for follow-on use.
    In view of the above, we ensure objectivity by resolving the following queries when using and deploying our products (a minimal subgroup-rate sketch follows this list):
  • How complete is the data on which the AI will rely?
  • Is it representative of the intended domain?
  • How relevant is the training and evaluation data to the operational data and context?
  • How does the AI avoid perpetuating historical biases and discrimination?
  • What are the correct metrics to assess AI’s output?
  • Is the margin of error one that would be deemed tolerable by those who use the AI?
  • What is the impact of using inaccurate outputs and how well are these errors communicated to the users?
  • What are the potential tradeoffs between reducing undesired bias and accuracy?
  • To what extent can potential undesired bias be mitigated while maintaining sufficient accuracy?
  • Do you know or can you learn what types of bias exist in the training data (statistical, contextual, historical, or other)?
  • How can undesired bias be mitigated? What would happen if it were not mitigated?
  • Is the selected testing data appropriately representative of the training data?
  • Based on the purpose of the AI, how much and what kind of bias, if any, are you willing to accept in the data, model, and output?
  • Is the team diverse enough in disciplinary, professional, and other perspectives to minimize any human bias?
  • How will undesired bias and potential impacts of the bias, if any, be communicated to anyone who interacts with the AI and output data?
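    As one hedged example of the checks above, the sketch below compares positive-prediction rates across subgroups, a simple demographic-parity check. The group labels and predictions are hypothetical, and this is one bias measure among many; the tolerable gap is a policy decision, not a purely technical one.

        import numpy as np

        # Hypothetical arrays: model predictions and a protected attribute
        # for the same individuals; not DSN records.
        y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
        group = np.array(["a", "a", "a", "b", "b", "b", "b", "a", "b", "a"])

        rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
        parity_gap = max(rates.values()) - min(rates.values())

        print("positive-prediction rate by group:", rates)
        print(f"demographic parity gap: {parity_gap:.2f}")
        # A large gap flags the model for deeper review before deployment.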
  6. TESTING
    Every system must be tested for accuracy in an environment that controls for known and foreseeable risks prior to being deployed. All AI systems must be tested by resolving the questions below (a minimal threshold-test sketch follows this list):
  • Based on the purpose of the AI and the potential risks, what level of objective performance on the desired performance metrics (e.g., precision, recall, accuracy) do you require?
  • Has the AI been evaluated for potentially biased outcomes, or for outcomes that cause an inappropriate feedback loop?
  • Have you considered applicable methods to make the AI more robust to adversarial attacks?
  • How and where will you document the test methodology, results, and changes made based on the test results?
  • If a third party created the AI, what additional risks may be associated with that third party’s assumptions, motives, and methodologies?
  • What limitations might arise from that third party claiming its methodology is proprietary?
  • What information should you require as part of the acquisition of the analytic?
  • What is the minimum amount of information you must have to approve an AI for use?
  • Was the AI tested for potential security threats in any/all levels of its stack (e.g., software level, AI framework level, model level, etc.)? Were resulting risks mitigated?
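    As a minimal sketch of such a test, the function below asserts that a candidate model clears pre-agreed release thresholds before deployment. The threshold values and function name are illustrative assumptions, not mandated DSN figures.

        from sklearn.metrics import precision_score, recall_score

        # Hypothetical release thresholds; the real values are set per
        # system based on the assessed risks.
        MIN_PRECISION = 0.80
        MIN_RECALL = 0.70

        def pre_deployment_check(y_true, y_pred):
            """Fail loudly if the candidate model misses a threshold."""
            precision = precision_score(y_true, y_pred)
            recall = recall_score(y_true, y_pred)
            assert precision >= MIN_PRECISION, f"precision {precision:.2f} too low"
            assert recall >= MIN_RECALL, f"recall {recall:.2f} too low"
            return precision, recall

        # Example usage: pre_deployment_check(y_val, model.predict(X_val))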
  7. PERIODIC REVIEW
    All AI systems should be checked at an appropriate, documented interval to determine whether they still meet their purpose and whether any undesired biases or unintended outcomes are appropriately mitigated. In view of the above, the following must be resolved (a minimal drift-check sketch follows this list):
  • How will user and peer engagement be integrated into the model development process and periodic performance review once deployed?
  • Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.?
  • What are the checks for this model?
  • As time passes and conditions change, is the training data still representative of the operational environment?
  • How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed?
  • How much distributional shift or model drift from baseline performance is acceptable?
  • Who is responsible for checking the AI at these intervals?
  • How will the accountable human(s) address changes in accuracy and precision due to either an adversary’s attempts to disrupt the AI or unrelated changes in the operational/business environment, which may impact the accuracy of the AI?
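    One common, hedged way to operationalize these drift questions is a two-sample test comparing a feature’s distribution at training time against live operational data. The sketch below uses the Kolmogorov-Smirnov test on synthetic samples; the threshold and review action are illustrative assumptions.

        import numpy as np
        from scipy.stats import ks_2samp

        # Synthetic samples: one feature as seen at training time versus
        # in the live environment (illustrative data only).
        rng = np.random.default_rng(0)
        train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
        live_feature = rng.normal(loc=0.3, scale=1.0, size=1000)

        stat, p_value = ks_2samp(train_feature, live_feature)
        DRIFT_ALPHA = 0.01  # hypothetical trigger; the real interval and
                            # threshold are documented per system
        if p_value < DRIFT_ALPHA:
            print(f"Possible shift (KS={stat:.3f}, p={p_value:.4f}); "
                  "escalate to the accountable human for review.")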

  8. STEWARDSHIP AND ACCOUNTABILITY: TRAINING DATA, ALGORITHMS, MODELS, MODEL OUTPUTS, AND DOCUMENTATION
    Before the AI is deployed, it must be clear who will have responsibility for the continued maintenance, monitoring, updating, and decommissioning of the AI. Hence, the questions below must be resolved accordingly:
  • Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?
  • Who is ultimately responsible for the decisions of the AI, and is this person aware of the intended uses and limitations of the analytic?
  • Who is accountable for the ethical considerations during all stages of the AI lifecycle?
  • If anyone believes that the AI no longer meets this ethical framework, who will be responsible for receiving the concern and, as appropriate, investigating and remediating the issue?
  • Do they have the authority to modify, limit, or stop the use of the AI?

  9. LEGAL OBLIGATIONS AND POLICY CONSIDERATIONS GOVERNING THE AI AND THE DATA
  • What authorities, legislation, agreements, or policies govern the collection or acquisition of all sources of data related to the model (training, testing, and operational data)?
  • Who can clarify limitations arising from this legislation?
  • What legal or policy restrictions exist on the use of data under this authority/legislation? (For example, data subject to the NDPR should be used for a purpose that is compatible with that for which the data was collected).
  • How must data be stored, shared, retrieved, accessed, used, retained, disseminated, and dispositioned under the authority/agreement as well as relevant constitutional, statutory, and regulatory provisions?
  • What authorities or agreements apply to the AI itself, including the use, modification, storage, retrieval, access, retention, and disposition of the AI?
  • Are there any proposed downstream applications of the AI that are legally restricted from using the underlying data?
  • Does combining data with other inputs from the AI create new legal, records management, or classification risks relating to how the information is maintained and protected?

The DSN AI Ethics Principles and framework supplement the statutory and regulatory provisions and do not modify or supersede applicable laws, executive orders, or policies. Instead, they articulate the general norms that DSN elements should follow in applying those authorities and requirements.

POLICY REVIEW
This policy shall be reviewed and approved by DSN Management every two years; however, exceptions shall be made if pressing matters arise before the review date.