AI Ethics: A Case Study on Dynamic Sound Recognition
In the age of Big Data, society is being shaped by the analysis and application of data. Data Science and Artificial Intelligence have already changed our world, so it is crucial to evaluate the impact of their applications on societal values. The drawbacks and benefits should be assessed in terms of framework, impact, and methodology. These applications are already part of daily life, spanning agriculture, air combat and military training, education, finance, health care, human resources and recruiting, music composition, customer service, reliability engineering and maintenance, autonomous vehicles and traffic management, social-media news feeds, work scheduling and optimization, and several other fields.
In 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence, prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). The guidelines set out seven core principles covering the development, deployment, and procurement of AI: (i) human agency and oversight; (ii) technical robustness and safety; (iii) privacy and data governance; (iv) transparency; (v) diversity, non-discrimination, and fairness; (vi) societal and environmental well-being; (vii) accountability.
Case Study: Dynamic Sound Recognition
Here I would like to take dynamic sound recognition, which has recently found commercial success, as a case study. In my community, it has been used to identify music from short sequences captured through a microphone. As a music enthusiast, I have already been using its first-generation algorithms, which match a recorded snippet against the most likely source and identify the performing artist and song title. A research and development company improved on these sound-recognition algorithms and built a mobile app, Epimetheus.
Epimetheus can recognize subtle signals and classify environmental noise in addition to recognizing human voices, advertisements, and music. The developers advanced the algorithm by incorporating machine learning, which enabled it to handle variations in its input. This advancement provides information about the sounds being processed, such as identifying the person who is speaking; if the sound is an advertisement, the app can link to the website where the product is sold.
Ethical concerns about the application began to rise when a company, Cronus Corp., sought to acquire Epimetheus and incorporate its sensing technology, databases, and information services into its own products. The corporation wanted assurance that the ethical risks had been assessed so that unanticipated harms could be minimized.
The researchers identified a potential error through adversarial testing, in which a transgender user of Epimetheus was misidentified. Here, limitations in the data (transgender individuals comprise only a small percentage of the world population) caused material harm to transgender individuals. This small error would be scaled up if Epimetheus were adopted by a large technology company: on the user side, the algorithm might categorize individuals in ways that do not match their gender identity multiple times per day. This is an illustrative example of minority populations losing out.
This ethical assessment led the algorithm designers to revise their framework. They added more diverse input data to the training sets; however, the error rate in categorizing the sex of transgender persons from their voices fell only marginally. The team concluded that an entirely new strategy was required.
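The kind of audit the team would have needed here is a per-group breakdown of the error rate, since an aggregate accuracy figure can hide a much worse rate for a small subgroup. A minimal sketch of such an evaluation follows; the group labels and evaluation records are fabricated for illustration and do not come from the case study:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the classification error rate for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping group -> fraction of misclassified records.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Fabricated evaluation records: overall accuracy looks high,
# but the minority group's error rate is far worse.
records = [
    ("majority", "f", "f"), ("majority", "m", "m"),
    ("majority", "f", "f"), ("majority", "m", "m"),
    ("minority", "m", "f"), ("minority", "f", "f"),
]
rates = error_rates_by_group(records)
print(rates)  # {'majority': 0.0, 'minority': 0.5}
```

The point of the sketch is that fairness review requires reporting metrics disaggregated by group, not just the overall rate.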
Although the calculated error rate for the identified issue is around 0.016%, several reviewers argued that zero is the only acceptable error rate in instances where such an error might harm members of already marginalized groups.
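A back-of-envelope calculation shows why the reviewers did not treat 0.016% as negligible once the system is deployed at scale. The user and query counts below are assumptions chosen purely for illustration; only the 0.016% figure comes from the case study:

```python
# Scaling a "small" error rate to a large deployment.
# All counts below are illustrative assumptions, not case-study data.
error_rate = 0.016 / 100          # 0.016% expressed as a fraction
users = 100_000_000               # hypothetical user base after acquisition
queries_per_user_per_day = 10     # hypothetical daily recognitions per user

expected_errors_per_day = error_rate * users * queries_per_user_per_day
print(f"{expected_errors_per_day:,.0f} misclassifications per day")
# prints "160,000 misclassifications per day"
```

Under these assumptions, a per-query error rate of 0.016% still produces misclassifications on the order of a hundred thousand per day, concentrated on the very group the error affects.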
After assessing the risks and engineering new algorithms, the team concluded that a zero error rate in the identification of transgender voices is impossible with this technology. Acknowledging this, they removed the labeling categories that performed poorly for marginalized groups. The team's argument for this ad hoc solution was that it allows the technology to mature further.
Experimental phases like these are necessary for technology companies working toward higher accuracy. The best way to face such challenges and newly emerging dilemmas is to conduct interdisciplinary development research with diverse teams from different backgrounds. These unprecedented areas demand ethical communities within data-science framework development, and volunteers willing to test products and services in the interest of social welfare.
References
1. Lo Piano, S. Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Humanit Soc Sci Commun 7, 9 (2020).
2. Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
3. Princeton Dialogues on AI and Ethics. https://aiethics.princeton.edu/