Publication date: 13.10.2024
Magauiya Abay Darkhanuly
SDU University
Kaskelen, Kazakhstan
Abstract
This paper investigates the ethical concerns and biases associated with artificial intelligence while proposing solutions to these challenges. It examines historical issues related to advanced technologies, current problems arising from artificial intelligence, and potential future risks posed by its widespread use.
Through a comparative analysis of international approaches, including legislative initiatives in developed countries, the paper highlights effective strategies for managing artificial intelligence ethics. Based on this analysis, specific recommendations are provided for Kazakhstan to address potential ethical issues.
The paper proposes the creation of an AI Ethics Council and the integration of ethical education for artificial intelligence developers as effective methods to tackle these challenges. These measures aim to prevent artificial intelligence misuse, foster public trust, encourage responsible innovation, and align the country with global standards in artificial intelligence regulation and ethical practices.
Keywords: Artificial Intelligence, AI, Ethics, Bias, AI Regulation
There is a common perception that artificial intelligence (AI) only began to develop rapidly in recent decades. However, researchers have been exploring this field for over half a century [1, p. 2].
Over these years of research, numerous new technologies and patented solutions in AI have been developed, with each stage of progress contributing to its growth and expansion. While many predictions about an impending AI boom were made in the past, the real breakthrough came with the launch of ChatGPT in November 2022. This moment marked a turning point, underscoring that AI has become an integral part of daily life and ushering in a new era of technological advancement.
The foundation of AI is built upon complex algorithms and large datasets. Its effectiveness heavily depends on the quality of the training data and the code itself. High-quality training materials and well-written code enable AI to produce results that meet user needs and perform highly complex tasks.
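To make this dependence concrete, the following minimal sketch (an illustration using scikit-learn, not an experiment from this paper; the dataset, noise rate, and model are assumed purely for demonstration) trains the same classifier on clean labels and on deliberately corrupted labels, then compares test accuracy:

```python
# Minimal sketch: the effect of training-data quality on model accuracy.
# The data is synthetic and the 30% noise rate is an assumption chosen
# purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic, correctly labeled dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Simulate low-quality data by flipping 30% of the training labels.
y_noisy = y_tr.copy()
flip = rng.random(len(y_noisy)) < 0.30
y_noisy[flip] = 1 - y_noisy[flip]

for name, labels in [("clean labels", y_tr), ("30% noisy labels", y_noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    print(name, "-> test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

On data like this, the model trained on corrupted labels typically scores noticeably lower, mirroring the point that output quality is bounded by input quality.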
Previously, challenges included managing large datasets and limitations in computational power. However, over time, data processing has been optimized and computational resources have increased, making AI development more accessible. Today, advancements in technology allow many individuals to create and utilize their own AI systems.
In the current environment, it is crucial to address ethical issues surrounding AI promptly and to keep the corresponding guidelines up to date. As AI becomes more widely available and companies offer tools for building AI systems, the lack of clear ethical frameworks in some countries can lead to misuse of the technology, discrimination, and threats to privacy.
AI is increasingly influencing decision-making, data analysis, and user interaction. Addressing key ethical challenges will help ensure transparent use of AI, avoid many potential problems, and clearly define the boundaries of permissible use. This will facilitate the safe and responsible application of technology and help individuals better understand existing limitations and regulations.
Historically, each new technology has brought convenience and simplified many aspects of life. However, experience has shown that without regulation and standardization, new inventions can also have negative consequences.
For instance, while cars have made transportation incredibly convenient, approximately 1.19 million people die each year in road traffic accidents [2], despite existing regulations and strict penalties. Similarly, airplanes, nuclear power plants, and other technologies have become integral parts of our lives while also posing risks to humanity as a whole.
A notable example of such misuse is the Cambridge Analytica scandal, in which harvested user data was used in an attempt to manipulate public opinion during the 2016 U.S. presidential election [3]. AI can give rise to numerous ethical and bias-related issues across many fields, and these must be addressed to prevent negative impacts on society.
Deepfakes can mislead people, and even AI-powered voice bots can become tools for hacking or other fraudulent schemes.
Even if AI developers do not intend harm, they may unintentionally create biased systems. For example, if AI is developed predominantly by one demographic group, it may reflect their interests and perspectives, leading to biases against other groups. A notable issue arises with gender bias: AI systems created primarily by men may not account for the needs of women [4].
Additionally, outdated or biased training data can be problematic. Training AI on materials containing discriminatory or racist views from the past can result in systems that perpetuate these ideas, violating the rights of modern individuals. Therefore, it is essential to consider ethical and cultural aspects during AI development and use.
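Such bias can also be made measurable. The short sketch below (a hypothetical illustration with invented data, not a method proposed in this paper) computes the demographic parity difference: the gap between two groups in a model's rate of positive predictions.

```python
# Sketch of a basic fairness check: demographic parity difference.
# The predictions and group labels below are invented for illustration.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Hypothetical outputs of a screening model (1 = positive decision).
y_pred = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # two demographic groups

print(demographic_parity_difference(y_pred, group))  # |0.80 - 0.20| = 0.60
```

A gap near zero suggests the model's positive rates are similar across groups; a large gap, as in this invented case, is a signal to audit the training data and features.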
Without proper regulation and incentives, the development of AI may not progress as expected. With appropriate measures, however, AI could contribute significantly to a country's economic growth. Various countries have already begun addressing this issue, tailoring their approaches to their specific needs and circumstances.
The European Union was among the first to introduce a classification system for AI based on levels of risk, applying corresponding measures to each level. This approach helps identify risks in advance and develop strategies for their management or mitigation [5]. Additionally, the EU's General Data Protection Regulation (GDPR) addresses privacy concerns related to AI, allowing individuals to manage their personal data processed by AI systems securely and efficiently.
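As a rough illustration of this risk-based idea, the tiers can be sketched as a simple lookup (the tier names follow the AI Act proposal, but the obligations are paraphrased assumptions rather than legal text):

```python
# Simplified sketch of the EU AI Act's risk-based classification.
# Obligations are paraphrased assumptions, not legal language.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g., social scoring by authorities)",
    "high": "strict requirements: risk management, data governance, human oversight",
    "limited": "transparency duties (e.g., disclosing that users interact with AI)",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the paraphrased obligations for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier")

print(obligations_for("high"))
```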
In contrast, the United States has adopted a more decentralized model, with AI regulation varying by industry and state. However, the Blueprint for an AI Bill of Rights, introduced in 2022, emphasizes the protection of civil rights in the use of AI technologies. The National Institute of Standards and Technology (NIST) has also published an AI Risk Management Framework promoting transparency, fairness, and safety nationwide.
Canada has also been a significant player in ethical AI: the Montreal Declaration for Responsible AI, developed at the Université de Montréal, advocates ethical principles for AI development and sets a positive direction for the field.
Addressing ethical and bias issues related to AI in Kazakhstan offers several important benefits that could lead to positive changes within the country. First, it would help prevent the misuse of AI technologies, ensuring they are used for appropriate purposes and significantly reducing violations. Startups, IT companies, entrepreneurs, and the general public would gain greater confidence in the responsible use of AI, positively impacting the sector's growth in Kazakhstan. This trust would foster open discussions and enable swift and effective problem resolution.
Moreover, strengthening AI regulations would protect constitutional human rights, which is crucial in an increasingly online world where AI presents significant challenges. Aligning Kazakhstan's standards with global norms would also attract potential investors and international companies, encouraging them to establish offices and develop AI technologies within the country.
Finally, clear regulations and standards would give AI developers and users a defined framework for their work and innovation. This would help businesses confidently create and apply technological solutions, particularly for commercial purposes. Without such frameworks, standards, and regulation, development in this field would face significant hurdles.
To tackle these concerns and prevent negative consequences, it is important to establish an AI Ethics Council in Kazakhstan. The council should bring together experienced researchers and practitioners from diverse backgrounds: programmers, legal experts, citizens, entrepreneurs, and policymakers. It would develop guidelines for AI development and inform legislative initiatives. If it functions effectively, such a group could offer balanced solutions that take all stakeholders' perspectives into account.
Additionally, developing standards and certification processes for advanced AI technologies is essential. An AI Ethical Code could be created and regularly updated to reflect technological advances and evolving concerns.
Incorporating ethical principles into educational programs for programmers and AI developers is also crucial. Without an understanding of ethical boundaries and potential consequences, developers may inadvertently create systems that violate human rights or cause other serious issues.
References:
1. Dwivedi, Y.K., Sharma, A., Rana, N.P., Giannakis, M., Goel, P., & Dutot, V. (2023). Evolution of artificial intelligence research in Technological Forecasting and Social Change: Research topics, trends, and future directions. Technological Forecasting and Social Change, 192, art. no. 122579.
2. "Road traffic injuries." World Health Organization (WHO). Link - https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries
3. Hu, M. (2020). Cambridge Analytica’s black box. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720938091
4. Chin, C., & Robison, M. (2020, November). How AI bots and voice assistants reinforce gender bias. The Brookings Institution.
5. European Commission. (2021, April 21). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.