Українська правда

Ethical implementation of AI. How is Ukraine going to regulate the new industry?


In June, 14 Ukrainian IT companies created a self-regulatory organization to support ethical approaches to the implementation of artificial intelligence in Ukraine. These companies committed to developing innovative products in compliance with the principles of safe use of AI, proclaimed in the Voluntary Code of Conduct.

Other companies can join the newly created organization, provided they meet the criteria. The IT Ukraine Association will assist in its development, and the Center for Democracy and the Rule of Law (CEDEM) will act as the secretariat.

CEDEM says it has chosen a soft, bottom-up approach to establishing AI regulation: companies are first prepared for the standards, and only then does the state adopt legislation. The first stage is a period of self-regulation. We asked IT companies and experts what they think about the ethical implementation of AI in Ukraine. But first, a few words about AI regulation in other countries.

AI regulation in the US, EU, China and Japan

In the United States of America, the ethical implementation of artificial intelligence has been one of the key topics in recent years, and the country has several approaches and initiatives on the issue. For example, in 2022, the White House presented the AI Bill of Rights, a draft document that defines citizens' rights in the digital age:

▪️protection against algorithmic discrimination;

▪️transparency of AI system decisions;

▪️the ability to opt out of fully automated decisions;

▪️data security and privacy protection.

As experts note, although the document is non-binding, it is intended to serve as a basis for policy decisions on artificial intelligence in cases where existing legislation or policy offers no relevant guidance.

It is also worth mentioning the Executive Order on AI — a presidential decree of Joe Biden from 2023, which established requirements for the safety, transparency, and testing of AI before mass adoption. This order was considered the most comprehensive document regulating the AI industry in the United States. It was repealed by President Donald Trump within hours of his inauguration on January 20, 2025.

Speaking of new initiatives from the White House, US President Donald Trump recently signed three executive orders and unveiled an ambitious AI Action Plan that promises to remove regulatory barriers, accelerate the construction of data centers, and promote American technology around the world. The initiative signals, on the one hand, that the US will no longer hold the industry back with excessive control measures, as Europe does. Trump has also threatened states that impose strict AI regulation (such as New York) with the loss of federal funding.

This is what the home page of the ai.gov website looks like

The White House also maintains a website, ai.gov, with information about all of the president's AI initiatives. "America is the country that started the artificial intelligence race. And, as president of the United States, I am here today to declare that America will win," Trump said in a July speech to technology executives.

In turn, the US National Institute of Standards and Technology (NIST) released the AI Risk Management Framework in 2023, which helps companies build reliable and secure systems.

Large technology corporations such as Google, Microsoft, OpenAI, IBM, Meta, and Amazon are also developing codes of ethics for their products.

Here are some of the principles of tech giants:

▪️transparency of algorithms;

▪️prevention of bias;

▪️responsibility of developers for the consequences of use;

▪️environmental and energy efficiency of systems.

In July 2023, seven leading AI software developers agreed to take on voluntary security commitments proposed by the US government. In particular, the companies agreed to have their products tested by independent experts. The tech giants also promised to prioritize security and collaboration over competition.

Leading American universities are creating AI ethics labs to study the social consequences of AI, and civil society organizations are actively debating human rights and privacy protection, but the general approach is: "innovation first, but responsibly." The idea is to let companies deploy new solutions quickly while creating flexible standards that protect human rights.

As for the European Union, everything there happens through strict legislative regulation. In 2024, the European Parliament adopted the EU AI Act, the world's first comprehensive law on artificial intelligence. The key principles of the document are:

▪️classification of AI systems by risk levels (from minimal to unacceptable);

▪️prohibition of "dangerous practices" (for example, social scoring as in China);

▪️mandatory certification and audit for high-risk systems (medicine, transport, justice);

▪️strict liability of companies for violations.

EU AI Act: comprehensive implementation schedule; photo from fpf.org

Overall, the European Union focuses on ethics, human rights, and preventing abuse, though excessive regulation could slow the development of startups.
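The risk-tier logic described above can be sketched in code. This is an illustrative mock-up only: the tier names follow the Act's general scheme, but the example systems and obligation strings are simplified assumptions, not the law's legal definitions.

```python
# Hypothetical sketch of EU AI Act-style risk tiers.
# Tier names mirror the Act's scheme; examples and duties are illustrative.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters: no extra obligations
    LIMITED = "limited"            # e.g. chatbots: transparency duties
    HIGH = "high"                  # e.g. medicine, transport, justice
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited

# Compliance duties attached to each tier (simplified, assumed wording).
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI use to users"],
    RiskTier.HIGH: ["conformity assessment", "mandatory audit", "human oversight"],
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance duties attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The point of the tiered design is proportionality: the heavier the potential harm, the heavier the compliance burden, up to an outright ban.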

China, meanwhile, recently proposed creating an international organization for cooperation in the field of artificial intelligence (an AI-UN? — ed.). As the country's Prime Minister Li Qiang stated, the development of AI must not become a closed domain for a few countries and companies. According to him, countries' approaches to regulation currently differ widely, which prevents the creation of an effective control system. The Prime Minister of the People's Republic of China called for coordinating efforts and forming a common framework.

Li Qiang, Prime Minister of the People's Republic of China; photo: Reuters

As you might have guessed, AI regulation Chinese-style means state control and the use of AI as an instrument of power. It involves state permits, censorship, mandatory licensing, and so on. Examples of this kind of AI regulation include social scoring, mass surveillance, and content control. These processes are mainly governed by the 2017 document A Next Generation Artificial Intelligence Development Plan, though many newer rules and regulations also shape the development of China's artificial intelligence industry.

Experts at the Massachusetts Institute of Technology note that China's fragmented, piecemeal approach to AI regulation is gradually changing. Under this approach, the state introduces one set of rules for algorithmic recommendation services of TikTok-like systems, another for deepfakes, and yet another for generative artificial intelligence.

Japan, in turn, is focusing on the concept of Society 5.0 — a society where digital technologies (AI, IoT, robotics) are integrated into everyday life to improve people's quality of life. This is a human-centric approach, where artificial intelligence should help not only business, but also society as a whole.

The main values that the Japanese adhere to in AI regulation are transparency and explanation of algorithms; non-discrimination and fairness; respect for privacy; coexistence of people and machines, not replacement of people. Japan has AI R&D Guidelines from 2017 and AI Governance Principles from 2022, which spell out the main principles.

What will AI regulation be like in Ukraine?

In June 2024, the Ministry of Digital Transformation presented the White Paper on AI Regulation, a document detailing Ukraine's future approach to regulating artificial intelligence. The head of the ministry, Mykhailo Fedorov, noted at the time that the regulations proposed in the document would not apply to the military sphere.

Screenshot from the White Paper on AI Regulation from the Ministry of Digital

A year later, Grammarly, MacPaw, LetsData, DroneUA, WINSTARS.AI, Gametree.me, YouScan.io, EVE.calls, Valtech, LUN, Yieldy, SoftServe, Uklon, and Preply founded an organization in Ukraine that will support ethical ways to implement AI.

The goals and objectives of this self-regulatory organization (SRO) are:

▪️promoting the ethical and responsible use of AI;

▪️implementation of the provisions of the code;

▪️monitoring compliance with established standards among SRO members;

▪️supporting innovation and sharing experience between SRO members.

It is reported that the companies have committed to reporting annually on the implementation of secure solutions in their products; compliance with the principles will be monitored by the Center for Democracy and the Rule of Law and the IT Ukraine Association.

MacPaw told us that the company joined the organization because it is important for it to define the rules for working with AI, not just adapt to changes. "One of MacPaw's main focuses today is working with AI and integrating it into the company's products. Therefore, our participation in the first such organization in Ukraine is a natural and logical step. The initiative will help companies develop AI systems ethically, as well as prepare for European regulation and the law on AI in Ukraine," said Volodymyr Kubitsky, Director of AI at MacPaw.

Volodymyr Kubitsky, Director of AI at MacPaw; photo from Facebook

According to him, for MacPaw, the ethical use of AI means creating technologies with respect for users, because AI should be an assistant, a tool for human empowerment, and not a means of manipulation or control.

"In working on the smart AI assistant Eney for macOS, we pay a lot of attention to user privacy and ensuring their control over personal data. We are thinking about how to implement the right to be forgotten approach so that users can completely delete the information that the system has stored about them. We don't want the system to know more than it needs to," explained the Director of AI at MacPaw.

He also gave examples of unethical use of AI — it could be the use of users' personal data without their consent or disinformation, when algorithms spread false information to influence public opinion. "In fact, AI can be used in various malicious actions, including cyberattacks or the creation of deepfakes. Therefore, today it is important to develop an effective system for regulating AI and its use," the specialist noted.

According to Volodymyr Kubitsky, one of the biggest risks of unethical use of AI is privacy, because people increasingly trust AI with their personal data, thoughts and preferences and do not always understand what consequences this may have. "All this information enters the system and can theoretically be stored and transmitted somewhere. The system can accumulate knowledge about a person and form their portrait: psychological profile, reactions, weaknesses. Against this background, there is an opportunity for manipulation, even imperceptible ones. To avoid this, clear rules and a culture of responsible use are needed. It is worth understanding that the system knows about you and having the opportunity to delete or limit it," the specialist concluded.

According to Olena Nahorna, an analytical linguist at Grammarly, AI must be ethical, which involves constant work to prevent bias and discrimination.

Olena Nahorna, analytical linguist at Grammarly; photo from Linkedin

"People should remain in control of their data and how they use AI recommendations. We consider unethical AI that misleads users, creates systemic inequalities for certain groups, is used without proper testing for harmful outcomes (such as biased language or offensive content), or deprives users of control," the linguist noted.

SoftServe told us that for the company, ethical use of artificial intelligence means transparency of decision algorithms, protection of personal data and privacy, prevention of discriminatory biases in models, and a focus on creating real value for people and business.

"We also clearly see examples of unethical use of AI, from manipulation of public opinion through deepfake content to the use of technology in mass attacks or fraudulent schemes. These are serious threats and challenges that need to be addressed. That is why we support initiatives that form the framework for the responsible implementation of AI in Ukraine. At SoftServe, we have developed training programs for teams so that employees can work with AI correctly and safely," explained Oleg Denys, co-founder and member of the Board of Directors of SoftServe.

Oleg Denys, co-founder of SoftServe; photo from biz.nv.ua

According to him, SoftServe implements only tools that guarantee security; each product is vetted by information security, IT, and legal teams, among others, and only then can it become part of the infrastructure.

Oleksandr Chumak, Chief Technical Officer at Uklon, told us that 16 LLM- and ML-based applications run across Uklon's services, along with an internal assistant, UklonAI, which helps automate routine tasks.

Oleksandr Chumak, Chief Technical Officer Uklon; photo: Uklon

"Uklon has internal policies on the use of AI tools, so we are ready to share this experience to raise awareness of the ethical principles of working with AI. Cooperation within a self-regulatory organization will help build a responsible and innovative AI ecosystem in Ukraine," the specialist noted.

The company also believes that human control is mandatory, especially in critical systems, as it allows for timely correction of errors, adaptation to unforeseen situations, and guaranteed safety.


"In my opinion, the responsible use of AI tools is a necessity that helps build trust in technology. AI is already changing the economy, education, and medicine, so the future depends on how we implement it. Unethical use of AI poses serious risks. To prevent them, a comprehensive approach is needed. This includes self-regulation through a voluntary code of ethics, state regulation, and adaptation of legislation to European norms, such as the EU AI Act. In addition, raising awareness among both developers and end users, as well as open interaction between business, government, and public organizations, are key," said Oleksandr Chumak, CTO at Uklon.

According to Oleksandr Krakovetsky, the author of a book about generative AI, CEO at DevRain and CTO at DonorUA, formally we are not dealing with the creation of a new organization in Ukraine, but with the signing of a memorandum by a number of Ukrainian IT companies.

Oleksandr Krakovetsky, CEO at DevRain and CTO at DonorUA; photo from dou.ua

"Such a document has no legal force and rather performs a declarative function designed to draw attention to the topic of ethical use of AI. The problem is that ethics, unlike legislation, does not have clear and universal criteria. Politically, declaring the "ethical use of AI" looks correct, but in practice there may be many nuances," explained Oleksandr Krakovetsky.

For example, if a company uses services from OpenAI, Anthropic, or Midjourney, against which copyright holders file lawsuits, can this be considered a violation of ethics? If the automation of banner design using AI leads to job cuts, is this ethically acceptable? According to the expert, such examples show that we need to talk not only about ethics as such, but about specific standards and frameworks.

"Some of the provisions of the memorandum do not belong to the ethical plane, but concern specific legislative requirements - for example, issues of privacy and data protection are already regulated by such documents as the GDPR and AI Act in the EU, HIPAA in the USA, and the Law on Personal Data Protection in Ukraine. In this case, it is no longer about ethics, but about mandatory norms that have legal force. Reading the document, one gets the impression that any real project, if all the stated principles are followed, turns into an excessively long and expensive process," the specialist said.

He doubts that Ukrainian companies with relatively limited resources will be able to meet these requirements if even giants like OpenAI or Meta do not demonstrate sufficient transparency in their own model training and testing processes.

"The initiative deserves support and it is positive that companies are drawing attention to the topic of ethical use of AI. However, I consider an approach that has specifics, standards, and clear control mechanisms to be more valuable. There are also concerns about potential legislative regulation, as it may be overly simplified and not take into account all the nuances of technological development," the expert noted.

And here are some more thoughts from Oleksandr Krakovetsky about the threats of unethical use of AI and countering these threats:

▪️AI can become a tool for controlling people, manipulating, deceiving, blackmailing, or financial fraud schemes; these are universal threats that are amplified by the scale and speed with which modern algorithms work;

▪️well-known companies are not interested in violating ethical or legal norms, since the greatest risk for them is the loss of reputation; however, this does not stop attackers who actively use new tools for attacks and enrichment; this is why systematic countermeasures are needed at the state and industry levels;

▪️one of the key mechanisms that can be applied is the certification of AI-based solutions; the logic can be the same as in the case of games or medical drugs: if the risk is low, internal self-assessment is sufficient, but if the solution affects people's lives and safety, independent verification and certification are needed;

▪️a similar principle may apply to AI systems; companies can independently define internal quality control procedures for their products, but the next step should be passing independent industry benchmarks; such assessments can identify, or confirm the absence of, signs of bias, manipulative practices, or other unethical aspects.

LetsData told us that they are building an AI radar that detects information operations, fake accounts, and synthetic identities in real time; in other words, they work with examples of how technology can be used maliciously, from deepfakes to automated propaganda distribution.

"For us, ethics is not a formality, but a requirement for engineering. We joined the self-regulation initiative to consolidate the rules of the game together with our colleagues: transparency, audit, regular reporting, and practical exchange of experience. This will allow the industry to develop faster, while increasing the level of trust in Ukrainian AI solutions on the global market," said Andriy Kusy, co-founder and CEO of LetsData.

Andriy Kusy, co-founder and CEO of LetsData; photo from ucu.edu.ua

According to him, the danger of irresponsible artificial intelligence is that it scales harmful actions: if previously a disinformation campaign required hundreds of people and weeks of work, now it can be automated in hours.

"The risks range from undermining trust in society and interfering in elections to direct financial losses for businesses. To prevent these threats, a comprehensive approach is needed: transparent risk management (threat modeling and model security testing); human control in decisions; data and intellectual property protection; open reporting and common self-regulatory rules," noted the CEO of LetsData.

Mariana Tataryn, Board Advisor and North America Representative at R&D Center WINSTARS.AI, told us that the company considers hidden data collection, "black boxes" in critical areas, algorithmic discrimination, and the use of AI for manipulation and fakes to be unethical uses of artificial intelligence.

Mariana Tataryn, Board Advisor at WINSTARS.AI

"We support the idea of self-regulation and implement practices that help avoid such risks. It is important to act together: business must take responsibility and implement self-regulation through transparent codes and independent audits, the state must establish clear and modern rules of the game, and society must develop critical thinking and digital literacy," the expert noted.

According to the founder of EVE.calls, Oleksiy Skrypka, joining the self-regulatory initiative is an investment in trust. "We have been working with voice artificial intelligence since 2016 and have been implementing projects in Ukraine, the US, and Europe for 9 years. Joining the self-regulatory initiative is an investment in trust: both in our company and in the Ukrainian IT industry in the eyes of the EU and international partners. The emergence of such an organization in Ukraine shows the maturity of the AI market," the entrepreneur noted.

Oleksiy Skrypka, founder of EVE.calls

In his opinion, it is important to understand that technology often outpaces laws, so sometimes there is no formal violation, but the damage is real.

"This is evident in everyday examples: dynamic prices in taxis or delivery, when the algorithm increases tariffs in certain areas or for "solvency" profiles and hidden discrimination occurs; credit scoring based on indirect criteria - device type, time of activity, which gives refusals without transparent reasons and actually creates "digital discrimination"; algorithmic selection of resumes, which reproduces historical biases of companies and unfairly cuts off candidates," explained Oleksiy Skrypka.

Valtech Ukraine also noted that it all comes down to trust.

Svitlana Herasymova and Roman Kuziv from Valtech Ukraine

"If people do not believe that AI is used honestly and safely, they will simply abandon it. And this means a loss of chances for development: for business, for science, and for the state. In addition, artificial intelligence directly affects our rights - to privacy, justice, and freedom. If we do not set limits, we can get a tool of control instead of a tool of progress. Like any tool, artificial intelligence can be used both for selfish purposes and to harm society," said Svitlana Herasymova, Managing Director of Valtech Ukraine, and Roman Kuziv, Head of the Company's Technology Department, in a comment.

Meanwhile, the brand and business director of LUN, Denys Sudilkovsky, told us that his company's joining the organization is an acknowledgment of its own responsibility.

Denys Sudilkovsky, brand and business director of LUN; photo from cedem.org.ua

"Who could have thought 20 years ago that social networks would influence democracies? And how will artificial intelligence change our world in the next 10? We may not define these processes, but at least we are trying to keep the consequences and changes that come with the spread of artificial intelligence within ethical frameworks - through the development of rules and practices for using innovations," said Denys Sudilkovsky.

According to him, LUN has been helping people buy their own homes for 17 years and tells buyers the truth about their dream apartment, neighborhood, or even an entire city. And the biggest threat in this market is that AI is blurring the boundaries of truth.

"Taking fake photos of a non-existent apartment or even "filming" a video inspection is a "golden age" for scammers. This is a challenge: how to protect people from fraud and verify not only housing, but also the sellers themselves as real living people? In addition to our own security solutions, we integrate services from "Diya" into our products, because we believe in the power of collective efforts," the entrepreneur explained.

According to the top manager, humanity is not keeping up with changes.

"Who should be held responsible if AI offends or harms a user? Should AI have freedom of speech? How can we trust the system if we don't know what data it is trained on and what algorithms or restrictions it is guided by? And does AI really serve our interests, and not the interests of corporations or other states? We, as adherents of innovation, want to help make the world safer for everyone. And this is, first of all, fair rules in the interests of man," Denys Sudilkovsky concluded.