Ethical AI

The Current State of AI Ethics

Ethical issues have always surrounded artificial intelligence. Thanks to films such as "Ex Machina" and "Transcendence," which depict humanoid AI acting independently of humans, the theme is familiar even to the general public. These and older films raise questions about work ethics, slavery, and what it means to be "human" in the first place. While these fictional depictions of AI ethics have yet to materialize in the real world, AI ethics issues appear regularly in the news. Most recently, allegations of racial bias and discrimination within Google's AI ethics team raised the question of whether a team with such internal problems could bring justice and equality to the world of artificial intelligence. Can a team with its own bias problems conduct a credible AI bias study, or properly rate racist content on sites like YouTube?

Beyond these headlines, AI raises many more ethical concerns. Here are some representative examples.

Labor Concerns

While not always framed as an ethical issue, this may be the most common criticism of artificial intelligence solutions: from customer service to transportation, retail, and manufacturing, AI is said to be poised to replace entire categories of jobs. Long-term predictions suggest that AI will create new jobs to replace the ones it eliminates, but that will not solve the problems of displacement and unemployment that may arise as digital transformation progresses.

Healthcare Concerns

There are many legitimate and relevant applications of AI in healthcare, from insurance administration to billing systems. While patient privacy and security issues are often raised, the deeper concerns in healthcare surround the use of AI in areas such as patient triage, analysis of medical images and test results, and diagnosis itself. Problematic biases in medical AI have already been called into question. A large part of these problems stems from the lack of diversity in the data available for AI training, which is unlikely to be resolved easily.

Law Enforcement Concerns

Similar to healthcare, law enforcement AI applications are highly susceptible to bias in the training data used to create automated solutions. Time and time again, biases in facial recognition software used by law enforcement make headlines, prompting calls for reform and, in some cases, enacting new policies governing law enforcement's use of AI.

Personnel and Admissions Concerns

AI is already being used in human resource management and school admissions, and most of the time it works just fine. However, structural biases frequently result in applicants and candidates being rejected based on attributes that government agencies and schools claim not to consider, such as race, gender, income level, and national origin. In many cases, these prospective students and job candidates are unaware that they are being evaluated by artificial intelligence and have not consented to their information being used in this way.

The above concerns may take time to resolve, but the good news is that all of them are considered manageable. Several solutions for identifying and correcting these ethical issues are being discussed around the world, and some have already been implemented.

How will AI ethics evolve?

One of the most frequently proposed ways of setting and enforcing ethical standards in the AI community is to establish national and international external AI ethics committees that evaluate new and existing AI and impose sanctions on organizations and individuals that build or use unethical forms of artificial intelligence. In recent news, the European Union (EU) is enacting the Artificial Intelligence Act to establish regulations around AI and its use. The GDPR sets a precedent for this kind of far-reaching effort, but it remains to be seen how the Act will be enforced and whether it will prove effective. The US has no counterpart to this type of regulation, although the Department of Defense is considering one.

Another solution, already implemented by some companies and seemingly easier to establish, is to set up an in-house AI ethics committee to enforce company-wide policies. IBM and Microsoft have already done this to some extent, with considerably more success than the Google example mentioned at the beginning of this article. These internal committees strive to prioritize customer data privacy and organizational transparency, allowing researchers and data scientists both inside and outside the company to assess the ethical standards of various AI solutions throughout development. This approach is closely related to what is known as "explainable AI."

Regardless of how individual countries, regions, and companies proceed, a global consensus on the need for ethical standards, and on their application, appears to be emerging. In the long term, this is good news, especially for ethics-critical sectors such as healthcare and law enforcement, where AI must establish trust among potential adopters.


Source: https://www.crowdanalytix.com/ethical-ai/
CrowdANALYTIX Resource Library