How should AI be regulated?

The adoption of inconsistent guidelines by individual companies may not be enough to ensure that AI development does not put innovation or profits ahead of human rights and needs.


Artificial intelligence (AI) is a technology with the potential to deliver remarkable gains in diverse fields such as medicine, education, and environmental health. But it also carries the potential for many types of misuse, including discrimination, bias, the erosion of human responsibility, and other ethical harms. This is why many experts call for the development of responsible AI rules and laws.

Some companies have developed their own sets of AI principles. For Microsoft, they are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

But the adoption of inconsistent guidelines by individual companies may not be enough to ensure that AI development does not put innovation or profits ahead of human rights and needs. So who should determine the rules that everyone must follow? What values will these rules reflect? And what should the rules be? These are important questions that cannot be fully examined here, but below we offer an introduction to some of the key issues and a look at what is already being done.

Defining responsible AI

Responsible AI means different things to different people. Some interpretations highlight transparency, responsibility, and accountability; others emphasize compliance with laws, regulations, and organizational and customer values.

Another approach is to avoid using biased data or algorithms and to ensure that automated decisions are explainable. The concept of explainability is especially important. According to IBM, explainable artificial intelligence (XAI) is "a set of processes and methods that enable human users to understand and trust the outputs and outcomes created by machine learning algorithms".
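As a minimal illustration of what explainability can look like in practice, the prediction of a linear model can be decomposed into per-feature contributions (weight times feature value), which is one of the simplest forms of the explanations XAI tools aim to provide. The feature names and weights below are hypothetical, chosen only for illustration:

```python
# Minimal explainability sketch: decompose a linear model's score into
# per-feature contributions (weight * value). The model and applicant
# data are hypothetical, for illustration only.

def explain_prediction(weights, bias, features):
    """Return the model's score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring model
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
bias = 0.1
applicant = {"income": 1.5, "debt_ratio": 0.8, "years_employed": 2.0}

score, contribs = explain_prediction(weights, bias, applicant)
print(f"score = {score:.2f}")
# List the features by how strongly they influenced the decision
for name, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

Real XAI methods (such as SHAP or LIME) extend this idea to complex, nonlinear models, but the goal is the same: letting a human see why a particular decision was made.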

Because of these different meanings, entities that produce rules and guidelines for the use of AI must carefully define what they hope to achieve. Even after making this decision, these entities must reflect on the complex set of issues involved in establishing the rules. They must consider questions such as:

  • Should ethical standards be incorporated into AI systems?
  • If so, what set of values should they reflect?
  • Who decides which set will be used?
  • How should developers resolve differences between various sets of values?
  • How can regulators and others determine whether the system reflects stated values?

More attention should be paid to considerations related to the data that AI systems use and the potential for bias. For example:

  • Who is collecting the data?
  • What data will be collected and what will not be collected intentionally?
  • Who is labeling the data and what method are they using to do so?
  • How does the cost of collecting data affect what data is used?
  • What systems are used to oversee the process and identify bias?
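One concrete way to oversee the process and surface bias is to compare outcome rates across demographic groups in the labeled data. The sketch below computes a disparate impact ratio (the "four-fifths rule" is a common heuristic threshold); the group names and records are hypothetical:

```python
# Simple bias-audit sketch: compare positive-outcome rates across groups
# in labeled data and flag disparate impact. The "four-fifths rule"
# (ratio below 0.8) is a common heuristic for flagging disparity.
# Group names and records are hypothetical.

def positive_rate(records, group):
    """Fraction of records in `group` with a positive label."""
    group_records = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in group_records) / len(group_records)

def disparate_impact(records, group_a, group_b):
    """Ratio of the lower positive rate to the higher one (0..1)."""
    rate_a = positive_rate(records, group_a)
    rate_b = positive_rate(records, group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

ratio = disparate_impact(data, "A", "B")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: outcome rates differ enough to warrant review")
```

A check like this cannot prove a dataset is fair, but automating it makes bias visible early, before a model trained on the data ships.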

The EU leads

In 2018, the European Union (EU) approved measures that guarantee users of online services some control over their own personal data; the best known is the General Data Protection Regulation (GDPR). The EU is once again at the forefront of ensuring the ethical use of AI, whose algorithms can process very personal information, such as a person's health or financial situation.

Some EU efforts are meeting resistance. According to Brookings, "The European Union's proposed regulation of artificial intelligence (AI), launched on April 21, is a direct challenge to Silicon Valley's common view that the law should sideline emerging technology. The proposal establishes a nuanced regulatory framework that prohibits some uses of AI, heavily regulates high-risk uses, and lightly regulates less risky AI systems."

The regulation includes guidance on managing data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight and robustness, accuracy and security. However, the regulation focuses more on AI systems and less on the companies that develop them. Still, the guidelines are an important step toward creating global AI standards.

Other initiatives

In addition to the EU, many other entities are developing regulations and standards. Here are some examples.

  • IEEE. The Institute of Electrical and Electronics Engineers (IEEE) published a paper titled Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. It addresses issues such as human rights, responsibility, accountability, transparency and minimizing the risks of misuse.
  • OECD. The Organization for Economic Co-operation and Development (OECD) has established principles on AI that focus on benefits for people and respect for the rule of law, human rights, and democratic values. They also embrace transparency, security, and accountability.
  • WEF. The World Economic Forum (WEF) has published a white paper titled AI Governance: A Holistic Approach to Implementing Ethics in AI. Its introduction states: "The aim (of this White Paper) is to outline approaches to determining an AI governance regime that promotes the benefits of AI, whilst also considering the relevant risks arising from the use of AI and of autonomous systems".

In the US, the Department of Defense has adopted a set of ethical principles for the use of AI. They include five main areas, stating that the use of AI must be responsible, equitable, traceable, reliable and governable.

Governments and other entities may also consider alternatives and complements to regulation, such as standards, advisory panels, ethics officers, assessment checklists, education and training, and voluntary self-monitoring.

What about compliance?

Another consideration in this discussion is: “Even if governments and other entities create ethical rules and laws about AI, will companies cooperate?” According to a recent Reworked article, “10 years from now, ethical AI design is unlikely to be widely adopted.” The article goes on to explain that business and political leaders, researchers and activists are concerned that the evolution of AI “continues to focus primarily on optimizing profits and social control.”

However, leaders and others concerned about this issue must continue to define what ethical AI is and create rules and guidelines that help others adopt these principles. While each company's values will determine how fully it embraces these concepts, consumer choice will determine whether companies that ignore them stay afloat.

If you liked this, be sure to check out our other articles on AI.

  • How to create an AI system in 5 steps
  • Hyperautomation
  • The Impact of AI on Software Testing: Challenges and Opportunities
  • Is it really easier to implement AI today?
  • How IoT, AI and Blockchain Lead the Way to a Smarter Energy Sector

Source: BairesDev
