Americans need a bill of rights for an AI-powered world


For decades, data-driven technologies have been transforming the world around us. Vast amounts of information are collected, and artificial intelligence is trained to interpret them: computers that learn to translate languages, facial recognition systems that unlock our smartphones, algorithms that detect cancer in patients. The possibilities are endless.

But these new tools have also led to serious problems. What machines learn depends on many things – including the data used to train them.

Data sets that fail to represent American society can yield virtual assistants that don’t understand Southern accents; facial recognition technology that leads to wrongful, discriminatory arrests; and health care algorithms that discount the severity of kidney disease in African Americans, preventing people from getting kidney transplants.

Training machines on past examples can embed the prejudices of the past and entrench the inequities of the present. Hiring tools that learn the characteristics of a company’s existing workforce may reject applicants who differ from those employees despite being well qualified – for example, female computer programmers. Mortgage approval algorithms that determine creditworthiness can easily learn that certain zip codes correlate with race and poverty, extending decades of housing discrimination into the digital age. AI may recommend medical care for the groups that access hospital services most often rather than for those who need it most. Carelessly training AI on internet conversations can produce “sentiment analysis” that scores the words “Black,” “Jewish” and “gay” as negative.
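To make the zip-code example concrete, here is a minimal, self-contained sketch. Everything in it is invented for illustration – the synthetic data and the toy logistic-regression model trained by plain gradient descent – and it does not describe any real lender’s system. It shows how a model that is never shown a protected attribute can still recover historical bias through a correlated proxy feature:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (never shown to the model).
    group = rng.integers(0, 2, n)

    # Zip code correlates strongly with group membership: a proxy.
    zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

    # Income is the genuinely relevant signal, equal across groups here.
    income = rng.normal(50, 10, n)

    # Historical approvals were biased directly against group 1.
    approved = (income + 10 * (group == 0) + rng.normal(0, 5, n)) > 55

    # Train on features that *exclude* the protected attribute.
    X = np.column_stack([np.ones(n), income / 50, zip_code])
    y = approved.astype(float)

    w = np.zeros(3)
    for _ in range(2000):  # plain batch gradient descent
        p = 1 / (1 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - y) / n

    pred = 1 / (1 + np.exp(-X @ w)) > 0.5
    for g in (0, 1):
        print(f'group {g}: approval rate {pred[group == g].mean():.2f}')

Despite never seeing the protected attribute, the model approves the two groups at very different rates, because the zip-code proxy carries the same information.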

These technologies also raise questions about privacy and transparency. When we ask our smart speaker to play a song, is it also recording what our kids are saying? When a student takes an exam online, should a webcam monitor and track their every move? Do we have the right to know why we were denied a home loan or a job interview?

In addition, there is the problem of intentional misuse of AI. Some dictators use it as a tool of state-sponsored oppression, division and discrimination.

In the United States, some failures of AI may be unintended, but they are serious, and they already affect marginalized individuals and communities disproportionately. They often result from AI developers not using appropriate data sets, not auditing systems comprehensively, and not having diverse perspectives around the table to anticipate and fix problems before products are used (or to kill products that can’t be fixed).
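What a basic algorithmic audit might check can be made concrete. The sketch below is illustrative only – the function names and sample arrays are invented for this example, and real audits involve far more than a single metric – but it shows one common screening test, the “four-fifths rule,” which compares selection rates across groups:

    import numpy as np

    def selection_rates(pred, group):
        # Fraction of positive outcomes for each group.
        return {g: pred[group == g].mean() for g in np.unique(group)}

    def disparate_impact(pred, group):
        # Ratio of the lowest to the highest group selection rate.
        rates = selection_rates(pred, group)
        return min(rates.values()) / max(rates.values())

    # Toy example: flag the system if the ratio falls below 0.8.
    pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    ratio = disparate_impact(pred, group)
    print(f'disparate impact: {ratio:.2f}', 'FLAG' if ratio < 0.8 else 'ok')

A check like this would flag the toy system above, whose two groups are selected at 75 percent and 25 percent respectively.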

In a competitive marketplace, it can seem easier to cut corners. But it is unacceptable to create AI systems that will harm many people, just as it is unacceptable to create pharmaceuticals and other products – cars, children’s toys, medical devices – that will harm many.

Americans have a right to expect better. Powerful technologies should respect our democratic values and abide by the central tenet that everyone should be treated fairly. Codifying these ideas can help ensure that.

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created. Throughout our history, we have had to reinterpret, reaffirm and periodically expand these rights. In the twenty-first century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. Exactly what those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure it is accurate, unbiased and trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community and workplace; and your right to meaningful recourse if the use of an algorithm harms you.

Of course, enumerating these rights is only a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this “bill of rights,” or adopting new laws and regulations to fill the gaps. States might adopt similar practices.


