The National Security Commission on AI, headed by tech leaders including Eric Schmidt, Andy Jassy, Eric Horvitz, Katharina McFarland, and Robert Work, released its massive report on accelerating innovation while defending against malign uses of AI. Transparency surrounding the creation and training of AI models has come into doubt, especially with the rise of digital sweatshops. The term highlights how tech companies often outsource the training of AI to laborers in other countries, underpaying them and applying other exploitative practices. These practices can fly under the radar as issues like data privacy steal the spotlight, revealing that the scope of ethical AI must expand. The challenge that AI ethics managers faced was figuring out how best to achieve “ethical AI.” They looked first to AI ethics principles, particularly those rooted in bioethics or human rights, but found them insufficient.
HireVue, a hiring platform that uses AI for pre-hire assessments and customer engagement, has created an AI explainability statement that it shares publicly. When a company decides to proceed with using AI in its business model, the next step should be to articulate the organization’s values and rules around how AI will be used. Companies should consider how the use of AI will affect the people who use the product or engage with the technology, and aim to use AI only in ways that benefit people’s lives. The process of creating and training AI models consumes large amounts of natural resources, pollutes the soil, and leaves behind a hefty carbon footprint. Tech can play an essential role in supporting sustainability initiatives and achieving carbon neutrality, but the industry has a long way to go in this area. This is likely to be disappointing news for organizations looking for unambiguous guidance that avoids gray areas, and for consumers hoping for clear and protective standards.
In the United States, individual states are developing policies, such as the California Consumer Privacy Act (CCPA), which require businesses to inform consumers about the collection of their data. This recent legislation has forced companies to rethink how they store and use personally identifiable information (PII). As a result, investments in security have become an increasing priority for businesses as they seek to eliminate vulnerabilities and opportunities for surveillance, hacking, and cyberattacks. As part of that change, ethical reviews for Google’s most advanced AI models, such as the recently released Gemini, fall not to RESIN but to Google DeepMind’s Responsibility and Safety Council, according to a technical paper published last month. RESIN has already left a mark on Google’s generative AI products, a company report says, such as by triggering a decision to limit Bard from using personal pronouns to try to avoid users treating it like a human.
Interestingly, Floridi uses the backpropagation method known from deep learning to describe the way in which responsibilities can be assigned, except that here backpropagation is applied to networks of distributed responsibility (a toy sketch of this idea follows below). When people act in groups, actions that at first glance appear morally neutral can nevertheless have consequences or impacts, intended or unintended, that are morally wrong. Although there is significant overlap between the main debates in the field of AI ethics and the principles most commonly mentioned in AI ethics guidelines, these guidelines do not cover topics like machine ethics, the moral status of AI systems, or the technological singularity. Moreover, while there is wide agreement on which ethical principles should be reflected in the development and use of AI, views on these other moral questions are much more varied. All of these topics address, each in their own way, the question of how we should relate to AI and exercise control over it. AI has the potential to become an unprecedentedly powerful technology, owing to its intelligence, its ability to function autonomously, and society’s widespread reliance on technology.
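To make the backpropagation analogy concrete, the sketch below treats a distributed action as a small graph of contributing agents and propagates responsibility for an outcome backward in proportion to each agent’s contribution, much as backpropagation spreads an error signal. This is only an illustrative toy, not Floridi’s own formalism; the graph structure, agent names, and contribution weights are hypothetical.

```python
# Illustrative sketch of "backpropagating" responsibility through a network
# of contributing agents. Names and weights are hypothetical.

def backpropagate_responsibility(outcome, signal=1.0):
    """Assign each agent a responsibility share proportional to the influence
    its action had on the outcome, propagated backward through the network
    the way backpropagation spreads a gradient."""
    shares = {}

    def propagate(contributors, downstream_signal):
        total_weight = sum(c["weight"] for c in contributors)
        for c in contributors:
            # Each upstream agent inherits a slice of the downstream signal,
            # analogous to the chain rule distributing a gradient backward.
            share = downstream_signal * c["weight"] / total_weight
            shares[c["name"]] = shares.get(c["name"], 0.0) + share
            propagate(c.get("contributors", []), share)

    propagate(outcome["contributors"], signal)
    return shares


# Hypothetical example: a harmful automated decision produced jointly by a
# deploying company, a model vendor, and two data-labelling subcontractors.
outcome = {
    "name": "harmful decision",
    "contributors": [
        {"name": "deploying company", "weight": 0.5},
        {"name": "model vendor", "weight": 0.5, "contributors": [
            {"name": "labelling subcontractor A", "weight": 0.7},
            {"name": "labelling subcontractor B", "weight": 0.3},
        ]},
    ],
}

print(backpropagate_responsibility(outcome))
# {'deploying company': 0.5, 'model vendor': 0.5,
#  'labelling subcontractor A': 0.35, 'labelling subcontractor B': 0.15}
```

In this toy reading, an agent’s share reflects the influence flowing through it, so an intermediary (the vendor) and the parties upstream of it (the subcontractors) can each carry responsibility for the same downstream effect.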
Paula Boddington’s book on ethical guidelines (2017), funded by the Future of Life Institute, was also not considered, as it merely repeats the Asilomar principles (2017). People who work in ethics and policy may tend to overestimate the impact and threats of a new technology, and to underestimate how far current regulation already reaches (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and engage in some “ethics washing” in order to preserve a good public image and continue as before.
Two prominent examples of AI-related regulation are the “Right to Explanation” clause in the EU General Data Protection Regulation (GDPR) and the relevant portions of the California Consumer Privacy Act. At a more local level, cities in the US are making decisions about the use of algorithms, particularly those used in law enforcement. One of the largest legislative efforts in AI is the upcoming AI Act in the European Union. As mentioned earlier with the Lensa AI example, AI relies on data pulled from internet searches, social media photos and comments, online purchases, and more.
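To give a rough sense of what a “right to explanation” can mean in practice, the hedged sketch below pairs an automated decision from a toy linear scoring model with a per-feature breakdown of how each input moved the score. It is a generic illustration, not a description of how any particular company or regulator implements the requirement; the feature names, weights, and threshold are hypothetical.

```python
# Hypothetical sketch: attaching a human-readable explanation to an automated
# decision, in the spirit of a "right to explanation". All values are made up.

WEIGHTS = {"income_ratio": 2.0, "late_payments": -1.5, "account_age_years": 0.3}
BIAS = -0.5
APPROVAL_THRESHOLD = 0.0

def decide_and_explain(applicant):
    """Return an approve/deny decision plus each feature's contribution to the
    score, so the decision can be explained to the person it affects."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "denied"
    # Order features by how strongly they influenced the decision.
    explanation = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, score, explanation


applicant = {"income_ratio": 0.4, "late_payments": 2, "account_age_years": 3}
decision, score, explanation = decide_and_explain(applicant)
print(f"Decision: {decision} (score {score:.2f})")
for feature, contribution in explanation:
    direction = "raised" if contribution > 0 else "lowered"
    print(f"- {feature} {direction} the score by {abs(contribution):.2f}")
```

For models more complex than a linear score, a comparable breakdown would typically require a dedicated attribution method, which is part of why explainability obligations are harder to meet for opaque systems.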