AI Is Everywhere—Can Standards Catch Up?

IEEE Computer Society Team
Published 02/03/2025

AI systems and applications are being unleashed across sectors without formalized accountability, impact assessment, or regulatory oversight of key ethical issues. As the authors of a 2024 Computer article point out, this makes voluntary standards and independent scrutiny all the more imperative.

The article, “Artificial Intelligence For the Benefit of Everyone,” reports on a 2023–2024 review of the IEEE Standards Association’s specific, measurable, achievable, relevant, and time-bound (SMART) criteria to evaluate the ethics of AI systems in four key areas: accountability, algorithmic bias, transparency, and privacy.

The review was motivated by the unprecedented growth and evolution of AI since the SMART criteria were developed in 2018–2021. Following is a brief overview of the review's findings in the four key SMART areas.

Accountability


The accountability area is aimed at keeping humans in the loop and on the hook for AI systems and their decisions, actions, errors, and outcomes.

Over the past three years, accountability has been the subject of national and sector-specific regulatory actions, from the European Union’s Artificial Intelligence Act to the U.S. Blueprint for an AI Bill of Rights.

As the article’s authors note, the regulatory discourse and stakeholder positions on accountability provide a clearer picture of societal and legal expectations, including the purpose of AI use in society. They also highlight the need for

  • defining high-risk use environments, risk mitigation, and prevention measures;
  • determining how to allocate responsibility to different actors in the AI lifecycle—including developers, deployers, and end users;
  • greater emphasis on safety, health, and human rights; and
  • integration of sustainability considerations throughout the AI lifecycle.

Algorithmic Bias


Ethical and unethical algorithmic biases play a crucial role in AI applications.

  • Ethical biases help applications serve the user’s goal. Search engine results, for example, are biased based on the user’s query, which helps the application deliver useful results.
  • Unethical biases produce results that are unfair and harmful to users and society. Examples here include facial recognition systems that routinely misidentify people of color and hiring algorithms that favor male candidates.

As the authors note, the distinctions between ethical and unethical biases “hinge on the bias’s purpose, context, and impact on stakeholders” and thus make human oversight essential.

The SMART criteria updates and notes reflect these issues and include

  • developing and using mechanisms to ensure that AI system biases are beneficial;
  • ensuring that goals, inhibitors, and foundational ethical requirements cover the full spectrum of the AI system life cycle, from development through decommissioning; and
  • expanding the goals for governance, human oversight, and risk and knowledge management throughout the system life cycle.

Transparency


Ethical transparency is essential to ensuring accountability and to identifying and addressing biases. This transparency entails a visible decision-making process for AI systems that is understandable to users and fosters trust.

The SMART criteria updates and notes reflect these issues and include

  • emphasizing the importance of human judgment in determining appropriate transparency levels;
  • recognizing that transparency issues can arise throughout an AI system’s lifecycle; and
  • enhancing data oversight, knowledge governance, and risk management.

Privacy


The existing SMART privacy criteria focus on established legal concepts, including the rights to confidentiality and to data privacy, protection, and security. They also emphasize the context and culture in which an AI system is used.

The SMART criteria review took a more holistic view of ethical privacy, understanding its intrinsic link to an individual’s self-expression, personhood, ethics, values, and personal safety and security.

The SMART criteria updates and notes reflect these issues and include

  • acknowledging the complex interplay between technological innovation and the diverse, often deeply personal, aspects of privacy;
  • recognizing that privacy issues cover the entire AI system lifecycle, including points such as after AI modifications, after decommissioning a service, at the end of a contractual relationship, and following a person’s death; and
  • recognizing that privacy protections gain depth when considered in conjunction with ethical transparency, accountability, and algorithmic bias.

Dig Deeper


As the authors of this important AI ethics update note, if the polarization between AI accelerationists and decelerationists is any measure, the harms and benefits of AI systems are neither fully mapped nor likely to be shared equitably between the developers and service providers of these rapidly growing and evolving AI products and the societies around the globe that consume them.

“Artificial Intelligence For the Benefit of Everyone” discusses this and other ethics-related issues in depth; it also provides details on each of the four key workstream areas.

To dig even deeper, join other AI experts, researchers, government officials, and enthusiasts at the international IEEE Conference on Artificial Intelligence (IEEE CAI), 5–7 May 2025 in Santa Clara, California.

In addition to showcasing the latest AI research and breakthroughs, IEEE CAI emphasizes applications and key subject areas, from sustainability and human-centered AI to industry-specific issues and applications in healthcare, transportation, and engineering and manufacturing.