AI's Baked-In Bias: What to Watch Out For
Law360
Following the Biden administration's Oct. 30 executive order on artificial intelligence (AI), Jonathon A. Talcott and Jonathan P. Hummel wrote an Expert Analysis piece for Law360 examining the risk of baked-in bias in AI systems.
"One of the key drivers behind federal action is the recognition that AI systems are susceptible to various forms of bias," they wrote. "At a very high level, an AI system includes data, training protocols, one or more models (e.g., a large language model or an image classification model), an inference process, and a monitoring and feedback process. This is of course oversimplified, but it’s important to note that each of these components—and others not explicitly listed—are potential sources for bias. For example, AI systems are vulnerable to bias stemming from data, algorithms and human training."
Messrs. Talcott and Hummel are members of Ballard Spahr's Intellectual Property Department and its Artificial Intelligence Initiative.
Read the full article here.