Google says it will address AI, machine learning model bias with technology called TCAV


Google CEO Sundar Pichai said the company is working to make its artificial intelligence and machine learning models more transparent as a way to defend against bias.

Pichai outlined a bevy of artificial intelligence enhancements and moves to put more machine learning models on devices, but the bigger takeaway for developers and data scientists may be something called TCAV, short for Testing with Concept Activation Vectors. In a nutshell, TCAV is an interpretability method for understanding which signals your neural network models use for prediction.
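To make that concrete, here is a minimal sketch of the core idea from the TCAV research (Kim et al., 2018), not Google's production code: a concept activation vector (CAV) is the normal to a linear boundary separating a concept's examples from random examples in a layer's activation space. Function names and shapes here are illustrative.

```python
# Minimal sketch of learning a Concept Activation Vector (CAV).
# Assumes you can already extract a layer's activations for a set of
# concept images (e.g. "striped") and a set of random counterexamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear classifier in activation space; its normal is the CAV.

    concept_acts: activations for concept images, shape (n_concept, n_units)
    random_acts:  activations for random images, shape (n_random, n_units)
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)  # unit vector pointing toward the concept
```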

In theory, TCAV's ability to surface those signals could expose bias, because it would highlight whether, say, gender was acting as a signal and flag similar issues around race, income and location. Using TCAV, computer scientists can quantify how heavily a model leans on a given high-level concept.
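As a rough illustration of how that quantification works in the published method (again, a sketch, not Google's implementation): the TCAV score for a concept and a class is the fraction of that class's examples whose prediction would increase if their activations moved slightly toward the concept.

```python
# Sketch of a TCAV score: the share of examples whose class score is
# pushed up by a small step along the CAV (a positive directional
# derivative). `grads` holds d(class_score)/d(activations) per example.
import numpy as np

def tcav_score(grads: np.ndarray, cav: np.ndarray) -> float:
    """grads: shape (n_examples, n_units); cav: shape (n_units,)."""
    directional_derivs = grads @ cav
    return float(np.mean(directional_derivs > 0))
```

In a hypothetical bias audit, a score near 1.0 for a concept like "male" on a hiring-related class would be a red flag, while scores hovering near chance suggest the concept is irrelevant to the model.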

Bias is a critical concern in AI, and some academics have called for more self-governance and regulation. Industry players such as IBM have pushed for more transparency, including a software layer that monitors how algorithms interact and where bias emerges. Meanwhile, enterprises are striving for explainable AI. For Google, transparency matters because of technologies such as Duplex and the next-gen Google Assistant, which are increasingly able to carry out tasks for you. Transparency of the models can mean more trust in, and usage of, Google's technology.

Bottom line: Transparency and defending against bias will be critical for enterprises, as well as for the cloud providers that will deliver most of our models as services.

TCAV, which doesn't require models to be retrained, is an effort to dissect trained models and illustrate why they make a given decision. For instance, a model that identifies a zebra may be leaning on high-level concepts such as stripes. Here's an illustration, followed by a sketch of the gradient computation involved.

[Image: google-ai-zebra-model.png — TCAV's zebra example]
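The "no retraining" point is what makes this practical: everything above needs only a forward pass and gradients from the already-trained model. Below is a hypothetical sketch of extracting those per-example gradients from a frozen Keras model; the layer name and class index are placeholders, and `model.output` may be a logit or a probability depending on the model's head.

```python
# Hypothetical sketch: per-example gradients of a class score with respect
# to a hidden layer of a frozen Keras model -- no retraining involved.
import tensorflow as tf

def layer_gradients(model, layer_name, images, class_idx):
    """Returns d(score[class_idx]) / d(layer activations), flattened per example."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        acts, preds = grad_model(images)
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, acts)                 # same shape as acts
    return tf.reshape(grads, (grads.shape[0], -1)).numpy()  # (n, n_units)
```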

“Building a more helpful Google for everyone means addressing bias. You need to know how a model works and how there may be bias. We will improve transparency,” said Pichai.

He added that Google's AI team is working on TCAV, a technique that explains a model's predictions in terms of higher-level concepts. TCAV's goal is to illustrate the variables that underpin a model's output.

“There’s a lot more to do, but we are committed to building AI in a way that works for everyone,” said Pichai.


By shrinking models so they can reside on a device, Google is working to lower latency, and it is using techniques such as federated learning to collect less data and enhance user privacy.
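For context, federated learning trains a shared model across many devices without centralizing their data. Here is a minimal, equal-weight sketch of the averaging step (the real algorithm weights each device's update by its dataset size, and `local_train` is a placeholder for on-device training):

```python
# Illustrative sketch of one round of federated averaging: each device
# fine-tunes the global model on its own local data, and the server only
# ever sees the resulting weights, never the raw user data.
# Weights are represented as a single flat vector for simplicity.
import numpy as np

def federated_round(global_weights, device_datasets, local_train):
    """local_train(weights, dataset) -> updated weights, run on each device."""
    local_weights = [local_train(global_weights, ds) for ds in device_datasets]
    return np.mean(local_weights, axis=0)  # server-side averaging
```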


Source: https://www.zdnet.com/article/google-says-it-will-address-ai-machine-learning-model-bias-with-technology-called-tcav/#ftag=RSSbaffb68
