Is Artificial Intelligence Trouble-Free?

There are many questions about how AI could be both a life-changing and a threatening force, and about what happens if it goes into the hands of some malicious minds. Remember those movies where the hero is always protecting a piece of sci-fi tech from those who want to use it for destruction? That could become a real-life scenario sooner than expected.

“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.” – Colin Angle

Artificial Intelligence is a collection of technologies that enable machines to sense, comprehend, act, and learn, either on their own or by augmenting human activities. Human beings possess five basic senses, namely sight, hearing, taste, smell, and touch. As of now, machines can do sight and hearing/talking very well. In other words, machines can now recognize objects, navigate (a car), collaborate (translate languages), and analyze data to recognize patterns (detecting crop diseases, for example). AI is thus accelerating the automation of the remaining manual processes, which will make those processes faster and more accurate.

AI held promise on two counts: it is supposed to free us from life's mundane routines and to offer slivers of improvement over what humans can do. Let's see whether this holds true, going by recent happenings:

Self-Driving Cars

In their study "Predictive Inequity in Object Detection", Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern looked at a simple question: how accurate are state-of-the-art object-detection models? For this purpose, they examined a large dataset of images containing pedestrians. They divided up the people using the Fitzpatrick scale, a system for classifying human skin tones from light to dark. Then the researchers analyzed how often the models correctly detected the presence of people in the light-skinned group versus how often they got it right with people in the dark-skinned group. And the result?

Detection was five percentage points less accurate, on average, for the dark-skinned group, and the disparity persisted even when the researchers controlled for variables like the time of day in the images or the occasionally obstructed view of pedestrians. In other words, autonomous-vehicle programmers may be unintentionally building their biases into the algorithms they create, even selection biases they don't recognize. As a Vox article rightly suggested, since algorithmic systems "learn" from the examples they're fed, if they don't get enough examples of, say, black women during the learning stage, they'll have a harder time recognizing them when deployed.
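
To make the study's methodology concrete, here is a minimal sketch of how such a per-group accuracy audit can be run. The dataset layout, the detector interface, and the group labels ("LS"/"DS") are illustrative assumptions, not the authors' actual code:

```python
# Illustrative sketch of a per-group detection audit, in the spirit of
# "Predictive Inequity in Object Detection". Dataset layout, detector
# interface, and group labels are assumptions, not the authors' code.
from collections import defaultdict

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_rates(examples, detector, iou_threshold=0.5):
    """Fraction of annotated pedestrians the detector finds, per skin-tone group."""
    found, total = defaultdict(int), defaultdict(int)
    for image, annotations in examples:        # annotations: list of dicts
        predicted_boxes = detector(image)      # list of (x1, y1, x2, y2)
        for ann in annotations:
            group = ann["group"]               # e.g. "LS" (light) or "DS" (dark)
            total[group] += 1
            if any(iou(ann["box"], p) >= iou_threshold for p in predicted_boxes):
                found[group] += 1
    return {g: found[g] / total[g] for g in total}

# rates = detection_rates(pedestrian_dataset, model)
# print(rates["LS"] - rates["DS"])  # the study reported roughly a 5-point gap
```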

So in addition to worrying about how safe self-driving cars are, how they would handle tricky moral trade-offs on the road, and how they might make traffic worse, we now also need to worry about how they could harm people of color. According to this study, if you are a person with dark skin, you may be more likely than your white friends to get hit by a self-driving car. Fine, one might say: it is an early study and not yet peer-reviewed. So let's look at another example:

Amazon Experience

Amazon, the Seattle-based company, created a hiring algorithm to filter through hundreds of resumes and surface the best candidates. Employees had programmed the tool in 2014 using resumes submitted to Amazon over a 10-year period, the majority of which came from male candidates. Based on that information, the tool assumed male candidates were preferable and downgraded resumes from women. In addition to the gender bias, the tool also failed to suggest strong candidates, and the company later decided to scrap the project. All this is not to say that self-driving cars are racist, or that automated hiring systems are sexist. Instead, let's consider this a god-given opportunity to assess some of the real challenges with using AI that we thought would solve themselves. A simple audit of the kind sketched below can expose such a disparity.
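
As a hedged illustration (not Amazon's code), one way such a gender disparity could be surfaced is to compare the tool's selection rates across groups. The data, labels, and the four-fifths-rule threshold (a common heuristic from US employment guidelines) are assumptions here:

```python
# Illustrative fairness audit of a screening tool's shortlist decisions.
# All data and labels here are hypothetical, not Amazon's.

def selection_rate(decisions, groups, group):
    """Share of applicants in `group` that the tool shortlisted (decision == 1)."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

def disparate_impact(decisions, groups, protected="F", reference="M"):
    """Protected group's selection rate divided by the reference group's."""
    return (selection_rate(decisions, groups, protected) /
            selection_rate(decisions, groups, reference))

# decisions = [1, 0, 1, 0, 0, 1]      # 1 = shortlisted by the tool
# groups    = ["F", "F", "M", "F", "M", "M"]
# if disparate_impact(decisions, groups) < 0.8:   # the "four-fifths rule"
#     print("Possible adverse impact against female applicants")
```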

Takeaways

Algorithms can reflect the biases of their creators. The insights from the two cases above add to a growing body of evidence about how human bias seeps into our automated decision-making systems, a phenomenon called algorithmic bias. For instance, Google's image-recognition system labeled African Americans as "gorillas" in 2015, and three years later Amazon's Rekognition system drew criticism for matching 28 members of Congress to criminal mugshots. Again, because algorithmic systems "learn" from the examples they're fed, too few examples of a particular group during the learning stage means a harder time recognizing that group when deployed.

What can be done?

Weighting an under-represented sample more heavily in the training data can help correct the bias, i.e., effectively including more dark-skinned examples in the first place (see the sketch below). For the broader problem of algorithmic bias, there are a few commonly proposed solutions. One is to make sure the teams developing new technologies are racially diverse. If all team members are white, male, or both, it may not occur to them to check how their algorithm handles an image of a black woman; if there is a black woman in the room, it probably will occur to her. Another solution is to mandate that companies test their algorithms for bias and demonstrate that they meet certain fairness standards before they can be rolled out.
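
As a minimal sketch of the reweighting idea, assuming a scikit-learn workflow and placeholder data, samples from under-represented groups can be given inverse-frequency weights during training:

```python
# Minimal sketch of reweighting under-represented groups during training.
# X, y, and group_labels are placeholders; the inverse-frequency scheme is
# one simple choice, not the only way to rebalance.
import numpy as np
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(group_labels):
    """Weight each sample by the inverse of its group's frequency."""
    groups, counts = np.unique(group_labels, return_counts=True)
    freq = dict(zip(groups, counts / len(group_labels)))
    return np.array([1.0 / freq[g] for g in group_labels])

# X, y         : training features and targets
# group_labels : e.g. the Fitzpatrick group of each training image
# weights = inverse_frequency_weights(group_labels)
# model = LogisticRegression().fit(X, y, sample_weight=weights)
```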

Conclusion

Anja Kaspersen, Head of International Security at the World Economic Forum, pointing to a survey of AI researchers by TechEmergence (via Medium), notes that AI poses an array of security concerns which could be curbed by the timely implementation of norms and protocols. This brings us back to the question raised at the start: what happens when AI goes into the hands of some malicious minds? The movie scenario of heroes protecting sci-fi tech from those who would use it for destruction could become a real-life one sooner than expected.

Talking about the dark side of the deep web, the report points out that destructive tools like 3D-printed weapons are already up for sale. Another scenario highlighted in the report asks you to imagine a gun combined with a quadcopter drone, a high-resolution camera, and a facial-recognition algorithm that can detect specific faces, that is, targets, and assassinate them as it flies across the skies.

Dr. A Jagan Mohan Reddy
Professor (HR), Visiting Faculty