AI has become a focal point of technology advancement. But what impact is it currently having on bias in data? With events over the last few months changing how people behave, AI systems are struggling to adapt. Let’s take a look at driving powerful AI to be ethical and equitable.

It’s hard to deny that Artificial Intelligence (AI) plays a considerable part in our daily lives. It runs much of what we do. For marketing teams and the people who run campaigns, it has become a genuine game-changer. AI can take data at scale, process it, and then trigger numerous behaviours, from massive email campaigns to simple text messages. All of this happens automatically, with AI effectively taking the hard part of marketing away from those who do it every day.

It is worth remembering that computers and data, at this stage, cannot process information the way humans do.

Source: RBC Disruptors / AI for Good: Battling Bias Before it Becomes Irreversible

AI’s biggest strength is its ability to manage data, and recently this has started to concern some people. In a perfect world, AI would be ‘clean’, objective and free of bias. That is not always the case. When AI makes decisions using data, it sometimes makes biased decisions; computers have no emotions, after all.

To understand how big the problem could become, and its potential impact on marketing, we have to look at what has already happened. Let’s venture back to when AI did more harm than good.

Identifying Bias In Your AI

Since all collected data carries some form of bias, it needs to be examined to determine whether it touches any sensitive variables:

– Are gender and race being impacted?
– Are there changes in location and region?
– What population is being targeted, and is the location being narrowed to local areas?

Once you have located the bias in the data, if there is one, it needs to be picked apart and eliminated.
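As a rough sketch of what such a check might look like in practice, the snippet below computes the rate of a positive outcome per group for a sensitive variable and flags a large gap. The records, field names, and the 0.8 threshold (the common “four-fifths” rule of thumb) are illustrative assumptions, not taken from any real campaign data.

```python
# Sketch: flag a sensitive variable whose groups see very different outcomes.

def selection_rates(records, group_field, outcome_field):
    """Return {group: share of records with a positive outcome}."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_field]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_field] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group rate to the highest; below 0.8 is a warning sign."""
    return min(rates.values()) / max(rates.values())

# Illustrative records: who was offered a loan, split by gender.
records = [
    {"gender": "F", "offered_loan": True},
    {"gender": "F", "offered_loan": False},
    {"gender": "F", "offered_loan": False},
    {"gender": "M", "offered_loan": True},
    {"gender": "M", "offered_loan": True},
    {"gender": "M", "offered_loan": False},
]

rates = selection_rates(records, "gender", "offered_loan")
print(rates)                    # F: one in three, M: two in three
print(disparate_impact(rates))  # 0.5, well below the 0.8 rule of thumb
```

The same check can be repeated for each sensitive variable in turn; any ratio that falls well below 1 is a prompt to investigate, not proof of wrongdoing on its own.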

Bias On The Frontline

Perhaps the most high-profile group when it comes to bias right now is the police. Over the last few months, police departments in the US have been involved in tragic cases where black people have been treated unfairly, sometimes to the point of being killed. These cases are complex, but they come down to the same principle: bias can drive decision-making, and it often does.

Many police departments in the US use predictive policing, employing data built around variables such as gender, net worth, and race. In some cases, this data is then combined with preconceived ideas fed by other data. If a black person is five times more likely to be arrested in a neighbourhood, and always has been, that statistic will influence who the police confront and why. Unfortunately, it is not always for the right reason.

The Black Lives Matter movement has raised awareness of bias in AI, particularly in policing and facial recognition. Getting AI right is difficult, but it has never been more important.

Data is meant to be objective, but data is no longer just data. It’s machine learning; it’s algorithms. This means that information is collected, and then read by a machine. That machine then makes decisions. In the worst possible scenarios, AI is not just crunching numbers; it’s making bad decisions very quickly and very efficiently.

This has a clear resonance for marketing too. Bias always needs to be examined when driving powerful AI to be ethical and equitable.

Marketing And Data

Ever since Google and Facebook made it clear that data drives them, the whole field has exploded. Selling and using ‘big data’ has become a normal part of modern business, and companies use that same big data to market themselves and find new customers.

The story below is a great insight into how AI and bias are being examined in a deeper light.

MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face — because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures.

Source: TED Talks

When it comes to marketing, automated platforms risk building bias into the activity. An algorithm may note that people from a particular area have lower incomes than those on the other side of town. It may then ignore that area when offering loans, for example, because it has decided that these families and households cannot repay them.

Algorithms can do this, and it means that a predominantly white area, for example, with some people who can repay loans among those who cannot, is ignored in the loan allocation process. Or it could be an area of black families. Either way, an injustice has been done.

Some algorithms even have race as a primary variable in their decision-making. These algorithms may judge black people to be low-income and miss them out entirely when considering loan allocations. Again, it happens. And it happens because algorithms process vast amounts of data and are trusted to make critical decisions.
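It is worth noting that simply deleting race from the inputs may not fix this, because another variable can act as a proxy for it. The toy sketch below (all data and field names invented for illustration) shows a “race-blind” rule based only on postcode that still rejects every black applicant, because postcode and race line up in the data.

```python
# Toy illustration: removing "race" from the inputs does not remove bias
# when a proxy such as postcode still separates the same people.

applicants = [
    {"postcode": "A1", "race": "black", "income": 20},
    {"postcode": "A1", "race": "black", "income": 22},
    {"postcode": "B2", "race": "white", "income": 40},
    {"postcode": "B2", "race": "white", "income": 38},
]

# Average income per postcode: the proxy the decision rule actually uses.
by_postcode = {}
for a in applicants:
    by_postcode.setdefault(a["postcode"], []).append(a["income"])
avg_income = {p: sum(v) / len(v) for p, v in by_postcode.items()}

def approve(applicant, cutoff=30):
    """'Race-blind' rule: approve if the applicant's area looks wealthy enough."""
    return avg_income[applicant["postcode"]] >= cutoff

decisions = [(a["race"], approve(a)) for a in applicants]
# In this toy data, every black applicant is rejected and every white
# applicant approved, even though race never appears in the rule.
```

This is why auditing outcomes by group, not just inspecting the input variables, matters.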

Should We Be Making Assumptions?

We need to examine the assumptions being drawn from groups of data. To make sure bias is not present, you need to understand the impact the data could have.

Ask yourself these questions:

– What assumptions are we making about the people affected by the data that has been collected?
– Where is the data being collected from?
– Who is affected most by this data being collected?
– Is enough data being collected to make accurate and impactful assumptions?
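The last question lends itself to a simple automated check. As a hedged sketch (the field name and the minimum of 100 records are illustrative assumptions), the snippet below flags any group too small to support a confident conclusion:

```python
# Sketch: before trusting any per-group conclusion, check that each group
# actually has enough records behind it.
from collections import Counter

def undersampled_groups(records, group_field, minimum=100):
    """Return {group: count} for groups with fewer than `minimum` records."""
    counts = Counter(r[group_field] for r in records)
    return {g: n for g, n in counts.items() if n < minimum}

# Illustrative data: 300 records from one region, only 12 from another.
records = [{"region": "north"}] * 300 + [{"region": "south"}] * 12
print(undersampled_groups(records, "region"))  # {'south': 12}
```

Any assumption drawn about the under-represented group here rests on 12 records, which is a prompt to collect more data before acting on it.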

To determine whether you are making assumptions based on data such as race, gender and location, you need to be aware of what is being collected. Unchecked assumptions can lead to negative results when driving powerful AI to be ethical and equitable.

How To Avoid The Issue

It may be fair to say that, with data, it is impossible to avoid bias entirely. However, there is plenty you can do to improve the situation and protect against your use of data becoming problematic.

Fairness in AI and data should sit on a level scale; biased data leaves one side heavier than the other. Make sure that your data, including any synthetic data, is on a level playing field.

First of all, accept that your data will naturally contain some bias. It has been collected to identify specific people and interests, after all, so you will have segments that focus on particular variables. It’s as simple as that.

Once you accept this, ensure the data is carefully monitored so that attributes such as race and gender (among others) are not affected negatively. Natural bias may remain, but only where it does no harm.
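One way to make that monitoring concrete is a recurring check that compares a campaign metric across groups and reports any pair that drifts too far apart. The group names, numbers, and the 10-point tolerance below are illustrative assumptions, not a prescribed standard:

```python
# Sketch of ongoing monitoring: compare a campaign metric across groups
# and report every pair whose gap exceeds a chosen tolerance.

def audit_metric(metric_by_group, tolerance):
    """Return [(group_a, group_b, gap)] for pairs whose gap exceeds tolerance."""
    alerts = []
    groups = sorted(metric_by_group)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            gap = abs(metric_by_group[a] - metric_by_group[b])
            if gap > tolerance:
                alerts.append((a, b, gap))
    return alerts

# Share of each group receiving the campaign offer (illustrative numbers).
offer_rate = {"region_north": 0.62, "region_south": 0.41, "region_east": 0.58}
alerts = audit_metric(offer_rate, tolerance=0.10)
print(alerts)  # both flagged pairs involve region_south
```

Run on a schedule, a check like this turns “carefully monitored” from a slogan into a routine, and the flagged gaps become the starting point for a human review.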

Dive Into Engagement

As you reduce negative bias on the way towards eliminating it, begin to engage with your audience more effectively and more openly. Seeking feedback from your audience has a hugely positive impact. Dealing with issues before they arise, and helping your audience feel comfortable, is ten times more effective than putting out fires.

Want To Learn More About AI?

If you are interested in learning more about AI and its effects on marketing and email, have a look at our blog post ‘Alexa, Read My Emails’.


Moving forward, invest in training and education for all members of all teams. People need to know that bias is present and that it can be increased over time if people do nothing.

Educating all team members to spot data that may be affected by bias makes the future more promising. Reducing bias is possible; deleting it may not be. But your data will be far more inclusive if education is a cornerstone of your strategy. Education is key to driving powerful AI to be ethical and equitable.

Book An Audit Today!

Finding the best solution is vital to developing your Salesforce apps further. Gravitai are here to help, with all the support needed to make your Salesforce apps flourish!