IN MY OPINION

ETHICAL AI WITH SYNTHETIC DATA
By Sachin Tonk, Director, Data, Advanced Analytics and Privacy at Standard Chartered Bank

Recent advancements in Big Data have encouraged enterprises to invest heavily in data strategies and in operationalizing ethical AI principles, policies, and frameworks. Netflix, for example, is reported to save $1 billion annually by using Big Data. The scale of investment organizations are willing to make in the data domain reflects its status as the new currency of the digital age. However, real data is often biased: it contains unexpected or unwanted correlations among different data features. This is because collecting real data is an uncontrolled process that naturally encodes the biases present in the real world. The problem is that biased data can have serious implications for enterprises.

To illustrate, consider fraud detection. Enterprises use fraud detection solutions that rely heavily on transaction patterns, geo-location, and demographic data to identify unusual behaviour or suspicious activity. This can introduce unintended, and often unknown, data biases, such as a particular demographic group being flagged for no apparent reason. How the AI/ML models, their testing, and the data handling process can be made fair and ethical, ensuring that the dataset carries no bias against a particular group of individuals while still protecting against fraud, remains an open question (one simple check is sketched below).

Biased data leads to inaccurate AI/ML models being trained and deployed, which can perpetuate discrimination. Regulations around the world are becoming stricter on AI, especially on systems that could harm the fundamental rights of citizens, such as the right to non-discrimination. In April 2021, the European Commission (EC) proposed a brand-new regulatory framework to protect EU citizens from the risks of AI, including its lack of transparency. This has widespread implications for businesses globally. Concretely, it affects foreign companies selling AI-based services in the EU, as well as providers of AI systems located in a non-EU country whose infrastructure, such as cloud services, is hosted in the EU. Non-compliance can result in heavy penalties of up to 6 percent of a company's annual revenue.
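One way to make the fairness question above concrete is to test a model's output directly. The Python sketch that follows compares how often a fraud model flags transactions from each demographic group; the column names, group labels, and the 1.25 threshold are illustrative assumptions rather than details of any real bank's pipeline.

# Minimal sketch of a fairness check for a fraud model: compare how often each
# demographic group gets flagged. Column names, group labels, and the 1.25
# threshold are illustrative assumptions.
import pandas as pd

# Hypothetical model output: one row per transaction.
transactions = pd.DataFrame({
    "demographic_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged_as_fraud":  [0,   0,   1,   0,   1,   1,   0,   1],
})

# Flag rate per group.
flag_rates = transactions.groupby("demographic_group")["flagged_as_fraud"].mean()
print(flag_rates)

# Ratio of each group's flag rate to the lowest flag rate. Being flagged is an
# adverse outcome, so a ratio well above 1 means that group is flagged
# disproportionately often and the data or model should be reviewed.
ratios = flag_rates / flag_rates.min()
for group, ratio in ratios.items():
    if ratio > 1.25:
        print(f"Potential bias: group {group} is flagged {ratio:.1f}x more often")

A check like this only catches one narrow kind of bias, namely unequal flag rates across a single attribute; it says nothing about why the disparity arises or whether it is justified by genuine fraud patterns, which is exactly where careful data handling and synthetic data can help.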