Organizations that develop and deploy artificial intelligence systems affect many facets of our lives, from our cognitive biases to our security. That makes it all the more crucial for deep-tech start-ups to use these tools carefully.

Amazon stopped using its AI hiring tool in 2018 after it was discovered to be biased against women. What made the artificial intelligence (AI) tool penalize female candidates? When the engineers looked into it, they discovered that the AI had been trained on data from a period when men dominated the tech industry. In the process, the AI “learned” that male candidates were preferable. Its machine learning (ML) model had also learned to penalize resumes containing the word “women’s”, as in “women’s chess club”. For that reason, it recommended almost exclusively male candidates.
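
To see the mechanism, consider a minimal sketch (not Amazon’s actual system) of how a model trained on historically skewed data absorbs such a bias: with made-up resumes in which the token “women’s” happens to co-occur with past rejections, a classifier learns a negative weight for that token even though it carries no real signal about skill.

```python
# A minimal sketch (not Amazon's actual system) of how a model trained on
# historically skewed data can learn to penalize a token like "women's".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: past hiring outcomes that under-selected
# resumes mentioning "women's" activities.
resumes = [
    "chess club captain python developer",
    "women's chess club captain python developer",
    "java engineer robotics team",
    "women's robotics team java engineer",
] * 25
labels = [1, 0, 1, 0] * 25  # 1 = hired in the historical data

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" is strongly negative: the model
# has absorbed the historical bias, not any fact about candidate skill.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```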

Amazon ceased using the tool once the problem was discovered, and the episode has since become a cautionary example of how not to deploy AI systems. Five years on, the incident remains relevant. According to Accenture’s 2022 Tech Vision research report, for instance, only 35% of people worldwide trust how AI is applied across businesses, and roughly 77% think businesses must be held accountable for any misuse of AI.

The responsible use of AI raises significant concerns within the Indian start-up ecosystem too; big IT companies, after all, have far more experience working with new and potentially dangerous technology than young start-ups do.

India has the third-largest start-up ecosystem in the world, after the US and China. According to estimates by Nasscom and consulting firm Zinnov, India had roughly 3,000 deep-tech start-ups by 2021, the majority of them (about 1,900) working in artificial intelligence.

But do tech start-ups understand the responsibility that comes with implementing AI/ML? Experts advise that Responsible AI (RAI) values be ingrained from the start. “Early-stage start-ups sometimes don’t necessarily pay mind to compliance since they have so much to accomplish,” says Srinivas Rao Mahankali, CEO of start-up innovation hub T-Hub, adding that any such omission may come back to bite them later.

An investor who has backed numerous start-ups says, on condition of anonymity, that discussions about RAI simply don’t happen in the early stages. Founders, the investor says, are often aware of the dangers of such technology being misused but are ill-prepared to prevent it. Stakeholders also point out that although awareness of these issues exists at the executive level, it must filter down the corporate pyramid to prevent negative impacts in real-world situations. But where does one begin?

The effect of intelligence

Let’s start with the fundamentals: what is RAI? According to professional services company Accenture, it is the practice of designing and deploying AI with good intentions, to empower people and organizations and impact customers equitably, and of scaling AI with confidence and trust.

RAI, also known as trustworthy AI or ethical AI, has many applications. In India, it is used in areas such as retail preference prediction, inventory management, cybersecurity, healthcare and banking.

“In healthcare, AI is used to uncover complex health trends and to offer insights for medical diagnosis. In these situations, there are three main issues to be aware of: safety, bias, and explainability (the idea that an AI/ML model and its output can be presented in a way that ‘makes sense’ to a human being). In banking, AI is used to look for fraud and analyze customer behavior, which raises fairness and privacy concerns as well,” says TCS Research chief scientist Sachin Lodha.
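
Explainability can be surprisingly simple to demonstrate. Here is a minimal sketch, assuming a hypothetical loan-approval model: for a linear model, each feature’s contribution to a single prediction is just its coefficient times the feature value, which can be shown directly to a human reviewer.

```python
# A minimal sketch of explainability for a hypothetical loan-approval model:
# with a linear model, per-feature contributions to one prediction are
# simply coefficient * feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical features
X = rng.normal(size=(500, 3))
# Synthetic labels: approval driven mostly by income and debt ratio.
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # per-feature contribution to the score
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")
print("intercept:", round(model.intercept_[0], 2))
```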

These use cases show how RAI affects not just businesses but society at large, particularly end consumers. AI can alter resource allocation and policy decisions, among other things, directly or indirectly. It is therefore essential to evaluate how such systems can be made more transparent, equitable, accurate, risk-free and bias-free.

Some straightforward steps businesses can take to build RAI include creating prototypes and testing them on parameters such as behavioral patterns, fairness and explainability. RAI, in its simplest form, is the process of building systems that examine what has occurred, recognize potential problems such as biased patterns, attempt to remove them, and then take proactive measures to address them (see the sketch below). Start-ups must, however, exercise caution when using data in their models.
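
One such proactive measure, sketched here under the assumption of a hypothetical tabular data set, is to reweight training samples so that each combination of group and outcome carries equal total weight, a simplified version of the well-known reweighing technique for bias mitigation.

```python
# A minimal sketch of one proactive mitigation, assuming a hypothetical
# tabular data set: reweight samples so each (group, outcome) cell carries
# equal total weight, then pass the weights to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
group = rng.choice(["A", "B"], size=200, p=[0.8, 0.2])  # skewed representation
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Weight each sample inversely to the frequency of its (group, outcome) cell
# so that no cell dominates training.
weights = np.ones(200)
for g in ("A", "B"):
    for label in (0, 1):
        mask = (group == g) & (y == label)
        if mask.sum():
            weights[mask] = len(y) / (4 * mask.sum())  # 4 (group, outcome) cells

model = LogisticRegression().fit(X, y, sample_weight=weights)
print("per-cell total weights:",
      [round(weights[(group == g) & (y == l)].sum(), 1)
       for g in ("A", "B") for l in (0, 1)])  # now all equal
```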

An AI system is typically fed a lot of data and makes its calculations from that data, says Achyut Chandra, manager and lead of HCL’s Open Innovation, which is why it is crucial to be explicit about what goes in. It is crucial, he adds, to take into account the features being extracted from the data sets.

This is precisely what happened in Amazon’s case: the model had been fed data from a time when male candidates dominated the field, and its output simply reflected those inputs. To address this, Nasscom’s AI head Ankit Bose advises start-ups to fully understand the biases in their data, to the point of building their own data sets at the source level.
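
Understanding the biases in a data set can start with a source-level audit. A minimal sketch, assuming a hypothetical applicant table: before any training, compare how groups are represented and how historical outcomes are distributed across them.

```python
# A minimal sketch of a source-level data audit on a hypothetical
# historical hiring table: check representation and per-group outcomes
# before any model is trained.
import pandas as pd

df = pd.DataFrame({  # made-up historical hiring data
    "gender": ["M"] * 80 + ["F"] * 20,
    "hired":  [1] * 48 + [0] * 32 + [1] * 6 + [0] * 14,
})

print(df["gender"].value_counts(normalize=True))  # representation: an 80/20 skew
print(df.groupby("gender")["hired"].mean())       # historical hire rates per group
```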

The secret to self-regulation

A variety of tools (from Google, Microsoft and others) can currently check the performance of AI systems, but there is no regulatory oversight. Experts therefore say that both new and established businesses should give self-regulation more thought.
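
As one concrete illustration of such tooling, the sketch below uses the open-source Fairlearn library (which originated at Microsoft) to compare a model’s behavior across groups; the labels, predictions and group assignments are made up for illustration.

```python
# A minimal sketch of a self-regulation check with Fairlearn: compare
# accuracy and selection rate across groups for made-up predictions.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # illustrative labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 0, 0])  # illustrative predictions
group  = np.array(["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"])

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true, y_pred=y_pred, sensitive_features=group,
)
print(mf.by_group)  # per-group accuracy and selection rate
print("demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```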

IT services firm TCS, for instance, has developed its RAI models by running pilots with a variety of players and routinely auditing its own operations. According to Lodha, this has produced an API-fied tool that is fungible, extendable and applicable to several kinds of AI models, including computer vision, natural language processing and sequence-to-sequence models.

Conducting thorough audits is another way to implement checks and balances. “A timely validation of compliance to any changes in standards in the RAI domain is essential, as well as a frequent assessment of their operations,” says Lodha. He also points to the strategic pillars of the US National AI Initiative and the ethics guidelines of the European Commission’s High-Level Expert Group on AI as useful starting points.

Beyond the tech giants, several well-known start-ups have also found ways to adopt RAI. Icertis, a SaaS unicorn that offers contract lifecycle management (CLM) solutions, integrated explainable AI and ethical AI into its systems a few years ago, says Monish Darda, its CTO and co-founder. Explainable AI lets Icertis correlate its AI’s findings with the data that produced them, enabling the company to explain the results its AI generates. “This has been really successful for us, as it allows us to see how we arrived at a prediction and what we missed,” he says. Ethical AI, meanwhile, lets Icertis assure users and clients that the data used to train its AI tools was obtained with consent and is being used for the intended purpose. The platform further guards against skew by selecting a representative sample of the data set, accounting for variables such as geography and culture.

There is also a strong focus on enlisting experts from other fields to create AI systems that are fair and just. “It is vital to consistently ask questions, involve professionals, such as software developers, data scientists, legal experts, the founders and others, and progress together to promote the good applications of AI,” says a Google representative. Awareness is one of the biggest cornerstones. We need individuals who are knowledgeable about RAI, says Bose, who explains that people who understand RAI will be able to see the ways in which AI models process data, much as humans do, and can provide tools for when problems arise.

It isn’t magic

RAI, according to Icertis’ Darda, is neither magic nor a destination. A business must use AI safely at all times, he says, predicting that it will take about 10 to 15 years to master RAI. But we will get closer to understanding how to eliminate biases, he adds. From personal experience, Darda echoes T-Hub CEO Rao’s observation that founders can become so enamored with technology that they neglect to consider how they handle their data and algorithms. Ignoring RAI, then, is not an option: start-ups must take the initiative, because failing to adhere to the necessary standards puts their users at risk.

Akbar Mohammed, chief data scientist at data intelligence firm Fractal Analytics, warns against careless application of AI in mental healthcare. From the way you converse with friends to the news you read, AI today can infer whether you have a mental health issue. That intelligence can be used to support you when you need it, but it can also be misused: bad actors could exploit it to hurt your employment prospects.

So what is the next step? Shantanu Narayen, chairman and CEO of software company Adobe, acknowledges that “it’s hard.” Darda adds that whether a company thinks about RAI from the design stage will determine its success or failure; responsible tech deployment, he says, is just as crucial as product-market fit, customers, financing and other factors. “You need to consider the potential biases and unforeseen repercussions. I would suggest that start-ups have larger and more robust data sets, conduct plenty more testing, and run numerous pilots,” says Narayen.

Intent, and the purpose for which an AI model is created, is the foundation on which all stakeholders can agree. Start-ups that work with significant amounts of data, processes and other information must get this right from the start. And it is still not too late, even today. Why? Because the first step is simply to ask yourself: “Am I using AI responsibly?”