Published on
May 30, 2024

Streetbees Shares: Gavin Harcourt on the ethical use of AI


What ethical principles should guide AI development and how can they be implemented? 


Ethical AI development should prioritise transparency, fairness and privacy. It’s important to communicate clearly how AI models are being used and why, and what data they collect and use. AI systems should avoid bias and discrimination, and should follow the same safeguards and standards already in place for traditional data processing.


These principles should be implemented from the start and maintained throughout the AI lifecycle. They need to be fostered across an organisation with rigorous training and education on ethical considerations, best practices, and regulatory requirements related to AI development and deployment. Ethical impact assessments should be conducted, enabling organisations to identify the potential ethical risks and implications of AI projects and take proactive measures to address them. When engineering a solution, the output of these assessments and safeguards should be incorporated into the development and model-testing processes to ensure ethical principles are being met.
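To make this concrete, one way such a safeguard can be wired into model testing is as an automated fairness check that runs alongside the rest of the test suite. The sketch below is a minimal illustration in Python, assuming a demographic parity metric and illustrative record fields and threshold; it is not a description of any particular organisation's process.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest gap in positive-outcome rates between demographic groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += 1 if r["approved"] else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example gate in a test suite: fail the run if the gap exceeds a
# threshold agreed in the ethical impact assessment (0.5 is illustrative).
outputs = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": True},
]
assert demographic_parity_gap(outputs) <= 0.5, "fairness threshold exceeded"
```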


How can transparency and accountability be fostered in AI systems? 


Transparency and accountability within AI systems are crucial when it comes to building trust with customers and maintaining the integrity of research outcomes.


As well as publishing the sources and scope of the training data used to create a model, it's important to maintain an audit trail and make the model's inputs and outputs available for review as it's used. In a chat-based interface, an end user can easily see this. If AI is being used behind the scenes as part of a data-processing pipeline or to generate content, however, then the inputs and outputs used to generate, process or review data should be recorded in an audit log and made available where possible.
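One lightweight way to build such an audit trail is to wrap every model call so that its input, output and purpose are appended to a log. The following is a minimal sketch only; the file name, field names and wrapper function are assumptions for illustration, not a description of a real pipeline.

```python
import json
import time
import uuid

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed append-only JSON Lines file

def audited_call(model_fn, prompt, purpose):
    """Invoke a model and record the full exchange for later review."""
    output = model_fn(prompt)
    entry = {
        "id": str(uuid.uuid4()),   # so individual calls can be referenced
        "timestamp": time.time(),
        "purpose": purpose,        # why the model was invoked
        "input": prompt,
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return output

# Example with a stand-in model function:
result = audited_call(str.upper, "summarise these responses",
                      purpose="pipeline step: summarisation")
```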


What roles should different stakeholders play in shaping the ethics of AI? 

It’s important to include a broad range of stakeholders in bringing an AI-driven product to market, even more so than with a traditional product, where established conventions, legislation and best practices can inform us of the right path.

The creators and developers of AI systems have an overall responsibility to hold inclusive discussions with everyone impacted by their actions. These conversations will have a profound impact on the conventions and direction that AI use takes.

As understanding of AI systems and the impact they might have is still limited within many stakeholder groups, it is essential that the experts and leaders in this space ensure education and consultation regularly take place.


How can we ensure advanced AI systems remain aligned with human values? 


AI should be used to enhance, not replace, human oversight in market research processes. We have to ensure that a human layer is always factored into the process and that the data collected is reviewed and approved by humans. AI systems should provide the tools and functionality to facilitate close alignment with the human values of fairness, transparency, privacy and accountability.
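A simple way to picture that human layer is as a gate that AI-generated material must pass through before it reaches the final output. The sketch below is purely illustrative; the function and the approval callback are assumptions standing in for a real review workflow.

```python
def human_review_gate(items, reviewer_approves):
    """Route AI-generated items through a human sign-off step.

    `reviewer_approves` stands in for whatever review tool or workflow
    an organisation actually uses; here it is an assumed callback.
    """
    approved, rejected = [], []
    for item in items:
        (approved if reviewer_approves(item) else rejected).append(item)
    return approved, rejected

# Example: only items a human signs off on continue downstream.
drafts = ["AI-generated insight A", "AI-generated insight B"]
approved, rejected = human_review_gate(drafts, lambda item: item.endswith("A"))
```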


Consumers are increasingly aware of the value of their data and want oversight of how it is collected and used by companies. Market research agencies should adopt a privacy-first principle to ensure customer data isn’t used to train AI models without express and informed consent. Transparency is essential for building trust. Many brands use data to develop a clear understanding of their customer base and to inform future marketing or product development strategies. Brands need visibility of how AI is used to support the final research output so they can be confident it accurately represents what their customers want from them.
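In practice, a privacy-first rule like this can be enforced at the point where a training set is assembled. The following minimal sketch assumes a consent flag on each record; the field and function names are illustrative only, not a description of any agency's systems.

```python
from dataclasses import dataclass

@dataclass
class Response:
    respondent_id: str
    text: str
    training_consent: bool  # assumed flag recording express, informed consent

def consented_training_set(responses):
    """Keep only responses whose owners opted in to model training."""
    return [r for r in responses if r.training_consent]

rows = [
    Response("u1", "loved the product", True),
    Response("u2", "would not buy again", False),
]
training_rows = consented_training_set(rows)  # only u1's response remains
```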