Consumer Rights On Fair And Responsible AI

The advent of Artificial Intelligence (AI) has enhanced consumer services while raising concerns about safeguarding consumer rights and fairness. Despite its potential, AI poses risks such as the spread of harmful content, legal exposure over copyright, and the unlawful disclosure of private information.

By adhering to ethical guidelines, including transparency, fairness, privacy protection, and accountability, we can mitigate these risks and safeguard consumer welfare, improving user experience and strengthening consumer protection. Prioritizing consumer rights in the age of AI is paramount, ensuring that technological advancements align with ethical guidelines that benefit society.

Consumer rights in Kenya are guided by Article 46 of the Constitution of Kenya, which places a responsibility on both public and private entities to protect the buyer who has entered into a contractual relationship with the seller. The Court of Appeal in Nairobi Bottlers Limited v Ndung'u & another anchored the jurisprudence on consumer protection, placing the responsibility on suppliers to reduce the information gap between suppliers and consumers by setting out on their product labels the product's nutritional information, storage directions and customer service contact information.

Consumer rights include but are not limited to:

  1. Right to goods and services of reasonable quality;
  2. Right to the provision of information necessary for consumers to gain full benefit from goods and services;
  3. Right to the protection of consumers’ health, safety and economic interests; and 
  4. Right to compensation for loss or injury arising from defects in goods or services provided. 

These rights are commonly summarized under five themes: safety, information, choice, voice and redress.

The Competition Act was enacted to protect consumers from unfair and misleading market conduct and provides for the establishment of the Competition Authority and Competition Tribunal that have powers to enforce consumer protection measures and sanctions. However, there are areas of concern to be considered:

  • Efficiency in enforcement measures;
  • Governance framework of the Competition Authority; and 
  • Ability to handle emerging issues.

Artificial Intelligence and Consumer Protection

2023 was a breakthrough year for generative artificial intelligence in the digital world, impacting the workforce, creation and innovation, communication, information gathering and more. AI may enhance consumer care and improve channels for redress, but consumer safety and digital fairness must be a top priority.

How can generative AI negatively affect consumer rights?

  1. Distribution of harmful content - AI systems can automatically generate content from human text prompts, and that content may be disruptive or harmful;
  2. Copyright and legal exposure - Popular generative AI tools are trained on massive image and text databases from multiple sources, including the internet. When these tools create images or generate lines of code, the source of the underlying data may be unknown, which can be problematic for industries such as banks handling financial transactions or pharmaceutical companies relying on a formula for a complex molecule in a drug. Reputational and financial risks can also be substantial if one company's product turns out to be based on another company's intellectual property;
  3. Sensitive information disclosure - Generative AI is democratizing AI capabilities and making them more accessible. For example, a medical researcher may inadvertently disclose sensitive patient information, or a consumer brand may unwittingly expose its product strategy to a third party. Unintended incidents like these could irrevocably breach patient or customer trust and open the door to legal ramifications (a minimal sketch of one precaution follows this list);
  4. Amplification of existing bias - Generative AI can amplify existing biases. For example, bias may be present in the data used to train large language models, which lies outside the control of the companies that use these models for specific applications. It is important for companies working on AI to have diverse project leads and subject matter experts to help identify unconscious bias in data and models.
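
To illustrate the third risk above, here is a minimal Python sketch of how an organization might strip obvious personal identifiers from text before it is ever sent to an external generative AI service. The patterns and the redact helper are hypothetical illustrations rather than part of any particular product, and real personal-data detection requires far more than regular expressions.

```python
import re

# Hypothetical, illustrative patterns; real detection of personal data needs
# far more than regular expressions (names, addresses, medical records, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?254|0)7\d{8}\b"),  # Kenyan mobile number format
    "ID_NUMBER": re.compile(r"\bID\s*No\.?\s*\d{6,8}\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder
    before the text leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com (ID No. 1234567), phone 0712345678."
safe_prompt = redact(prompt)
print(safe_prompt)
# Only safe_prompt, with identifiers removed, would then be sent to the AI service.
```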


What does responsible AI look like in its development, deployment and usage?

  1. Transparency - Developers need to be transparent about the data, algorithms, and models used in AI systems. This ensures that decisions made by AI can be explained and mistakes can be fixed;
  2. Fairness - AI must treat everyone fairly, regardless of their background. This helps prevent biased decisions or discrimination, promoting inclusivity and equality;
  3. Privacy protection - Protecting people's privacy is important when using AI. Organizations should handle personal data responsibly, following strict privacy regulations. Respecting privacy builds trust in AI systems; and 
  4. Accountability and explainability - Responsible AI requires mechanisms for holding systems accountable and explaining their decisions. Consumers should understand how AI systems work and have a way to address issues or biases (a minimal sketch of such a mechanism follows this list).
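
As a rough illustration of the accountability point above, the sketch below shows one way an AI-driven service might record, for every automated decision, the inputs, the outcome and a plain-language reason, so that a later consumer complaint can be traced and reviewed. The loan_decision rule and the log format are hypothetical examples, not a real scoring model.

```python
import json
from datetime import datetime, timezone

def loan_decision(applicant: dict) -> tuple[bool, str]:
    """Hypothetical, deliberately simple decision rule used only to
    illustrate logging; a real credit model would be far more involved."""
    if applicant["monthly_income"] >= 3 * applicant["requested_instalment"]:
        return True, "income covers at least three times the instalment"
    return False, "income below three times the requested instalment"

def decide_and_log(applicant: dict, audit_log: list) -> bool:
    approved, reason = loan_decision(applicant)
    # Record enough detail for the decision to be explained and challenged later.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "approved": approved,
        "reason": reason,
    })
    return approved

audit_log = []
decide_and_log({"monthly_income": 90_000, "requested_instalment": 40_000}, audit_log)
print(json.dumps(audit_log, indent=2))
```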

What is the impact of responsible generative AI on consumers?

  1. Enhanced user experience - Responsible AI can provide personalized and intuitive experiences. This way, AI systems offer tailored recommendations and create seamless interactions for individual consumers;
  2. Consumer protection - Ethical AI practices protect consumers from harm or exploitation by ensuring privacy and fair treatment, safeguarding consumers' rights and securing their personal data;
  3. Bias mitigation - Responsible AI works to reduce bias and discrimination. By addressing biases in algorithms and datasets, AI systems produce fairer outcomes and avoid perpetuating inequality on any grounds (a simple audit of this kind is sketched after this list);
  4. Improved decision making - Responsible AI helps consumers make informed choices. AI-powered tools provide intelligent insights in areas like finance, healthcare, and education; and
  5. Trust and accountability - The responsible use of AI fosters trust between consumers and technology. When consumers trust AI systems, they are more likely to embrace and use them, contributing to further improvements.
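
To make the bias-mitigation point above concrete, here is a minimal sketch of one common fairness check: comparing approval rates across groups, sometimes called a demographic parity check. The sample decisions and the threshold are invented for illustration; real audits use richer metrics and real outcomes.

```python
from collections import defaultdict

# Invented sample of automated decisions: (group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(decisions):
    """Compute the share of approvals per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {group: approvals[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap between groups is a signal to investigate the model and its data.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # illustrative threshold, not a legal standard
    print(f"Warning: approval-rate gap of {gap:.0%} between groups")
```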

Regarding data privacy, the Data Protection Act does not address the collection and processing of data by AI systems, and there are no guidelines on the use of personal data by developers when training their algorithms. This is a point of concern for user privacy, as most people are unaware that their data is being collected and stored, or how AI systems use that data.

In conclusion, consumer rights are constitutionally protected, underscoring the importance of upholding ethical standards in AI innovation. It is therefore paramount to prioritize consumer rights, ensuring that technological advancements align with ethical standards to benefit society as a whole.

Published on Aug. 22, 2024, 1:10 p.m.