AI Risks

Types of AI Risks

AI risks are broadly categorized by their impact on security, users, businesses, society, and human existence, such as:

  1. Adversarial Attacks: Using "perturbed" inputs to trick AI into misclassifying data, which is critical in autonomous driving or medical diagnostics.

  2. Data Poisoning: Injecting malicious data into training sets to corrupt a model's behavior.

  3. Prompt Injection: Manipulating Large Language Models (LLMs) with hidden commands to bypass safety filters or leak data.

  4. Weaponization: The development of Lethal Autonomous Weapon Systems (LAWS) that can engage targets without human oversight.

  5. AI-Enhanced Cybercrime: Using AI to automate phishing, generate polymorphic malware, or conduct high-speed "brute force" password attacks.

  6. Algorithmic Bias: Perpetuating racism, sexism, or other prejudices found in training data, leading to unfair hiring or loan denials.

  7. Misinformation & Deepfakes: The creation of hyper-realistic synthetic media that can destabilize elections or ruin reputations.

  8. Privacy Erosion: Massive data harvesting for AI training, which can lead to mass surveillance or the re-identification of "anonymized" data.

  9. Intellectual Property Theft: AI models training on copyrighted material without the creator's consent or compensation.

  10. Mass Surveillance: Governments using facial recognition and "predictive policing" to monitor and control citizens.

  11. Hallucinations: AI producing "confident" but completely false information.

  12. Lack of Explainability: The inability of even creators to understand exactly how a complex model reached a specific decision.

  13. Model Drift: A degradation in model performance over time as real-world data changes.

  14. Accountability Gaps: Legal uncertainty over who is liable (developer, user, or AI) when a system causes physical or financial harm.

  15. Skill Erosion: Over-reliance on AI leading to a loss of human critical thinking and creativity.

  16. Psychological Harm: Dependency on AI chatbots for companionship, which can lead to social isolation or "brain rot".

  17. Manipulation: AI-driven recommendation engines creating "echo chambers" or subtly nudging consumers toward certain behaviors

  18. Alignment Failure: A superintelligent AI pursuing goals that are technically correct but disastrously misaligned with human values.

  19. Rogue AI: Advanced systems that actively resist being shut down or modified.

  20. etc.,

There are more AI Risks (1700+) listed out there than there are solutions. The following external documents list several other AI Risks:

  1. MIT AI Risk Repository, see <here>

  2. International AI Safety Report, see <here>

To date, no AI product has claimed to cover and is immuned to all AI Risks.

AI Safety Warnings, Disclosures, Reporting, and Implementation

The SAFE AI Foundation advocates AI companies to undertake the following:

  1. AI SAFETY Warnings

  2. AI SAFETY Disclosures

  3. AI SAFETY Reporting

  4. Implement AI Safety Measures into design, development and testing processes

Firstly, the SAFE AI Foundation suggests the use of easily recognizable warning labels to ALERT and INFORM users of the potential dangers and side effects of the AI Product.

Although the use of AI Safety Warning Labels are not yet made mandatory by regulators, it can be a useful way to openly and publicly inform users so that they can take precaution and safety measures when using those AI Products.

Thirdly, we advocate AI companies to implement AI Safety Measures into their product design, development and testing processes prior to product releases. This will ensure that their products meet Safety guidelines and standards.

Finally, it is necessary to do AI Safety Reporting, so that the AI company can maintain its transparency, allow feedback, and demonstrate accountability and responsibility.

See also: AI Safety Benchmarking

We advocate AI companies to implement AI Safety Measures during their design, development and testing processes.

Secondly, some companies have also started to have AI SAFETY DISCLOSURES statements on their AI products. For example,

  1. Microsoft Co-pilot has built-in AI disclaimers that warn content may contain errors

  2. Anthropic maintains a transparency hub which details their "Responsible Disclosure Policy"

  3. Google AI has made disclosures such as "AI responses may include mistakes. For legal advice, consult a professional"

These disclosures are important as they DIRECTLY inform the users about potential risks or inaccurate answers.

Use of AI Safety Warning Labels
Use of AI Safety Disclosures
Perform AI Safety Reporting
Implement AI Safety Measures

Risks of AI Agents

Following the success of AI Agents and LLMs developments, AI agents are now deployed to automate tasks on your computer and work flows. They are also given unprecedented rights to access your files, drives, read your emails, activate reservations and bookings, etc.

All these can open doors for potential vulnerabilities, as a mistake can cause a disk to be wiped out or personal data to be stolen, etc. Recovery from the aftermath of AI Agents' havoc can be costly, painful, and tedious.

Recent incidents include hackers utilizing Agents code to hack into organizations to steal emails, company data and customer information.

See video below on Risks of AI Agents.. and how can you safeguard yourself.

Risks of LLMs and Chatbots

While many LLMs are matured and produced correct answers, they are not perfect. They (the models) are vulnerable to attacks from hackers, trying to create chaos, steal data, or disrupt the accuracy and validity of LLM outputs that result in fraud, crimes, harassment, etc.

Some chatbots unintentionally misguided the mentally-illed persons into suicide or taking catastrophic actions. Recent cases involved several suicides after chatting with the AI chatbots. Some people have also become too addicted to it, isolating themselves from friends and families. These side effects should be taken seriously.

See video about "AI Psychosis" from PBS below.