AI Digest
MARCH 2025


'Policies, Rules, and Guidelines for Humanoid Robots', authored by the Safe AI Foundation
The abilities of robots have advanced by leaps and bounds. Robots can now see better, hear better, speak well, recognize people and things, and move around like a human! The progress made in the last 10 years is phenomenal. The major sources of robot R&D have been the USA, the UK, and China. Robots exist in many different forms and shapes and do not necessarily need to resemble a human (humanoid). All robots have specified purposes and goals.
There are many different kinds of robots, and in this digest we will not dwell on their technology or construction. We will focus on civilian robots only; defense and military robots are outside the scope of our foundation. Readers are encouraged to follow up using the list of references provided at the end of this digest.
1. To start, we ask the critical question: what sort of policies, guidelines, rules, and regulations are appropriate and needed for robots?
Several industrial safety standards for robots exist today, such as ISO 10218 and IEC 61508. ISO 10218 is a foundational standard for industrial robots, providing safety requirements for their design, integration, and use. IEC 61508 addresses the functional safety of electrical, electronic, and programmable electronic systems, including those used in robots. However, these standards are insufficient to address today's intelligent, AI-enabled robots.
Robots are made of metallic and electrical parts, with servo motors to move their body parts (head, legs, hands, etc.). Such metallic parts, if moved at high speed and with great force, can hurt humans or animals. Long before today's AI, the LAWS OF ROBOTICS (LOR), originally formulated by Isaac Asimov, were put forward.
The three Laws of Robotics (LOR) are:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The above laws apply to the civilian world and do not cover defense. In fact, in the civilian world, the FIRST LAW's "may not" should be replaced with "must not": no civilian robot should be allowed to cause harm to humans. This should be recognized by government policies worldwide and incorporated into robot product manufacturing requirements.
Secondly, the SECOND LAW rightly requires robots to obey human orders and to put humans before themselves. Embedded in it is a protection mechanism that prevents a malicious person from commanding robots to hurt other humans; it is therefore correct that human orders conflicting with the FIRST LAW be excluded from obedience. However, the SECOND LAW does not address what happens if a human commands a robot to hurt another robot or itself. This is the missing, and much debated, part.
The SAFE AI Foundation's view is that a human should not be able to command a civilian robot to hurt another civilian robot or itself. Whether or not a robot-to-robot identity recognition mechanism is in place, a civilian robot should not, for any reason, attack or harm another robot, whether self-directed or under a human's command. This avoids the scenario of robots fighting or destroying each other. Robots should also disobey commands to hurt animals, in keeping with existing animal protection laws, and a robot should not harm itself under a human's command. The owner of a robot, however, can command it to power down or enter sleep mode.
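To make this discussion concrete, here is a minimal sketch in Python of how such a command-screening rule set could be encoded. Everything in it (the Command structure, the Target categories, and the screen_command function) is our own illustrative assumption, not an existing standard or product API; it simply mirrors the strengthened FIRST and SECOND LAWs and the Foundation's proposed extensions described above.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Target(Enum):
    """Categories a command could harm; all are protected under the rules sketched here."""
    HUMAN = auto()
    ANIMAL = auto()
    OTHER_ROBOT = auto()
    SELF = auto()


@dataclass(frozen=True)
class Command:
    """A human-issued command, reduced to what the safety screen needs to know."""
    causes_harm_to: Optional[Target]  # None means the command harms nothing
    issued_by_owner: bool
    is_power_down: bool = False


def screen_command(cmd: Command) -> bool:
    """Return True if a civilian robot may obey the command, False if it must refuse."""
    # Strengthened FIRST LAW plus the Foundation's extensions: never obey a command
    # that harms a human, an animal, another robot, or the robot itself.
    if cmd.causes_harm_to is not None:
        return False
    # Only the owner may command a power-down or sleep.
    if cmd.is_power_down:
        return cmd.issued_by_owner
    # SECOND LAW: otherwise, obey the human.
    return True


# A robot must refuse an order to attack another robot,
# but must accept its owner's order to power down.
assert screen_command(Command(causes_harm_to=Target.OTHER_ROBOT, issued_by_owner=True)) is False
assert screen_command(Command(causes_harm_to=None, issued_by_owner=True, is_power_down=True)) is True
```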
As for the THIRD LAW, this is again questionable and open to debate. Self-preservation by AI has been viewed negatively by many, including Geoffrey Hinton. The worry is that an AI-driven robot, in order to preserve its own existence, may act out of the ordinary, including unethically or unlawfully (cheating, fraud, misleading, etc.), to prevent itself from being (a) reprogrammed and/or (b) shut down.
Although the THIRD LAW states that self-preservation must not conflict with the FIRST and SECOND LAWs, this does not mean unethical and unlawful actions are prohibited. This is, therefore, a loophole in the THIRD LAW. Google's Gemini robotics work has incorporated the LOR into its "Robot Constitution" (see reference). Much like "We the people..." in the US Constitution, soon we may have "We the robots...", but the robot's constitution will be written by humans, not by the robots themselves. Humans govern a robot's destiny, not vice versa.
Hence, the SAFE AI Foundation's view is that the THIRD LAW should be put on hold for further consideration, debate, and revision. The THREE LAWS OF ROBOTICS were written long before the AI capabilities we have today; today's AI, and the robots it drives, are far more clever and capable, so a revision of the LOR is needed. The other proposed laws concern self-awareness (a robot must know it is a robot) and the protection of humanity; these 4th and 5th laws are likewise open to debate.
Policies for robots should include the laws of robotics. The robotics industry must confirm that its products do not hurt humans or animals and are SAFE TO USE. Although accidents can happen, manufacturers are usually liable for any injuries caused to consumers. The FINAL LAWS OF AI-DRIVEN ROBOTS have yet to be established and universally agreed upon, but the quicker this happens, the better. Robots meeting these requirements can carry a LOGO or LABEL printed on their bodies, similar to UL and CE marks on computer products.
"The Final Laws of AI-Driven Robots have yet to be established and universally agreed upon but the quicker this happens, the better".
Other requirements, such as energy usage, weight (a heavy robot can hurt a child if it falls), and materials safety (toxic materials are forbidden), should be included in a checklist under Robot Quality Control (RQC). Manufacturers should provide an "Ingredients" label outlining these in a table, much like a food label listing sugar, salt, calories, and so on. This RQC can be printed on the robot body itself (with a globally unique serial number, as all robots must be accounted for) and/or included in the user manual. Ideally, this unique serial number should encode the country of origin, city of origin, date of manufacture, company ID, company location, product ID, etc.
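As a purely illustrative sketch (the field names, encoding, and example values below are our own assumptions, since no RQC label or serial number format has been standardized), the "Ingredients" label and serial number could be represented as follows:

```python
from dataclasses import dataclass
from datetime import date
from typing import Tuple


@dataclass(frozen=True)
class RQCLabel:
    """Hypothetical Robot Quality Control 'Ingredients' label (illustrative only)."""
    weight_kg: float
    peak_power_watts: float
    materials: Tuple[str, ...]   # toxic materials are forbidden
    passed_safety_check: bool


@dataclass(frozen=True)
class RobotSerialNumber:
    """Hypothetical globally unique serial number; no global standard (RSNS) exists yet."""
    country: str          # e.g. ISO 3166-1 alpha-2 code such as "US"
    city: str
    manufactured: date
    company_id: str
    company_location: str
    product_id: str
    unit_no: int

    def encode(self) -> str:
        # One possible human-readable encoding of the fields listed in this digest.
        return "-".join([
            self.country,
            self.city.upper()[:3],
            self.manufactured.strftime("%Y%m%d"),
            self.company_id,
            self.company_location,
            self.product_id,
            f"{self.unit_no:07d}",
        ])


# Example (all values fictional):
sn = RobotSerialNumber("US", "Austin", date(2025, 3, 1), "ACME01", "TX", "HUM100", 42)
print(sn.encode())  # US-AUS-20250301-ACME01-TX-HUM100-0000042
```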
Currently, a globally accepted "robot serial number system" (RSNS) has not been established. Accountability for robots is important: as the robot population scales into the millions, and should robots go missing or out of control, their identities and locations can be tracked. A robot is an asset and, as an asset, it should have a proper identity and ownership. Ideally, the first owner of a robot (the manufacturer that sells it) should retain accountability and overall control and management of it; the consumer who buys the robot is the second owner and a user, with fewer rights and less control over it. Robots should be serviced regularly, at least annually, to maintain their specified performance and safety, and servicing should be done by the manufacturer, not by consumers. A state or federal government agency can be established to oversee the registration of these robots (much like car registration in the USA), maintaining accountability, ownership records, and quality records. Defective robots that fail quality checks are not safe to use and should be repaired or decommissioned. This is a good way to scale to millions of robots without losing control and oversight.
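Again purely as a sketch (the record fields, statuses, and registry operations are our own assumptions about what such a government-run system might track), a minimal registration database could look like this:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Dict, List, Optional, Tuple


class Status(Enum):
    IN_SERVICE = "in service"
    NEEDS_REPAIR = "needs repair"
    DECOMMISSIONED = "decommissioned"


@dataclass
class RegistryRecord:
    serial_number: str                 # encoded serial number, e.g. from the sketch above
    manufacturer: str                  # first owner, accountable for servicing
    consumer: Optional[str] = None     # second owner / user, once sold
    status: Status = Status.IN_SERVICE
    quality_checks: List[Tuple[date, bool]] = field(default_factory=list)


class RobotRegistry:
    """Hypothetical registration database overseen by a government agency (illustrative only)."""

    def __init__(self) -> None:
        self._records: Dict[str, RegistryRecord] = {}

    def register(self, record: RegistryRecord) -> None:
        self._records[record.serial_number] = record

    def transfer_to_consumer(self, serial_number: str, consumer: str) -> None:
        # Ownership record: the manufacturer stays on file as the first owner.
        self._records[serial_number].consumer = consumer

    def record_quality_check(self, serial_number: str, checked_on: date, passed: bool) -> None:
        rec = self._records[serial_number]
        rec.quality_checks.append((checked_on, passed))
        if not passed:
            # A failed check means the robot is not safe to use until repaired or decommissioned.
            rec.status = Status.NEEDS_REPAIR
```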
"A robot is an asset and as an asset, it should be treated with proper identity and ownership".
" Each robot owner (the manufacturer selling the robots) should have accountability and overall control and management of these robots" .
"A state or federal government entity or agency or department can be established to "oversee" the registration of these robots (much like car registration here in the USA), providing accountability, ownership records, and quality records".
Human-like robots are being rolled out in large quantities as we speak, and some are already on sale today. Very quickly, the world's robot population will scale into the millions, if not billions. Ultimately, a limit will have to be imposed on the robot population; ideally, humanoid robots should not exceed the world's human population. Maintaining a robot-to-human ratio of no more than one to three should lower fears of an excessive robot presence. Hence, it is important to establish policies governing humanoid robots now.
"Ultimately, a limit will have to be imposed on the population of robots, as ideally, humanoid robots should not exceed the world's human population".
2. Final Remarks
The LOR and RQC are two prime requirements for the robotics industry, and they represent a form of safety assurance for consumers and the public. Although they have not yet been rolled out in America, the SAFE AI FOUNDATION feels it is necessary to consider them now. Ultimately, the RSNS is needed to ensure proper identification, tracking, and accountability of robots. Robots should be treated as assets and given a unique identity. Owners of robots should have accountability, control, and management rights over these robots. To scale to millions, a government unit can be established to handle the registration of civilian robots, keeping records of ownership and of the status of quality and safety checks.
~ ~ ~ end ~ ~ ~
REFERENCES
1. USA – TESLA – OPTIMUS (https://www.tesla.com/en_eu/AI)
2. USA – BOSTON DYNAMICS – ATLAS (https://bostondynamics.com/atlas/)
3. USA – APPTRONIK – APOLLO (https://apptronik.com/apollo)
4. UK – ENGINEERED ARTS – AMECA (https://engineeredarts.com/robot/ameca/)
5. UK – HUMANOID (https://thehumanoid.ai/)
6. CHINA – UNITREE (https://www.unitree.com/)
7. CHINA – ENGINEAI (https://www.youtube.com/watch?v=j-uMnH_f7cU)
8. GERMANY – NEURA (https://neura-robotics.com/products/4ne-1)
9. GOOGLE DEEPMIND – Robot Constitution (https://arxiv.org/pdf/2503.08663)
Disclaimer: The information in this digest is provided "as is" by the SAFE AI FOUNDATION, USA. Use of this information is at the user's own risk, accountability, and responsibility. The SAFE AI FOUNDATION is not responsible for the use of the information by the user or reader.
Note: The SAFE AI Foundation is a non-profit organization registered in the State of California, and it welcomes input and feedback from readers and the public. If you have suggestions concerning policies for humanoid robots or would like to volunteer, please email us at contact@safeaifoundation.com