Johannesburg: TECHz – News Desk
As artificial intelligence reshapes the boundaries of creativity, governance, and identity, lawmakers across the globe have responded with a mix of caution, curiosity, and control. In Japan, humanoid robots are banned from certain public spaces – not for safety, but to preserve the dignity of human interaction. Saudi Arabia famously granted citizenship to a robot named Sophia, sparking global debate over rights without responsibilities. In China, AI-generated content must be clearly labeled, and deepfakes are criminalized unless approved by the government. These laws blur the line between protection and propaganda.
Italy once banned ChatGPT over privacy concerns, reinstating it only after stricter data-transparency measures were introduced. In the European Union, the AI Act, adopted in 2024, categorizes AI systems by risk level, with “unacceptable risk” applications, such as social scoring, facing outright bans. Meanwhile, in the United States, AI regulation remains fragmented: some states require disclosure when AI is used in hiring or customer service, while others have no rules at all.
South Korea mandates that AI decisions affecting humans involve a human overseer, a symbolic gesture toward retaining moral agency. In India, draft laws propose that AI systems must not “hurt the sentiments” of any group, a vague but culturally loaded clause. Even naming conventions are policed: in some jurisdictions, AI-generated names for products or characters must not resemble those of real people, to avoid defamation or confusion.