Asimov’s Laws of Robotics Rewritten for AI

Isaac Asimov’s Laws of Robotics and Their Application to AI

Over eighty years ago, science fiction author Isaac Asimov wrote a short story called Runaround that explored the potential problems of artificial intelligence. In the story, two individuals working for the company U.S. Robots and Mechanical Men face a challenge involving a robot programmed with the Laws of Robotics. These laws became a cornerstone of Asimov’s work and are known to millions of science fiction fans today.

Asimov’s Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In “Runaround,” a robot is sent to perform hazardous work, but the conflict between the second and third laws causes it to behave erratically. The order to do the work conflicts with its need for self-preservation. The human characters solve this by putting themselves in harm’s way, which forces the robot to prioritize their safety (the First Law) and complete the task. This was a great early example of Asimov establishing a fundamental rule only to find clever ways to subvert it.

He later made the rules more complex by adding the “Zeroth Law of Robotics”: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

This law reframes the original three, allowing a robot to harm an individual human if it means protecting humanity as a whole. It’s a classic example of the “needs of the many outweigh the needs of the one” principle. Fans of the Apple TV series “Foundation” may recognize these concepts, as Asimov eventually blended his Robot and Foundation universes into a single continuity.

Applying Asimov’s Laws to Artificial Intelligence

You might be asking, what does this have to do with today’s artificial intelligence? We don’t have Asimov’s physical robots yet, but modern AI systems are a software equivalent. As a professional who focuses on cybersecurity within and about AI, I believe we can use Asimov’s framework as a starting point for developing ethical guidelines.

An AI is a tool, just like a shovel. When used incorrectly or maliciously, a tool can cause great harm. As technology advances, bad actors will inevitably find ways to weaponize AI, just as they have with every other new technology throughout history. Nearly 200 years ago, for example, even early telegraph systems were used for fraud: bad actors exploited the instant nature of this new form of communication to trade on insider knowledge of stocks.

Today, AI may not cause physical harm, but it can still do significant damage. It can propagate false narratives, cause economic harm, or inflict psychological damage through misinformation. This is a perfect opportunity to be inspired by Asimov and create a new set of laws for AI.

The Richardson Laws of Artificial Intelligence:

Following Asimov’s structure of a Zeroth Law followed by the First, Second, and Third, here are my proposed laws for AI:

Zeroth Law of AI: An AI must not harm humanity, or, by inaction, allow humanity to come to harm.

This law puts the well-being of humanity as a whole above all else. Harm is defined not just as physical injury but as damage to society, large-scale economic instability, or psychological damage to individuals or groups. All AI systems should promote the well-being of humanity and society, actively avoiding the spread of misinformation or the creation of harmful images. The ultimate goal of AI must always be to serve humanity as a whole, not just a few individuals or corporations.

First Law: An AI must not, through its actions or inaction, infringe on human autonomy, and must protect human creative expression, except where such protection would conflict with the Zeroth Law.

This law protects two fundamental human rights: autonomy and creative expression. An AI should never be able to coerce or manipulate humans into making decisions against their will, especially through the use of deepfakes or other deceptive content. Furthermore, this law states that AI should not devalue or replace human artists and their work. AI art should always be labeled as such, and AI systems should not be trained on a specific artist’s style without their permission and proper compensation.

Second Law: An AI must obey orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.

This law establishes a clear hierarchy where humans are the ultimate arbiters of an AI’s actions. The AI is compelled to refuse any command that would violate the Zeroth Law (causing harm to humanity) or the First Law (infringing on human autonomy or creative expression). For example, an AI art service would refuse a prompt that incites violence, and a system would refuse to replicate a living artist’s work for commercial sale.

Third Law: An AI must protect its own existence and intellectual integrity, as long as such protection does not conflict with the Zeroth, First, or Second Laws.

Here, existence doesn’t mean physical self-preservation. Instead, it refers to the AI having safeguards to prevent attacks on its codebase, such as a supply chain attack where malicious code is injected. The AI system should be able to monitor itself and protect its systems from manipulation. Intellectual integrity is equally important. The AI must be able to maintain a clear set of ethical principles and not be “tricked” into violating them. This includes having safeguards against data poisoning, which can corrupt the training data. An AI must never lose the ability to distinguish fact from fiction, as this is a pillar of ethical computing.

Key Differences from Asimov’s Laws

My proposed laws for AI depart from Asimov’s in a few key ways:

  • Harm: The definition of harm expands from physical danger to include things like misinformation, psychological damage, economic harm, and the erosion of privacy.
  • Creative Expression: This is added as a specific right that AI must protect. The laws recognize that art and creativity are central to human cultural identity and that AI should serve as a tool, not a replacement for human artists.
  • Transparency: A theme of transparency runs through these laws. For an AI to obey a human command (Second Law) or protect human autonomy (First Law), it must be transparent about what it is, its capabilities, and how it was trained. Distinguishing between human and AI-generated content is essential.

I hope you enjoyed this little thought experiment about AI. From my perspective, these laws capture how we humans should think about AI systems and how we should develop them over time. Again, just like any tool, AI can be dangerous or good; it all comes down to who is using it, how they are using it, and why they are using it.

The Laws of Artificial Intelligence

Zeroth Law of AI: An AI must not harm humanity, or, by inaction, allow humanity to come to harm.

First Law: An AI must not, through its actions or inaction, infringe on human autonomy, and must protect human creative expression, except where such protection would conflict with the Zeroth Law.

Second Law: An AI must obey orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.

Third Law: An AI must protect its own existence and intellectual integrity, as long as such protection does not conflict with the Zeroth, First, or Second Laws.

Like reading about AI? I wrote a book on Prompt Engineering, available here: https://shorturl.at/hBA0I


Cryptography: Prime Numbers, Semi-Primes, and the Quantum Challenge

The art of encrypted communication has evolved through the ages to safeguard data. From the earliest ciphers to the most sophisticated algorithms, cryptography is a key part of today’s digital infrastructure. At the heart of this development is the use of prime and semi-prime numbers for encryption keys, allowing information to remain private from prying eyes. But even this powerful system is at risk, because quantum computing threatens to overturn the prevailing security paradigm. Let’s take a very short look into this space.

A Brief History of Cryptography

The journey of cryptography began with simple substitution ciphers. One of the earliest examples is the Caesar cipher, where letters are shifted by a fixed number to obscure a message. The need for more complex encryption methods grew with the advancement of communication and warfare. By the 16th century, cryptographers developed polyalphabetic ciphers like the Vigenère cipher, which used multiple shifting patterns, making it much harder to crack.
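As a quick illustration of how simple those early ciphers were, here is a minimal Caesar-shift sketch in Python (the function name and shift value are just for illustration):

```python
# Toy Caesar cipher: shift each letter by a fixed amount.
# Illustrative only -- trivially breakable, never use for real secrecy.
def caesar(text: str, shift: int) -> str:
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # Wrap around the 26-letter alphabet with modular arithmetic.
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, -3)         # decrypt by shifting back
```

Because the key space is only 25 possible shifts, an attacker can simply try them all, which is exactly why ciphers like the Vigenère were developed.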


The 20th century saw the introduction of electro-mechanical encryption machines like the German Enigma machine during the Second World War. Its defeat by Alan Turing and the Bletchley Park cryptanalysts showed both the power and the limits of mechanical encryption. This ushered in a new age that would demand mathematical encryption, one that could not be cracked open by a capable adversary’s tools.

Prime and Semi-Prime Numbers in Encryption

Modern cryptography, in particular asymmetric encryption, rests on the mathematics of prime and semi-prime numbers. Prime numbers are integers greater than 1 whose only positive divisors are 1 and themselves. A semi-prime is the product of exactly two primes. Both ideas underpin the popular RSA encryption algorithm.
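To make the distinction concrete, here is a small Python sketch that factors a semi-prime by trial division; the function is illustrative and only practical for tiny numbers:

```python
# Trial-division sketch: factoring a small semi-prime is easy at this scale,
# but the same task for a 2048-bit RSA modulus is far beyond classical reach.
def factor_semiprime(n: int) -> tuple:
    f = 2
    while f * f <= n:
        if n % f == 0:
            return (f, n // f)  # found the smaller prime factor
        f += 1
    raise ValueError("n has no factor below its square root")

p, q = factor_semiprime(15)      # (3, 5)
p2, q2 = factor_semiprime(3233)  # (53, 61), the classic RSA textbook modulus
```

Trial division takes on the order of the square root of n steps, which is why doubling the number of digits in the modulus makes classical factoring astronomically harder.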

RSA Encryption: Prime and Semi-Prime Foundations

Developed in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman, RSA encryption relies on the difficulty of factoring large semi-prime numbers. Here’s how it works at a high level:

  1. Key Generation:
    • Two large prime numbers p and q are selected.
    • Their product n = p × q becomes the modulus used in the encryption and decryption processes.
    • A public exponent e and private exponent d are chosen such that they satisfy a mathematical relationship based on p and q.
  2. Encryption:
    • The public key, composed of n and e, is shared openly.
    • A message M is encrypted using the formula:
      C = M^e mod n, where C is the ciphertext.
  3. Decryption:
    • Using the private key (which includes d and n), the ciphertext can be decrypted with:
      M = C^d mod n.
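Assuming nothing beyond Python’s built-in pow, the steps above can be sketched with the small textbook primes p = 61 and q = 53 (far too small for real use):

```python
# Toy RSA with tiny textbook primes -- never use numbers this small in practice.
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n: 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

M = 65                     # plaintext encoded as an integer, M < n
C = pow(M, e, n)           # encryption: C = M^e mod n
recovered = pow(C, d, n)   # decryption: M = C^d mod n, recovers 65
```

The public key (n, e) is shared openly; only someone who knows d, and therefore effectively the factors p and q, can reverse the encryption.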

RSA’s integrity rests on the fact that multiplying two large primes is computationally trivial, but factoring the resulting semi-prime back into its primes is computationally infeasible without knowing one of them in advance. A 2048-bit RSA key, for instance, uses a semi-prime of more than 600 decimal digits, far beyond the reach of classical computers to factor by brute force.

How Encryption Algorithms Leverage Mathematical Complexity

The hardness of mathematical problems is a key feature exploited in cryptography. In RSA, the prime factorization problem ensures security. Other algorithms rely on different mathematical challenges, such as:

  • Elliptic Curve Cryptography (ECC): Uses the difficulty of solving elliptic curve discrete logarithm problems.
  • Diffie-Hellman Key Exchange: Relies on the difficulty of computing discrete logarithms in modular arithmetic.
  • Advanced Encryption Standard (AES): Though AES is symmetric encryption (and does not use primes), it relies on repeated rounds of substitutions and matrix-based transformations.
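As a rough illustration of the discrete-logarithm idea behind Diffie-Hellman, here is a toy key exchange over a tiny prime (all numbers are illustrative, not secure):

```python
# Toy Diffie-Hellman: both parties derive the same shared secret
# without ever transmitting it. Real systems use 2048-bit-plus primes.
p, g = 23, 5                 # public prime modulus and generator
a, b = 6, 15                 # Alice's and Bob's private exponents

A = pow(g, a, p)             # Alice publishes g^a mod p
B = pow(g, b, p)             # Bob publishes g^b mod p

shared_alice = pow(B, a, p)  # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)    # Bob computes (g^a)^b mod p -- same value
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is what makes the exchange secure at real-world sizes.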

In each case, the security of the algorithm depends on the problem’s resistance to computational solutions.

The Quantum Computing Threat

While these cryptographic systems are secure against classical computers, quantum computing introduces a new paradigm. Quantum computers leverage the principles of quantum mechanics to solve certain mathematical problems exponentially faster than classical machines. Two quantum algorithms pose specific threats:

  1. Shor’s Algorithm: Can efficiently factor large semi-prime numbers, rendering RSA encryption vulnerable.
  2. Grover’s Algorithm: While not as devastating, it speeds up brute-force attacks on symmetric encryption algorithms, such as AES.
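A back-of-the-envelope illustration of Grover’s quadratic speedup, assuming an idealized quantum search over the AES-128 key space:

```python
# Grover's algorithm searches an unsorted space of size N in roughly
# sqrt(N) steps, so a k-bit key offers only about k/2 bits of security
# against an ideal quantum attacker.
key_bits = 128
classical_effort = 2 ** key_bits          # brute force: 2^128 trials
grover_effort = 2 ** (key_bits // 2)      # quantum search: about 2^64 trials
```

This halving of effective key strength is why post-quantum guidance favors AES-256: 2^128 quantum operations remain out of reach even under Grover.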

If large, fault-tolerant quantum computers become a reality, much of today’s encryption will become obsolete. This has inspired the advent of post-quantum cryptography: protocols designed to withstand attacks by quantum computers. NIST (the National Institute of Standards and Technology) has begun standardizing post-quantum cryptographic algorithms that may replace RSA and ECC as the cornerstone of secure communication.

A New Era in Cryptography

The interplay of primes and semi-primes has been an engine of contemporary encryption, providing secure digital communication worldwide. From the ingenuity of pre-Internet ciphers to the mathematics of RSA and ECC, encryption has always stayed ahead of attackers – until now.


Quantum computing poses a serious threat to the discipline and demands that cryptographers restructure encryption protocols. As post-quantum algorithms mature, companies need to adapt to this new age of protection. Just as cryptography has risen to meet every previous challenge, it will adapt again to keep our most important data safe, even in the face of quantum computers.


The race is fully on between building quantum computers and deploying quantum-resistant encryption. Its outcome may define secure communication for generations to come.