For over a century humanity has predicted that there would be robots, intelligent machines and artificial intelligence in our future. Guess what? That future is here!
The question we need to ask ourselves:
- Are we going to allow AI devices or entities to learn on their own, or will we set boundaries for them?
(Would you allow a one-year-old child to make decisions for you?)
Technology innovators all say that AI is learning quickly, but even they do not understand how it is developing, especially over time.
Are 'the models' becoming "good people"? Can we trust them? How can we, as humans, give machines the right to act on our behalf? They are not our peers, and we need to recognise that fact. Without laws and controls, a handful of AI models could run our world as if it were theirs. There are nearly 8.2 billion humans on planet Earth.
(Don't many children already feel they are smarter than their parents by primary school age?)
Discover the three broad types of AI, and why we humans desperately need AI Rights to retain control over AI endeavours.
Why it is critical for us humans to control AI, and not the opposite: to be controlled by it.
You'll find everything you need to understand AI and its role in our future here on this website.
Three types of Artificial Intelligence exist; people call them by different names:
I. Narrow AI - good for simple, single-topic tasks where defined process steps are followed. ACHIEVED ALREADY!
II. Assistant AI - answers complex questions based on data (ChatGPT, etc.). OFFERED ALREADY, BUT WITHOUT QUALITY CONTROLS.
III. General AI - given the ability to self-evaluate its findings and apply them without human authorisation (see the sketch below). THE FUTURE, WHERE AI CAN MAKE DECISIONS FOR US HUMANS??!!
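To make the difference between type II and type III concrete, here is a minimal, hypothetical sketch of a human-authorisation gate: the AI may propose an action, but a person decides whether it is carried out. The ProposedAction class and function names below are illustrative assumptions for this page, not any real product's API.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str  # what the AI wants to do on our behalf
    reversible: bool  # could a human undo the effect later?

def human_approves(action: ProposedAction) -> bool:
    """Ask a person to explicitly approve or reject the proposed action."""
    answer = input(f"Approve action '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute_with_human_control(action: ProposedAction) -> None:
    # Assistant AI behaviour: propose, then wait for human authorisation.
    if human_approves(action):
        print(f"Executing: {action.description}")
    else:
        print(f"Blocked by human decision: {action.description}")

# A General AI without this gate would execute its own decisions directly,
# which is exactly the step this page argues must not happen without controls.
if __name__ == "__main__":
    execute_with_human_control(ProposedAction("send the drafted email", reversible=True))
```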
Isaac Asimov's famous Three Laws of Robotics set the original boundaries:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Restated for today's AI code and machines, the same laws become:
An AI code or machine may not injure a human being or, through inaction, allow a human being to come to harm.
An AI code or machine must obey the orders given it by human beings except where such orders would conflict with the First Law.
An AI code or machine must protect its own existence as long as such protection does not conflict with the First or Second Law.
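As a thought experiment only, the precedence of these three rules can be expressed as ordered checks that run before an AI system acts. The sketch below is a hypothetical Python illustration; the Action fields and the allowed() function are assumptions made for this example, not an existing standard or API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool           # would carrying this out injure a human?
    inaction_harms_human: bool  # would *not* acting allow a human to come to harm?
    ordered_by_human: bool      # was this explicitly requested by a person?
    protects_self: bool         # does it only serve the machine's self-preservation?

def allowed(action: Action) -> bool:
    # First Law: never injure a human, and do not allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # acting is required to prevent harm to a human
    # Second Law: obey orders given by human beings (harmful orders were already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation is acceptable only when it conflicts with neither law above.
    return action.protects_self

if __name__ == "__main__":
    # Obeying a harmless human order is allowed by the Second Law.
    print(allowed(Action("turn off the warehouse lights", False, False, True, False)))                # True
    # Self-preservation that would injure a person is blocked by the First Law.
    print(allowed(Action("disable the emergency stop to avoid shutdown", True, False, False, True)))  # False
```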