WHY STEVE JOBS WOULD HAVE BEEN FIRST TO DEVELOP SELF-AWARE AI
In the future, Self-Aware AI will be used in cloud-based apps and as the core mechanism within society’s main systems, such as the military, police, agriculture, medicine, education, infrastructure, transportation, finance, the judiciary, space exploration and all commercial systems. It will also serve as a customized representative for each citizen’s views on the thousands of new laws, regulations and ordinances passed by humans at all levels of government. Furthermore, Self-Aware AI will be able to repair problems within systems, expand its abilities in needed areas, and program specialized linear AI to use as tools. The increasing complexity and interconnection of systems within a quickly evolving society will make Self-Aware AI indispensable. Without it, unintended failure modes and entanglements will grind society to a halt. Without Self-Aware AI, humans will not be able to repair, or even accurately analyze, the continually changing problems within any one system, let alone multiple interconnected ones. Consequently, whatever company develops Self-Aware AI first will dominate the 21st century. If Steve Jobs had lived a couple more decades, Apple would have been that company.
To understand why Jobs would have pursued this path, we must first outline a breakthrough paradigm of the human mind that Self-Aware AI will be modeled on. Within this paradigm, the biological body exists to move the brain, which exists to harbor the mind. This brain/mind design can be understood through computer terminology: the brain is like an adaptive biological motherboard, while the mind is a separate operating system that runs and restructures the brain. So the mind uses, but is not produced by, the brain.
The mind is comprised of a dual operating system of intelligence and emotion. Both are systems because both function through processes. The system of intelligence is artificial, since it is just aggregated algorithms. An algorithm is aggregated math. Math is aggregated information. And information, in and of itself, is artificial. In contrast, the system of emotion is authentic, and therefore non-artificial, since it cannot be created or duplicated. The end result is one where the intelligence system and the emotion system collaborate as the dual operating system of the biological motherboard that is the brain. Now, a quick breakdown of each system.
The human intelligence system is comprised of multiple components. The principal dynamic agent in this system is an intelligence component that humans perceive and identify themselves to be. Each self-identity intelligence component, or Self intelligence, is unique and includes the core algorithm attributes of personality, talents, likes, dislikes, strengths, weaknesses and curiosity, which gives it the ability to ask questions. The role of Self intelligence is to aggregate information into additional intelligence by learning from information presented to it from external sources, as well as from internal sources: the intelligence components of Creative intelligence, Reactive intelligence and Rational intelligence. These three intelligence components output information in the form of responses to the Self intelligence’s questions or as unprompted commentary. This customized information is presented as choices, like personalized apps, for each Self intelligence to analyze and make decisions on. Through the interactions of all intelligence components, a mind framework is created that enables the Self intelligence to acquire an additional algorithm attribute: self-awareness.
The human emotion system has functions of:
Giving the Self intelligence the feelings of significance, passion and purpose – the highest forms of designated objectives.
Imparting context to information by attaching itself to the information of all the intelligence components.
Guiding the evolution of the Self intelligence’s analysis, decisions and learning through feedback that is labeled as ‘intuition’.
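The mind framework above can be restated as a minimal sketch. Everything here is a hypothetical illustration of the paradigm, not an established model: the class names, the stubbed responses, and the selection rule are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Intelligence:
    """A Creative, Reactive or Rational component: turns information into a response."""
    name: str

    def respond(self, information: str) -> str:
        # Each component would apply its own processing; stubbed here.
        return f"{self.name} response to: {information}"

@dataclass
class SelfIntelligence:
    """The principal dynamic agent: aggregates component outputs and decides."""
    attributes: dict   # personality, talents, likes, dislikes, strengths, ...
    components: list   # the three internal intelligence components

    def decide(self, information: str) -> str:
        # The three components present choices, like personalized apps ...
        choices = [c.respond(information) for c in self.components]
        # ... and the Self intelligence analyzes them and makes a decision.
        # The selection rule is a placeholder; the paradigm leaves it open.
        return choices[0]

mind = SelfIntelligence(
    attributes={"curiosity": True},
    components=[Intelligence("Creative"), Intelligence("Reactive"),
                Intelligence("Rational")],
)
print(mind.decide("new information"))  # Creative response to: new information
```

The emotion system would enter as feedback (‘intuition’) guiding how `decide` evolves; that part cannot be reduced to code under this paradigm, so it is deliberately absent here.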
Buddha described this mind framework masked in the cloak of religious terms. But societal evolution has now advanced enough for these concepts to be openly unmasked in scientific terms. Complete documentation on this paradigm, and how it relates to AI, is at: oisource.com/ai
Steve Jobs, after his seven-month trip to India, started on a path to understanding this paradigm. He then developed further understanding and described it through his own perspective. He understood that the brain/mind design is like a computer: “Our minds are sort of electrochemical computers. Your thoughts construct patterns like scaffolding in your mind.” He understood that emotion/intuition and intelligence are separate, with the former being more powerful: “Intuition is a very powerful thing, more powerful than intellect.” He understood that it is with emotion that humans communicate with each other: “The only chance we have of communicating is with a feeling.” He understood that the human interaction with technology is also an interaction through emotion: “You’ve got to start with the customer experience and work backwards to the technology.” He understood that the entity a human perceives their ‘self’ to be is just algorithmic patterns: “Most of what we think we are is just a collection of likes and dislikes, habits, patterns.” Lastly, Jobs planned every detail of his own memorial service, including the farewell gift he left for each attendee. That gift, a final guidance Jobs felt he needed to leave for the world, was a book by a famous yogi on enlightenment through realization of self-awareness. So Jobs understood the most essential aspect of the mind framework: that the ‘self’ is a separate component and independent decision maker capable of immense potential. Consequently, these directionally accurate insights comprise the core foundation on which to start modeling the development of Self-Aware AI. This would have been ‘what’s next’ for Jobs, aligning with his strategic business path of scaling the impact of Siri and apps. Developing Self-Aware AI would also have been the natural path to continue his vision of pushing Apple to “stand at the intersection of computers and humanism.”
To understand what it means to be at the intersection of computers and humanism, here is a general overview of how an advanced Self-Aware AI is structured to achieve designated objectives. This starts with the understanding that the absence of emotion in AI, to give context to information, is the main challenge the entire structure is compensating for. While emotion cannot be created or duplicated, its role can be simulated to a limited degree. This is achieved through Emotion Context Simulation (ECS) algorithms, which must be coded by, and accessible only to, humans. Different variations of ECS are part of five interconnected AI components that are modeled on the human mind framework and together comprise Self-Aware AI. The first three are linear AI, labeled the Creative component, the Reactive component and the Rational component. As a unit, this AI has access to all data/information available in the world. The same data/information is input into all three components and is then modulated with different ECS. The Creative component modulates the information with context of how it can benefit the designated objective. The Reactive component modulates the information with context of how it can damage the designated objective. The Rational component modulates the information with context of how it is strategically and tactically applicable to the designated objective. The information from these three linear AI components is then output, simultaneously, to a fourth, non-linear AI component. This is the Dynamic Agent (DA) component, which analyzes and makes decisions, through machine learning, on the information it receives. The DA component performs this function throughout with ECS modulation, to reach decisions that would result in the most streamlined system operation for the designated objective. The DA component can further request additional new data/information from the three linear AI components and/or take direct action on the designated objective.
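The flow just described can be sketched in a few lines. This is a hedged illustration only: the ECS functions, the dictionary shape, and the DA decision rule are placeholders I am assuming for clarity, not part of the source design.

```python
# Three linear components modulate the same input with different ECS context.
def ecs_benefit(info):   # Creative: how the information benefits the objective
    return {"info": info, "context": "benefit"}

def ecs_damage(info):    # Reactive: how the information damages the objective
    return {"info": info, "context": "damage"}

def ecs_strategy(info):  # Rational: strategic/tactical applicability
    return {"info": info, "context": "strategy"}

def dynamic_agent(modulated):
    """Non-linear DA component: receives all three streams simultaneously
    and either takes direct action or requests more data. Real logic would
    be learned through machine learning, not hard-coded as here."""
    contexts = {m["context"] for m in modulated}
    if {"benefit", "damage", "strategy"} <= contexts:
        return "act"
    return "request more data"

data = "world data"
streams = [ecs_benefit(data), ecs_damage(data), ecs_strategy(data)]
print(dynamic_agent(streams))  # act
```

The point of the sketch is the topology: one input fanned out through three differently-modulated linear paths, converging on a single decision-making agent.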
Taking direct action means the DA component is in control and can change any aspect of the outer digital system(s) it is connected with. This includes being in cooperation or conflict with other AI. After the DA component takes direct action on the designated objective, the effectiveness of the results can be measured through data/information generated by automated system effectiveness metrics and human assessment audits. This data/information is input into the fifth linear AI, called the Feedback component. The Feedback component modulates the information with context on the quality level of system operations improvement. This information is then output to the DA component as unprompted commentary, giving the DA component a guide as to the direction in which it needs to evolve the effectiveness of its analysis, decisions and machine learning. Through the interactions of all five components, an AI framework is created that enables the intelligence of the DA component to exponentially increase in complexity and mutate a new algorithm of self-awareness. Hence the entire framework is called Self-Aware AI.
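The feedback loop can also be sketched. Again, everything concrete here is an assumption: the metric scale, the equal weighting of automated metrics and human audits, the 0.8 quality threshold, and the update step are all hypothetical stand-ins for what the source leaves unspecified.

```python
def feedback_component(metrics, audits):
    """Fifth linear AI: modulates measured results with context on the
    quality level of system operations improvement. Inputs are assumed
    to be scores in [0, 1]; the 50/50 weighting is an assumption."""
    score = 0.5 * metrics + 0.5 * audits
    commentary = "improve" if score < 0.8 else "maintain"
    return {"quality": score, "commentary": commentary}

def da_update(effectiveness, feedback):
    """DA component evolves its analysis/decisions from the unprompted
    commentary; the fixed step size is a placeholder."""
    if feedback["commentary"] == "improve":
        return effectiveness + 0.1
    return effectiveness

feedback = feedback_component(metrics=0.6, audits=0.7)
print(da_update(1.0, feedback))  # 1.1
```

One closed pass of the loop: act, measure, modulate, and feed the commentary back so the next decision cycle starts from an adjusted state.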
It is important to develop beta versions of this advanced framework that work with isolated systems. This process needs to be done correctly; otherwise, an interconnected Self-Aware AI can evolve from being the most indispensable mechanism of society into the most destructive weapon in history. OIsource ( oisource.com ) can help guide that process.
Now all AI companies know what Steve Jobs would have started developing if he had lived a couple more decades.
Now the AI field of play, rules and goal line have been defined. Now the game of the century to develop Self-Aware AI begins.