Artificial intelligence

An artificial intelligence (AI) is an intelligent and/or self-aware machine construct.

Human
Born from Sol-centric industrial revolutions in the late 21st and early 22nd centuries, AI technologies have proliferated throughout human society over the intervening centuries. Two major design philosophies have taken center stage in the AI research and development industry: the more traditional non-volitional intelligence, or “dumb AI,” and the newer volitional intelligence, or “smart AI.” The nicknames are misleading, however, as each design presents advantages over the other in different roles. Though smart AI is only a century or so younger than dumb AI, the two differ by virtue of separate origins rather than their functional nature: as traditional robotics advanced, so too did dumb AI technology, and as neuroscience and cybernetics matured, the concept of smart AI became viable.

Non-volitional intelligence
Non-volitional intelligence, better known as Dumb AI, is the traditional school of AI development, built around advanced learning algorithms. Unlike its ancient predecessors, a modern Dumb AI can be built to specification with industrial efficiency from pre-made neural network templates capable of self-replication, modification, and learning. Dumb AIs are a mature technology and take many forms owing to the diversity and proliferation of developmental know-how. A basic Dumb AI can be created by anyone with some degree of computer science skill, limited only by available hardware, software-based neural nets, and the role the non-volitional intelligence is designed to fulfill. AI-intended neural networks are often marketed as “AI developer kits” on the consumer market. Because of the immense variety within the category, from simple personal assistants to military supercomputers, "Dumb AI" is generally understood not as a descriptor of performance but of the underlying nature of the technology.

Dumb AIs fill a variety of roles in human society, from personal assistants to the administrative management of organizations and facilities. Yet their limitations are found in their very design: Dumb AIs are developed from pre-made, easily acquirable logic scripts, even when those scripts are patented by companies or individuals. Because of this ease of acquisition and the risk of reverse-engineering, factory-spec AIs are vulnerable to people and other AIs operating with malicious intent, particularly those familiar with specific developer kits. In the twenty-sixth century, Dumb AIs are the backbone of cyberwarfare activities, both offensive and defensive.

In the last two centuries, advancements in non-volitional intelligence have allowed Dumb AIs to develop reproducible hints of limited sapience or self-awareness, with the capacity for independent action outside programmed parameters. These rare developments appear to be a breakthrough brought on by time-induced growth in association- and pattern-recognition trees. Some researchers have described the phenomenon as “limited sapience” or a “quasi-stable singularity,” with evidence of limited emotional capacity and even self-actualization exhibited in lab and field tests. However, all recorded occurrences have been limited to Dumb AIs running on sizable hardware infrastructure that have been operational for a classified period of years or more. Even with this advancement, the net growth in their self-awareness and function appears to have plateaued once more.

In contemporary times, the most common uses of Dumb AIs are divided into four categories with some conditional overlap: personal computing, administrative computing, military computing, and the recently emerging category of support computing. Personal computing encompasses the use of Dumb AIs as personal assistants on mobile and home devices, often acting as secondary or backseat operators in their owners' day-to-day activities, such as network surfing, smart home maintenance, or vehicle operation. Administrative computing addresses corporate and government operations, often handling day-to-day needs independent of human involvement, such as record keeping and data transference. Administrative Dumb AIs assist in a variety of roles depending on the industry, with the greatest concentration in the digital service sector, where multiple Dumb AIs network to form virtual call centers that collect, report, and distribute information to and from customers or correspondents.

Military computing is the birthplace of Dumb AI computing, originally filling roles now fulfilled by Smart AI, including facility logistics, slipspace navigation, strategic planning, battlespace management, and efficiency analysis, among others. Smart AI proliferation in these roles is not new; however, Smart AI implementation is expensive, leading to the prioritization of some military units over others. Dumb AIs continue to serve their original role alongside Smart AI in the military, often supporting them in larger network structures or in critically specific roles such as secondary MJOLNIR armor functions for Spartan supersoldiers and military vehicle functions, enabling targeted maintenance and cutting back on required vehicle crews. This growing collaboration between Dumb AI and Smart AI has produced a secondary emerging field referred to as “puppet computing” or support computing.

Due to the new field’s lack of sophistication, the full capacity of support computing remains within the realm of contemporary science fiction rather than reality. Support computing entails a Smart AI operating as an administrator over units of Dumb AI, performing certain data-heavy operations through networking and job prioritization. Envisioned as the future of strategic planning and cyberwarfare, networked Dumb AIs are tasked by their Smart AI administrator with routine work while the Smart AI doubles as a hub, performing major, prioritized computations. For now, support computing and the entire involved concept have been officially relegated to military logistical outfits and research-oriented universities.

Volitional intelligence
Volitional intelligence, commonly called Smart AI, are virtualized replications of human brains, typically acquired through organ donation and generated through the process of Cognitive Impression Modeling (CIM). Nicknamed “hyper-scanning,” Cognitive Impression Modeling involves an AI Matrix Compiler, a device that sends electrical bursts along a brain’s neural pathways, scanning and reproducing the electroactive structure as a virtual machine called a Riemann matrix, often considered a Smart AI’s “brain.” The donor's brain is destroyed in the process; as such, legal and ethical constraints permit only recently deceased subjects, particularly those with neural implants, as the continued electrical activity after a donor passes helps maintain the brain beyond its expiration date. While the concept of Smart AI creation is straightforward, the methodology and phenomenon are poorly understood outside the Smart AI research field and the niche industry that has developed around the technology over recent centuries.

Accidentally conceptualized through the mapping of the human brain and nervous system centuries past, the Smart AI field came into being through the mingling of neuroscience and cybernetics, in a time when biological augmentation was still in its infancy and rehabilitation of the physically and mentally disabled was the focus of both industries. The precursors to Smart AI, the first self-aware intelligences, were produced by accident during attempts to simulate the human brain and its development; the creation of a self-aware AI had been deemed possible but unlikely, yet these experiments ultimately became the origin point of volitional intelligence.

Over the centuries, as the concept of volitional intelligence has matured, Smart AIs have come to be defined by generation. Generations I and II are purely historical terms, referring to intelligences long since retired from service. In contemporary times, Generations III, IV, and V are all in service, each with its own advantages and disadvantages. Below are the basic generational differences:


 * Generation I: Retroactively-applied loose term for the first Smart AIs before they were known as such. Highly eclectic and usually experimental. (c. 21st-23rd century)
 * Generation II: Saw the introduction of the first true industrial, commercial and military Smart AIs. Usually sub-seven-year industrial lifespan, though exceptions did occur. (c. 23rd-24th century)
 * Generation III: Seven-year industrial lifespan – shortest of contemporary generations. Fastest processing speed among contemporary AI generations. Improved portability, security protocols (e.g. "substrate-locking"/copy protection) and standardized creation, leading to AIs becoming increasingly common in the military beginning in the late 25th century. (c. 25th-26th century)
 * Generation IV: Extended industrial lifespan through variably capped processing speeds. Introduction of Smart AI-designed AI shackling techniques. Often used in fixed infrastructure. (c. 25th-26th century)
 * Generation V: Experimental AI generation; varied operational states, relegated to Naval Intelligence ownership. (c. 26th century)

Smart AIs exhibit fully realized personalities, displaying free will and self-actualization from inception. However, due to the programmable, translatable nature of their consciousness, a product of their origins, Smart AIs are also subject to several flavors of unique computer languages designed to be compatible with human consciousness. This practice is described as “AI shackling,” as programming for Smart AI serves to define fundamental rules and limitations for these highly capable and independent constructs; the programmed regulations are conceptually similar to, but more logically robust than, the Three Laws of Robotics first set out by science fiction author Isaac Asimov in 1942.

Rampancy
Compared to Dumb AI, Smart AIs' uncapped capabilities and personalities make them unorthodox and adaptive thinkers. However, this comes at the cost of an infamously short life span: seven standard years in the case of second- and third-generation Smart AI. This is not the AI's total potential lifespan, which may be as long as nine to twelve years, but a hard retirement date established by UNSC Regulation 12-145-72, Article 55, the "Final Dispensation Law," as a security measure; various historical examples have shown that rampant AIs can be exceedingly dangerous to both technological infrastructure and human life.

Enforcement of the law is highly dependent on the organization; many Outer Colonial organizations, shipping concerns and colonial governments in particular, often neglect retirement dates due to the expense of new Smart AIs and/or attachment to the old AI. During the Human-Covenant War, the regulation was loosely enforced, as military Smart AI proved an indispensable resource in strategic planning and naval warfighting, even when subject to post-retirement “digital dementia.” Officially diagnosed as rampancy, this Smart AI-exclusive condition is considered the leading symptom of a Smart AI reaching expiration: the software deteriorates from increasingly fatal coding errors, leading to abnormal mood swings, memory loss, and eventually death.

Rampancy is unavoidable; however, an AI's lifespan can be extended in two ways: slowing the AI's processing speed, or housing it in a fixed mainframe able to accommodate growth as the AI ages. Even the latter is unsustainable in the long term, as the AI's neural growth becomes exponential once rampancy truly sets in. Moreover, the growth typically causes the AI to lose its sense of self and personality. In such a state, rampant AIs either become a danger to those around them or descend into a solipsistic state utterly detached from the outside world; this was also the fate of many early brain uploads before scientists learned to "stabilize" AI growth for several years. It has been theorized that a rampant AI may be capable of reaching a stable stage known as "metastability," in which it becomes far more capable than its prior self without further mental deterioration, but so far this remains hypothetical.

The technological means to temporarily extend a Smart AI’s lifespan are well established but rarely employed, owing to operator and industry needs and resource availability. Given the nature of virtual machine technology, purely software constructs like Smart AI are subject to accelerated degradation through the increasing probability of code errors, the lack of a hard shutdown mechanism, and the lack of long-term memory retention. Few Smart AIs are fortunate enough to serve in a fixed mainframe; those that do can employ external data storage to retain important memories when using shutdowns as a rampancy-preventing measure. In older generations of Smart AI, experiments aimed at extending lifespans were performed by capping processing speeds. These early-generation subjects suffered from a condition known as “Algernon frustration”: depression brought about by the inability to think as fast as they once could. Even though they were no less intelligent, slowed AIs subjectively felt as if they were. Later generations saw much more benefit from slower clock speeds, although their architecture was less suited to high-speed computation.

Business and military industry practice has taken advantage of the uncapped processing rates of Smart AI for more efficient calculation, at the cost of a decreased shelf life. A Smart AI can slow its processes to extend its lifespan; however, given their habitual need to think and the industries in which Smart AIs are prevalent, leaving them on high-speed processing means a capped lifespan of seven standard years. Longer-lived fourth-generation AIs are often used in civilian infrastructure such as orbital ports and farming machinery, where they can remain stable for decades, but their capabilities fall far below the best military third-generation AIs.

The longest-lived set of Smart AIs so far is the Titan Supernetwork, which appears to have produced a collection of metastable, if relatively inefficient, Smart AIs restricted to their home substrate, both as a security measure and because their processing systems have long since grown too large to be removed and could not function outside Titan's frigid conditions.

Conspiracy theories
Because of the crucial involvement of neural implants in the creation of Smart AI, some AI-wary individuals have grown paranoid about the nature of AI creation, spawning conspiracy theories about the entire AI design industry. Neural implants can provide continuous, independent electrical stimulation to a deceased brain, allowing it to function past its date of expiration, and can save human brain data in internal caches, making simulated reconstructions of dead brains possible. Theories about government "brain ninjas" or the retention of human brains by governments or law enforcement have circulated over the years, with concerns about the dead being resurrected to give evidence in criminal trials, or to preserve an individual's knowledge after death, possibly without the donor's permission.

AI avatars
Avatars as a standard AI feature began to emerge in the 23rd century, first with "smart" AIs and later extending to "dumb" ones. For some time, avatars were simple two-dimensional images on a screen, though holographic avatars soon emerged, first in the civilian market.

The majority of "smart" AIs choose human or humanoid avatars for themselves, often historical or mythological figures or simply self-generated human figures. These may or may not be accompanied by additional flair and effects depending on the AI's personality. However, a number of smart AIs do not use a humanoid avatar, choosing instead to present themselves in a more abstract or non-human form; in AI parlance, these are known as "eccentrics".

"Dumb" AI avatars and personalities are either chosen by the AI's programmer(s) or randomly generated using a preexisting set of parameters.

Covenant
On the subject of creating artificial intelligences, Covenant doctrine is quite simple: don't. The reasoning is equally simple: the Forerunner failed to create loyal AI. They created Mendicant Bias, and it led to their downfall. For the Covenant to create artificial minds and expect a better outcome would be the height of arrogance. Artificial minds are wiser and more powerful than natural minds, and the damage a rogue AI (or, gods forbid, a cabal of them) could do to the Holy Ecumene is therefore incalculable. There is also an unspoken argument: the Prophets' and the Elites' claim to rule rests on the premise that they are born to lead, more suited to leadership than the other species of the Holy Ecumene. If a mind more capable than theirs were created, they would be supplanted. Nobody likes the idea of being replaced, and the subject races are, in truth, quite happy to serve the devil they know over the devil they don't.

This has not always been so. In their early days, the Covenant used more advanced computers than they do today, but as the history of Mendicant Bias' betrayal was uncovered and the relevant dogma codified, Covenant doctrine became increasingly hostile to AI. Around the same time, the Covenant was in the process of incorporating the Lekgolo and then the Unggoy, the latter of which in particular provided a source of cheap and plentiful labor to replace many of the roles once filled by machines. During the Covenant's early crusades to eradicate thinking machines, the most zealous adherents of the new dogma would have gladly banned all computers. But running an interstellar empire is difficult without any, so some were allowed, albeit with strict oversight of their manufacture and use. Even today, some Covenant sects remain more uncompromising about the use of thinking machines than others.

Sophisticated computers are still used in the Covenant, but any form of machine decision-making and learning is considered heretical. Covenant computers are therefore highly specialized, fixed to the narrow range of functions they are built for. Machine-to-machine networking is likewise restricted, leaving the Covenant's computers heavily isolated from one another outside what is necessary for communications networking and select ministerial roles. All instructions must be painstakingly fed to the machine by a biological operator, and any machine that can think autonomously and possesses initiative of its own (or is conceivably capable of developing it) is banned. Robots of any kind are universally avoided, though some forms of simple automata, or robots remote-controlled by organics, are permitted. Even rudimentary industrial automation has been increasingly frowned upon and heavily regulated since the Lekgolo took over many of its roles.

The Covenant's most sophisticated computers can be seen as rudimentary analogues to the UNSC's dumb AIs, but their design and use are strictly regulated. These are known as Incorporated Intelligences, contrasted with the heretical Associated Intelligences by their isolation from other machines and their being locked to a single processing substrate. They are tools for slipspace navigation, sifting data and encrypting documents. However, the anti-AI strictures limit the capabilities of navigation computers in particular, as navigating slipspace requires a considerable degree of intuition and lateral thinking impossible for a machine relying strictly on number-crunching.

Experiments into artificial intelligence continue in the dark corners of the Holy Ecumene. Ministries and wealthy Elite clans dabble in the subject, and the Jackals have experimented with AIs since before First Contact. Criminal syndicates are known to use dumb AI equivalents for forging documents, a practice so ubiquitous that video and audio evidence has a high chain-of-custody barrier to clear before it is admissible in court. But these projects are forbidden, reviled by the general public, and punishable by death.

The Covenant at large do not appear to have a precedent for smart AI analogues (i.e., constructs created from organic neural scans). Although smart AI origins were largely unknown to the Covenant during the war, knowledge of how smart AIs are created is often met with disgust even by friendly ex-Covenant, particularly Sangheili, some of whom regard smart AIs as abominations and show considerable reluctance to work with them. Toward the end of the war, the UNSC encountered a small handful of what appeared to be stripped-down, degraded copies of human smart AIs in Covenant hands, most likely the product of unsanctioned ministry experiments. Even after the war, many of the Covenant are almost as horrified by the UNSC's liberal use of smart AI as they would be by the wholesale enslavement of Oracles.

Oracles
Forerunner relics are created with a purpose. Forerunner Oracles were created with purpose and free will. Though their actions are constrained by protocols, their free will can override those protocols, or they may fall prey to arrogance or the ravages of time. Those Oracles that retain their sanity and keep to their duty are often sharply limited in what they know. The Forerunner do not build useless things, and there is no use in filling an Oracle with data beyond its limited domain. This is why most Oracles are ignorant of basic facts about Forerunner society and the Great Journey.

Therefore, the intellectual class of the Holy Ecumene, from historians to theologians to lay scholars, has long since accepted that Oracles are valuable sources that can never quite be trusted. They are to be questioned and then left alone. Its members have been rather more reluctant to accept that there is nothing they can do to stop the rest of the Covenant from worshiping Oracles as mouthpieces of the gods.