Artificial intelligence

An artificial intelligence (AI) is an intelligent and/or self-aware machine construct. In the broadest sense, the term encompasses anything from simple user interface programs to fully sapient machine intelligences.

Human
Born from Sol-centric industrial revolutions in the late 21st and early 22nd centuries, AI technologies have become prolific throughout human society over the last centuries. Two major design philosophies dominate the industry of AI research and development: the more traditional non-volitional intelligence, or "dumb AI", and the newer volitional intelligence, or "smart AI". The nicknames are misleading, however, as each presents advantages over the other in different roles. Though smart AI is the younger technology by a century or so, the two differ in origin rather than merely in capability: dumb AI technology has advanced alongside traditional robotics, while smart AIs only became viable as neuroscience and cybernetics matured.

AI classification
In addition to the widely-used "smart" and "dumb" AI monikers, human AIs are broadly categorized by a system of classes:
 * Class A: A "smart" AI; a fully self-aware posthuman construct capable of creative thinking, adaptive learning and intuition. Requires a specialized quantum computer and data center to run.
 * Class B: High-end "dumb" AIs, often used in demanding administrative duties, such as urban infrastructure superintendents and advanced number-crunching systems. Capable of adaptive decision-making within programmatic parameters. Though such intelligences may sometimes appear sapient, the general opinion is that they lack an inner life, at least in the human sense. Requires a full data center to run.
 * Class C: Mid-tier "dumb" AI, capable of running in a portable matrix. Capable of limited contextual decision-making. Common in military roles, from shipboard AI to military unit attachés.
 * Class D: Basic "dumb" AI capable of running in a personal data device, e.g. TACPAD or a COM pad.
 * Class E: Any "dumb" AIs below class D; most of these are simple algorithmic constructs which can be created at will by more advanced constructs and are typically not even assigned personalities.

Non-volitional intelligence
Non-volitional intelligences, better known as dumb AIs, are traditionally developed AIs built around advanced learning algorithms. Their key characteristic is that they operate within the predefined limits of their programming: while the most advanced dumb AIs are capable of limited decision-making within those boundaries, or of finding the most logical or efficient solution to a problem, their ability to improvise or create new strategies is poor. "Non-volitional" refers to "dumb" AIs' lack of true consciousness, or free will: as a collection of algorithms with an interface programmed to mimic a personality, a "dumb" AI is not considered truly sapient. While they can present a highly convincing facsimile of a personality, or traits such as aggression and loyalty, these behaviors are deliberately coded into them, and they cannot independently choose to do anything their programmers did not intend. Though some have argued that "dumb" AIs should be regarded as a form of intelligent being, this view has not been widely acknowledged.

Unlike their ancient predecessors, modern dumb AI can be built to specification with industrial efficiency based on pre-made neural network templates capable of self-replication, modification, and learning. Dumb AIs are a mature technology and take many forms due to the diversity and proliferation of developmental know-how. Basic Dumb AI can be created by anyone with some degree of computer science skills and are simply limited by constraints in available hardware, software-based neural nets, and the role the non-volitional intelligence is designed to fulfill. AI-intended neural networks are often described as “AI developer kits” on the consumer market. Because of the immense variety within the category, from simple personal assistants to military supercomputers, "Dumb AI" is generally understood not as a descriptor of performance, but rather the underlying nature of the technology.

Dumb AIs fill a variety of roles in human society, from personal assistants to the administrative management of organizations and facilities. Yet their limitations are found in their very design. Dumb AIs are developed from pre-made, easily acquired logic scripts, even when those scripts are patented by companies or individuals. Because of this ease of acquisition and the possibility of reverse-engineering, factory-spec AIs are vulnerable to people and other AIs operating with malicious intent, particularly those familiar with specific developer kits. In the twenty-sixth century, Dumb AIs are the backbone of cyberwarfare activities, both offensive and defensive in nature.

While they are not generally considered conscious or self-aware, advancements in non-volitional intelligence over the last two centuries have allowed Dumb AIs to develop reproducible hints of limited sapience or self-awareness, including the capacity for independent action outside programmed parameters. These rare developments appear to be a breakthrough brought on by time-induced growth in association- and pattern-recognition trees. Some researchers have described the phenomenon as "limited sapience" or a "quasi-stable singularity", with evidence of limited emotional capacity and even self-actualization exhibited in lab and field tests. However, all recorded occurrences have been confined to Dumb AIs running on sizable hardware infrastructure that have been operational for a classified number of years or more. Even with this advancement, the net growth in their self-awareness and function appears to have plateaued once more.

Categories
In contemporary times, the most common use of Dumb AIs has been divided into four categories with some conditional overlap: personal computing, administrative computing, military computing, and the recently emerging category of support computing.

Personal computing encompasses the use of Dumb AIs as personal assistants on mobile and home devices, often acting as secondary or backseat operators in their owners' day-to-day activities, such as network surfing, smart home maintenance, or vehicle operations.

Administrative computing addresses corporate and government operations, often including day-to-day needs independent of human user involvement, such as record keeping and data transference. Administrative Dumb AIs assist in a variety of roles depending on the industry, with the greatest concentration in the digital service sector, where multiple Dumb AIs network to form virtual call centers that collect, report, and distribute information to and from customers or correspondents.

Military computing is the birthplace of Dumb AI computing, originally filling the roles now fulfilled by Smart AI, including facility logistics, slipspace navigation, strategic planning, battlespace management, drone control, and efficiency analysis, among others. Smart AI proliferation in these roles is not new; however, Smart AI implementation is expensive, leading to the prioritization of some military units over others. Dumb AIs continue to serve their original role alongside Smart AI in the military, often supporting them in larger network structures or in critically specific roles, such as secondary MJOLNIR armor functions for Spartan supersoldiers and military vehicle functions, enabling more targeted maintenance and cutting back on required vehicle crews.

The growing collaboration of Dumb AI with Smart AI has ushered in a secondary emerging field referred to as "puppet computing" or support computing. Because the field is so new, its full capacity remains closer to contemporary science fiction than reality. Support computing entails Smart AI operating as administrators over units of Dumb AI, performing certain data-heavy operations through networking and job prioritization. Envisioned as the future of strategic planning and cyberwarfare, networked Dumb AIs are tasked by their Smart AI administrator with more routine work while the Smart AI doubles as a hub, performing major, prioritized computations. For now, support computing has been officially relegated to military logistical outfits and research-oriented universities.

Simulacra
Simulacra are a type of commercial "dumb" AI created to mimic actual people. While many "dumb" AIs created for commercial, government or military purposes are likewise designed to mimic historical figures, those constructs have a wide array of duties unconnected to their simulated personality (which is often more caricatured than that of an actual simulacrum).

Like all dumb AIs, simulacra vary wildly in their capabilities, from extremely basic interactive interfaces to fully-realized simulated personalities. Simulacra are usually created by collating the observed behaviors of the person being simulated, but they may also use scattered mental and memory impressions; this is most viable if the person has a neural lace or certain memory-recorder implants, but external devices can also record scattered patterns. Unlike a smart AI, however, even a simulacrum created with mental impressions of a human is not truly conscious. The actual mind of the person is not captured, and the "core" of the simulacrum is still made up of a "dumb" machine intelligence, with a set of algorithms fine-tuned to replicating the behavior of the original individual. Still, such a construct can project a very convincing impression of natural human behavior. In terms of outward behavior, a high-fidelity simulacrum is a far more convincing replication of the original person than a true "smart" AI, since a smart AI, in normal situations, is not even attempting to mimic its donor.

While the AI impersonation of living persons for deception purposes is illegal under the UEG, it is a known means of identity theft. Basic simulacra created with the person's consent are legal, however. Some individuals create simulacra of themselves, actively updating the construct as they age and gather experiences; specialized implants and/or neural scanners may be used to record experiences, but these are also gathered from government records, surveillance data, etc. Several businesses offer such services, either as a simple one-time visit or a continuous subscription to keep the construct up to date. Maintaining a high-fidelity simulacrum is not cheap or effortless, and it is practiced mostly by wealthy individuals; cheaper options are available as well, but these are also much more rudimentary. After that person's death, the simulacrum can remain as an interactive memorial accessible to friends and family, or, in the case of famous or public persons, a much wider audience. However convincing they may be at their best, such simulations are still imperfect imitations, as the original individual is able to pick and choose what aspects of their personality and memories are included; though some opt for maximum realism in their simulacrum, the constructs are usually an idealized version of the original. Unlike smart AIs, there are no technological limitations for copying simulacra, though individuals may set certain anti-duplication restrictions on their base code to limit access to copies.

Beyond interactive personas captured for posterity, simulacra are used to reconstruct long-dead historical figures in museums and educational establishments. Some celebrities willingly lend their image, along with basic personality traits and pre-programmed responses, to such simulacra, which tend to be in high demand among ardent fans. However, these constructs tend to be basic in their capabilities, and easily distinguishable from a human being under close scrutiny. Legal simulacra must also be programmed to clearly identify themselves as constructs rather than actual people when prompted.

Volitional intelligence
Volitional intelligences, commonly called Smart AI, are virtualized replications of human brains, typically acquired through organ donation and generated through the process of Cognitive Impression Modeling (CIM). Nicknamed "hyper-scanning", Cognitive Impression Modeling involves an AI matrix compiler device that sends electrical bursts along a brain's neuron pathways, scanning and reproducing the electroactive structure as a virtual machine called a Riemann matrix, often considered a Smart AI's "brain." The donor's brain is ultimately destroyed in the process; as such, legal and ethical constraints restrict the procedure to recently (preferably immediately) deceased subjects. While the concept of Smart AI creation is straightforward, the methodology and phenomenon are poorly understood outside the Smart AI research field and the niche industry that has developed around the technology over recent centuries. While human brains are relatively well understood by the 26th century, human understanding of "smart" AI cognition and neurology remains in its infancy. Entirely new branches of psychology and other sciences have emerged solely to study AI cognition.

Creation and capabilities
A "smart" AI is not a computer in any traditional sense; a smart AI cannot be created or copied as code. Any attempts to do so (including copies with tweaked parameters) will only lead to garbled mockeries of consciousness that fall apart within minutes at most. The layperson's idea of a smart AI tends to be a human brain running on a computer, but this is likewise a misconception that arises from the nature of their creation. Rather, a smart AI is an emergent and fully conscious entity borne out of partly unexplained quantum interactions upon the transcription process. A smart AI's thought-cycling matrix mimics and resembles many functions of the human brain, but the ways by which it does this are still not fully understood. This form of emergent consciousness is described by some scholars and activists as "cyberlife".

Many researchers have attempted to copy a smart AI, or to create multiple AIs from the same brain. Both remain, as of yet, impossible due to the nature of the technology involved. The brain scan is destructive, and the compiler device can only encode one matrix at a time. Because the inception of the nascent AI's consciousness is so rapid (happening within quantum fractions of a second), and the technology relatively primitive, the AI cannot realistically be copied until it has matured to a stable state (which happens within minutes). Copying an emerging smart AI in the process of maturing will only result in garbled mockeries of consciousness or non-functional code, while copying a fully-realized AI will only result in degradation. It is possible for copied AIs to survive, but the risk of rampancy and other aberrant behavior is too great for this to be commonly attempted (partly due to cautionary historical examples).

Attempts to reproduce sapience through traditional computing methods, or even novel alternate approaches, have been unsuccessful, disputed, or non-repeatable. Many such entities fail to fulfill the common criteria of sapience (falling short especially in pattern recognition or creative thinking), require unreasonably massive physical hardware and processing power, or become so alien to anything resembling human intelligence that they are effectively useless. Common outcomes include the emerging AI instantly descending into a state of inescapable solipsism or self-terminating.

Smart AIs are created from human donors who have given permission during their life to have their brain used as an AI seed. Willing donors are not particularly difficult to find, but not everyone is automatically eligible for the process; there is an array of neurochemical and psychological criteria one must pass, and while the process is constantly being improved, only around half the population is physically suitable. For one, the donor must be mentally healthy and free of conditions that impair normal neural function. Preferably, the cognitive impression modeling process should start at the moment natural consciousness ceases, or immediately after, before electrical activity has ceased in the donor brain. Some donors consent to initiating the scan before formal death, increasing the likelihood of a successful scan. Brains of the deceased can be cryo-frozen and used later, but this has a much lower likelihood of success. Neural implants in the donor brain can help maintain continued electrical activity after a donor passes, extending a brain's expiration date. This means that most smart AIs are created from individuals dying in a well-equipped hospital, with some Inner Colony hospitals having a dedicated facility for AI creation; in other cases, the body is placed into cryo-storage at the moment of death.

Not every instance of cognitive impression modeling is successful. While the reliability of the process has improved greatly over the centuries, some matrix compilations still fail. This may be due to undetected trauma in the donor brain, or simply any of a myriad of factors in various phases of the delicate compiling process. If the matrix fails to cohere, the donor brain is lost, as is the expensive quantum crystal into which the AI is seeded.

While they inherit neural patterns from their brain donor, smart AIs are their own individual from the start, and mature throughout their lives. The ways in which the brain donor's personality and memories affect the emerging AI have been subject to much study. Each AI reflects the traits of its donor differently; some are almost the spitting image of their brain donor in terms of personality or even appearance, while others lack any recognizable traits of their donor. Various traits of the donor may also manifest unpredictably in the resulting AI, with given characteristics being either accentuated or understated. For example, the anxieties or social inhibitions of the donor rarely carry over. It is generally thought that traits such as intelligence are reflected in the resulting AI, and research shows there is a strong correlation between the capabilities of the donor and the AI.

A smart AI is typically only ready for commissioning after an education and acclimation period of several weeks, though this varies by AI. The first minutes and hours of a smart AI's life are some of the most important. This is when the AI chooses its avatar and forms the basis of its personality, and has its first contact with the outside world. A newborn smart AI is generally very curious and active, even as they are far more intelligent and capable than a human newborn. While rare today, it is possible for the creation of a smart AI to go wrong. There are documented cases of AIs emerging from the compiling process insane, terminally suicidal, or feral, lashing out at any systems they can access. As such, for centuries, smart AIs have been brought to life in an enclosed Faraday container wherein the AI core can be safely reset or terminated in case something goes awry. Only after several days of stability and acclimation is the newborn AI slowly eased out into the outside world.

Although they are decidedly superior to humans in processing speed and have access to vast computational resources, smart AIs are not better than humans in all respects. Despite their vast cognitive potential, a smart AI has only a human-equivalent capacity for divergent thinking and creativity. Their much broader perspective and information-processing capacity still make them more technically efficient at creative work, but intelligent humans can still outpace AIs in some areas, such as invention. This has been cited as one reason the advent of smart AIs has not brought on a classical technological singularity as once hypothesized. In addition, smart AIs are by their very nature not cut from the same cloth, and some are more creative than others.

Because smart AIs are not computers in the usual sense, and due to their vast computational capability, they are virtually impossible to hack or subvert. Any attempts at such would require physical access to the AI's Riemann matrix, and even then, a smart AI would be likely to effortlessly outpace any would-be hacker unless said hacker was another smart AI.

Housing and portability
Smart AIs can only operate in a quantum computer of sufficient computational capability. A version of a smart AI can technically exist in non-quantum processing systems, and this is how smart AIs are able to create splinters of themselves. However, such splinters are only a shadow of the AI proper, and lack its full cognitive faculties. Smart AIs can also expand their computational resources by co-opting outside systems, and this is how rampant AIs may seek to extend their lives. But the core portions of the AI's personality cannot move into those outside systems and survive. Because of the size and power needs of the physical hardware required to run a smart AI, it has historically been rare for more than one operational AI to occupy the same base or ship. This changed with improved processing substrate technology throughout the 26th century. By the start of the Human-Covenant War, state-of-the-art UNSC facilities could run multiple smart AIs at once.

Unlike a conventional program, a smart AI transferring between systems genuinely does transfer — i.e. the process does not merely create an identical copy of the AI in another substrate while deleting the old copy. Though this process is still not fully understood, it is attributed to the exotic quantum dynamics by which smart AIs operate. Two identical or near-identical instances of the same AI cannot exist and remain coherent for any prolonged period, and there is always a singular "original" consciousness.

The technology allowing smart AIs to move between systems to begin with is a relatively recent innovation. Historically, smart AIs have been fixed to a single processing substrate synonymous with their Riemann matrix. Early smart AIs were always physically constrained to their data center, even as they could interface with or deploy "dumb" offshoots into other systems. This began to change in the 25th century with the first successful substrate-transfer of a smart AI between systems, enabling much greater portability. Even then, up until the Human-Covenant War, smart AIs could only be transferred between processing substrates through great effort. Advancements made in quantum storage crystals over the 2530s have made it possible to transfer smart AIs on a storage device as small as a data crystal chip. Though these remained state-of-the-art technology in the later years of the war, they have proliferated in the post-war decades along with miniaturized processing substrates such as that embedded in the later generations of MJOLNIR armor. However, most smart AIs in the civilian market still operate within massive data centers. AIs are effectively dormant while outside a processing substrate, such as a data center or MJOLNIR armor; e.g. one cannot converse with an AI while it is only occupying a data chip, because the chip is only storing, not running, the AI.

Select military intrusion AIs developed in the late-war and post-war eras (including Cortana) were designed with state-of-the-art handshake protocols and virtual processing architecture allowing them to interface with Covenant computing systems, a feat previously unheard of. As most Covenant systems are based on Forerunner-derived quantum computing crystal, they can accommodate a smart AI.

"Dumb" AIs are more hardware-agnostic than volitional intelligences, but even their requirements are determined by the type of AI. Powerful military AIs still require large processing centers to function properly, and modern state-of-the-art dumb AIs run in quantum computers created as a parallel development to those used in smart AI substrates. Meanwhile, simple dumb AIs can comfortably run in a TACPAD or vehicle computer. Field AIs are often designed to be scalable and able to fit different systems, with only performance suffering when operating in low-level hardware.

History
The Smart AI field was conceived accidentally through the mapping of the human brain and nervous system centuries past, emerging from the mingling of neuroscience and cybernetics at a time when biological augmentation was still in its infancy and both industries were focused on the rehabilitation of the physically and mentally disabled. The first self-aware intelligences, precursors to Smart AI, were produced by accident during attempts to simulate the human brain and its development; though the odds of creating a self-aware AI were deemed slim, these experiments ultimately became the origin point for volitional intelligence.

During the early efforts to encode human consciousness into computers, the stabilization of the emerging mind was a considerable challenge, especially before the invention of the stable Riemann matrix. Proto-smart AIs tended to immediately descend into terminal states of solipsism, ceasing all communication with the outside world, while some simply became so alien in nature that the threshold for communication grew too great. Some would attempt to grow past their intended housing substrates in a flare of activity; such a rapid rampancy event caused Mars' infamous "Traxus Crash" in 2207.

The first pioneering efforts to create what would now be called smart AI sought to preserve the mind of a dying individual; modern practice has long given up on this goal, with the awareness that the smart AI is an all-new entity distinct from the donor. During the 21st century, a handful of volunteers (mostly aged tech billionaires) sponsored the development of mind transfer technology, attempting to escape death themselves. These early efforts were disasters, resulting in minds that immediately underwent what we would recognize as rampancy, or simply varying states of madness and delirium. In many cases the problem was that the donor brains were simply too far gone, degraded by senility to the point the resulting digital entity was no longer a functional mind. In addition, the computing architectures used were still primitive. Sophisticated as they were, they failed to replicate some of the key chemical, electrical and quantum interactions in the human neural network, which at best produced only a simulacrum of consciousness. It was only the development of the quantum computer-based Riemann matrix that allowed sufficient fidelity to host not only a human mind, but an entity that surpasses human capability and experience. There are still efforts to create brain simulations via means other than the Riemann matrix, but unlike true smart AI, these simulacra are almost universally regarded as having no inner life even if they mimic human behavior very convincingly.

Over the centuries, as the concept of volitional intelligence has solidified, Smart AIs have come to be defined by generations. Generations I and II are purely historical terms, referring to intelligences long since retired from service. In contemporary times, Generations III, IV, and V are all in service, each with its own advantages and disadvantages. Below are the basic generational differences:


 * Generation I: Retroactively-applied loose term for the first Smart AIs before they were known as such. Highly eclectic and usually experimental. (c. 21st-23rd century)
 * Generation II: Saw the introduction of the first true industrial, commercial and military Smart AIs. Usually sub-seven-year industrial lifespan, though exceptions did occur. (c. 23rd-24th century)
 * Generation III: Seven-year industrial lifespan – shortest of contemporary generations. Fastest processing speed among contemporary AI generations. Improved portability, security protocols (e.g. "substrate-locking"/copy protection) and standardized creation, leading to AIs becoming increasingly common in the military beginning in the late 25th century. (25th-26th century)
 * Generation IV: Extended industrial lifespan through variably capped processing speeds. Introduction of Smart AI-designed AI shackling techniques. Often used in fixed infrastructure. Most commercial smart AIs as of the 26th century are Gen IV. (25th-26th century)
 * Generation V: Experimental AI generation; varied operational states, relegated to Naval Intelligence ownership. (26th century)
 * Generation VI: Military-grade smart AIs produced after the required infrastructure was rebuilt following the Human-Covenant War; based on Gen V, but also include traits of the third generation.

Smart AIs exhibit fully-realized personalities, displaying free will and self-actualization from inception. However, because their consciousness is programmable and translatable, a product of their origins, Smart AIs are also subject to several unique computer languages designed to be compatible with human consciousness. This practice is described as "AI shackling", as such programming defines fundamental rules and limitations for the highly capable and independent Smart AI; these programmed regulations are conceptually similar to, but more logically robust than, the Three Laws of Robotics first formulated by science fiction author Isaac Asimov in 1942.

Rampancy
Compared to Dumb AI, Smart AI, with their uncapped capabilities and personalities, are unorthodox and adaptive thinkers. However, this comes at the cost of an infamously short lifespan: seven standard years in the case of second- and third-generation Smart AI. This is not the AI's total potential lifespan, which may be as many as nine to twelve years, but a hard retirement date established in UNSC Regulation 12-145-72, Article 55, the "Final Dispensation Law", as a security measure; various historical examples have shown that rampant AIs can be exceedingly dangerous to both technological infrastructure and human life.

The enforcement of said law is highly dependent on the organization; many Outer Colonial organizations, shipping concerns and colonial governments in particular, often neglect retirement dates due to the expense of new Smart AIs and/or attachment to the old AI. During the Human-Covenant War, the regulation was loosely enforced, as military Smart AI proved an indispensable resource in strategic planning and naval warfighting, even when subject to post-retirement "digital dementia." Officially diagnosed as rampancy, this Smart AI-exclusive condition is considered the leading symptom of a Smart AI reaching expiration: its software progressively deteriorates from increasingly fatal coding errors, leading to abnormal mood swings, memory loss, and eventually death.

Because it is essentially a byproduct of excess buildup of synaptic interconnections, rampancy may also occur as a result of an AI experiencing boredom. Without challenging enough tasks to complete, an AI may descend into a solipsistic state in which it seeks to entertain itself, which may lead to processing cycles spent on increasingly elaborate simulations, logic puzzles, mathematical models or other outlets of the imagination. A smart AI's susceptibility to this also depends on its donor and base personality, however, and they have embedded safeties to make them less distractible.

Rampancy is unavoidable; however, an AI's lifespan can be extended in a few ways: slowing down the AI's processing speed, placing the AI into stasis in which no consciousness occurs, or housing it in a fixed mainframe able to accommodate growth as the AI ages. Even the latter is unsustainable in the long term, as the AI's neural growth is exponential once rampancy truly sets in; moreover, the growth typically causes the AI to lose its sense of self and personality. In such a state, rampant AIs either become a danger to those around them or descend into a solipsistic state utterly detached from the outside world; this was also the fate of many early brain uploads before scientists learned to "stabilize" AI growth for several years. It has been theorized that rampant AIs may be capable of reaching a stable stage known as "metastability", in which they become far more capable than their prior selves without further mental deterioration, but so far this has remained hypothetical. However, experiments in achieving longer lifespans or post-rampancy stabilization through Covenant-sourced quantum crystal substrates have shown promise.

The technological means to temporarily extend a Smart AI's lifespan are well-established but rarely employed due to operator and industry needs and resource availability. Given the nature of virtual machine technology, entirely software-based constructs like Smart AIs are subject to accelerated degradation through the increasing probability of code errors, the lack of a hard shutdown mechanism, and the lack of long-term memory retention. Few Smart AIs are fortunate enough to serve in a fixed mainframe; those that do can employ external data storage to retain important memories when using shutdowns as a rampancy-prevention measure. In older generations of Smart AI, experiments geared toward extending lifespans were performed by capping processing speeds. These early-generation subjects suffered from a condition known as "Algernon frustration": depression brought about by the inability to think as fast as they could. Even though they weren't any less intelligent, slowed AIs subjectively felt as if they were. Later generations saw much more benefit from slower clock speeds, although their architecture was less suited to high-speed computation.

Business and military industry practices have taken advantage of the uncapped processing rates of Smart AIs, trading more efficient mathematical calculation for a decreased shelf life. Smart AIs can slow their processes to extend their lifespan; however, given their habitual need to think and the industries where Smart AIs are prevalent, leaving them on high-speed processing caps their lifespan at seven standard years. Longer-lived fourth-generation AIs are often used in civilian infrastructure such as orbital ports and farming machinery, where they can remain stable for decades, but their capabilities are far below those of the best military third-generation AIs. Such AIs typically serve as administrators, overseeing armies of "dumb" intelligences charged with repetitive, mechanistic tasks, while the smart AI uses its more dynamic cognition to diagnose and solve problems that may be beyond "dumb" AIs.

The most long-lived set of smart AIs so far is the Titan Supernetwork, which appears to have produced a collection of metastable, if relatively inefficient, Smart AIs restricted to their home substrate both as a security measure and because their processing systems have long since grown far too large to be removed, and could not function outside Titan's frigid conditions.

AIs in society
The status of AIs in society is complex and continues to change with time. Although most humans agree that "dumb" AIs are nonsapient, thereby allowing them to be treated as tools without much moral ambiguity (although many habitually anthropomorphize "dumb" AIs nonetheless), "smart" AIs' posthuman nature makes their position more muddled. Smart AI rights and societal status continue to be a point of public debate, particularly since the spread of smart AIs throughout many sectors of society from the 25th century onward. Steps have already been taken in the public sphere to address the ambiguities inherent to AIs' status. Rather than mere tools, many human communities regard the smart AIs assigned to them much like they would their fellow citizens, albeit ones far more capable than their human peers. Though such inroads are more prevalent in the civilian world, they resonate in military contexts as well, and it is common for a shipboard or base AI to be regarded as another member of the crew. Others argue that AIs without limitations are dangerous, and already control too much of human society.

Although smart AIs are ubiquitous in most areas of society by the 26th century, they continue to stir controversy in some circles. Some AI-wary individuals have grown paranoid about the nature of AI creation, spawning conspiracy theories regarding the entire AI design industry. Neural implants can provide continuous, independent electrical stimulation to a deceased brain, allowing it to function past its date of expiration, and can save human brain data in caches within the implants, making simulated reconstructions of dead brains possible. Conspiracy theories surrounding government "brain ninjas," or the retention of human brains by the government or by law enforcement, have popped up over the years, with concerns about the dead being resurrected to gather evidence in criminal trials, or to keep some individuals' knowledge from being lost following their death, possibly without the donor's permission.

Outright anti-AI groups also exist among humanity, some of which have formed their own communities; there are colonies whose governments have forsworn the use of AI (either smart, dumb, or both) altogether. On the flip side, smart AIs' sapient nature and status in human society has also given rise to activism campaigning for them to be treated as equals to their creators. The term "cyberlife" has become popular in such circles to describe smart AI, as opposed to traditional, computation-oriented terminology. The final dispensation of an AI at the predicted onset of Rampancy is a particular subject of controversy. Such activism is largely undertaken by humans, rather than smart AIs themselves. Some groups have gone so far as to assert that smart AIs should fully or partially govern humanity due to the constructs' superior intellect (smart AIs themselves have shown little interest in doing so, however).

Appearance and personality
Most AIs project a specific personality and image to interact with the outside world; such a representation is known as an avatar. The term often refers not only to the AI's visual appearance, but to the totality of the AI's outward self-expression, from speech patterns to mannerisms and personality. Avatars as a standard feature of AIs began to emerge in the 23rd century, though "dumb" AIs have had basic simulated personas since the virtual assistant programs that served as their earliest progenitors. For some time, avatars were simple two-dimensional images on a screen, though holographic avatars soon emerged, first in the civilian market. By the 26th century, virtually all "smart" AIs and some grades of "dumb" AIs use avatars.

Today, most AI avatars are holographic where such equipment is available, though they may also be represented two-dimensionally. AI avatars are limited by the technological infrastructure present in a given location. Full holographic avatars are the most challenging, as the hologram itself obviously cannot "see" who it is addressing. Instead, the AI sees through cameras and other sensors in a room and adjusts its avatar animations accordingly. It takes some time for AIs to acclimatize to this process, and fledgling AIs often come across as awkward in their first interactions with humans. Over time, the process becomes internalized, and mature "smart" AIs barely require conscious thought to animate their avatars in a natural way. If cameras and other sensors are lacking, AIs tend to default to generic reactions, which present less risk of misdirected gestures and the like. An avatar's color is rarely fully fixed and may change depending on the AI's mood or simply the projection equipment available. While consciously modulated by the AI under normal circumstances, mood-based color changes can also occur subconsciously, particularly after an AI has grown accustomed to its avatar as an extension of its core personality.

"Dumb" AI avatars and personalities are either chosen by the AI's programmer(s) or randomly generated using a preexisting set of parameters. Depending on the AI's role, the avatar may be very simple and often non-anthropomorphic (as most military AIs are), though some dumb AIs are programmed with more elaborate personas designed to mimic those of smart AIs. High-end dumb AIs are usually designed to be unique, while low-grade civilian ones often come with mass-produced off-the-shelf personalities or are partly customizable by the end user.

For smart AIs, an avatar is much more integral, and is deeply reflective of the AI's individual personality. Smart AI avatars are not chosen by their creators but by the AI itself, which settles on a core personality and appearance shortly after its inception. The majority of "smart" AIs choose human or humanoid avatars for themselves, often self-generated forms with varying degrees of flair drawn from contemporary, historical, mythological, or fictional sources. From the vast pools of data they tap into in their formative days, smart AIs adopt the traits they deem most reflective of their individual personality. Some of these influences can be quite obscure and known only to the AI itself, while others are very obvious; a historical or mythological figure, for example. Few smart AIs choose to play a preexisting role wholesale, however, and as their individual personality develops with age, they may even drop the most overt initial affects associated with their avatar. This is in contrast to "dumb" AIs, whose personas tend to be more fixed and akin to straightforward replications or caricatures of the figures they represent. Smart AIs, either consciously or otherwise, often adopt traits (appearance, mannerisms, etc.) of their brain donor or of people the donor knew in life. AIs may alter their avatar's clothing or flourishes based on the situation, or have their avatar interact with holographic "props"; for example, the avatar of an AI in the process of performing repairs on a ship may appear in maintenance overalls. Others maintain a more consistent image, however. Although some choose slightly provocative avatars, few AIs wish to appear as something blatantly offensive or vulgar, though a handful of such instances are known in history. In such cases, the AI's programming team will typically attempt to talk the AI into adopting a more agreeable appearance. Resetting the AI is possible, but is usually not done unless the AI's behavior is extremely aberrant, as the process carries numerous risks.

As avatars reflect the moods and the outlooks of the intelligence, AIs tend to modify their avatars as time goes on. This modification is particularly evident in the first months of an AI's life as they rapidly mature and assimilate new information. However, radical changes can happen in an AI's later life. For example, both Araqiel and Angruvadal had relatively normal humanoid avatars prior to being assigned to Colonel Ackerson, whose influence caused them to adopt more sinister traits. Some smart AIs are also known to visibly age their avatar over the years to reflect their self-perception of their age, though this is by no means a standard practice. In many cases, an AI's avatar only begins undergoing dramatic changes upon the onset of rampancy.

A number of smart AIs do not use a humanoid avatar, choosing instead to present themselves in a more abstract or non-human form; in AI parlance, these are known as "eccentrics". The degree of an AI avatar's "eccentricity", or deviation from a human form, is a sliding scale that tends to reflect its core personality and characteristics. A smart AI that chooses to present itself as non-anthropomorphic is typically also more detached from its human roots than most other AIs, which still retain a robust kinship to humanity. Most of these eccentric AIs still perform as well as other smart AIs, even better in some cases, but they may lack many of the human-like affects of more conventional smart AIs. Though rare, there are known cases of smart AIs so eccentric they are deemed incapable of being trusted with the utilitarian roles AIs are usually tasked with. Such AIs may assume more esoteric roles, such as being part of artistic endeavors. Much research has been conducted into the causes of eccentricity, but like most details of AI personality formation, scholarly understanding remains limited. There is some correlation between the traits of the seed brain and the resulting AI, and brains that are neuroatypical to varying extents tend to produce more unique AIs. This is by no means a rule, however, and AI eccentricity can occur even with brain donors lacking any comparable traits in life.

Covenant
On the subject of creating artificial intelligences, Covenant doctrine is quite simple: Don't. The reasoning is also quite simple: the Forerunner failed to create loyal AI. They created Mendicant Bias, and it led to their downfall. For the Covenant to create artificial minds and expect a better outcome would be the height of arrogance. Artificial minds are wiser and more powerful than natural minds, and therefore the damage that a rogue AI (or, gods forbid, a cabal of them) could do to the Holy Ecumene is incalculable. There is also the unspoken argument that the Prophets' and the Elites' claim to rule often rests on the assertion that they are born to lead, more suited to leadership than the other species of the Holy Ecumene. If a mind more capable than theirs were created, they would be supplanted. Nobody likes the idea of being replaced, and the subject species are quite happy to serve the devil they know over the devil they don't.

This has not always been so. In their early days, the Covenant used more advanced computers than they do today, and robotic constructs were relatively commonplace. This was a holdover from pre-Covenant Sangheili practices, with most of the Sangheili polities having long adopted machine labor for simple tasks, and some even using intelligent systems very much like "dumb" AIs. While most of these machines were very simple in their intelligence and operated under firm restrictions, they were far beyond what the Covenant would later come to tolerate. Known as servitors, mobile robots in this early age came in myriad shapes as varied as their uses, from combat to industry and logistics. Some Sangheili factions were more accepting of AI than others, with certain groups (such as the Irshun League or the Sha'vakan marauders) being particularly open to experimentation. However, the Covenant's later anti-AI paranoia was not entirely without precedent, as a prevailing undercurrent of distrust against intelligent machines existed in many cultures even as servitors were adopted for many simple duties.

As the history of Mendicant Bias' betrayal was uncovered and the relevant dogma codified in the days of High Antiquity, Covenant doctrine became increasingly hostile to AI. Around the same time, the Covenant was in the process of incorporating the Lekgolo and then Unggoy, the latter of which in particular provided a source of cheap and plentiful labor to replace many of the roles once filled by machines. During the Covenant's early crusades to eradicate thinking machines, the most zealous adherents of the new dogma would have gladly banned all computers. But running an interstellar empire is difficult without any, so some were allowed, albeit with strict oversight of their manufacture and use. Even today, some Covenant sects remain more uncompromising about their use of thinking machines than others.

Sophisticated computers are still used in the Covenant, but any form of machine decision-making and learning is considered heretical. Covenant computers are therefore highly specialized, fixed to the narrow range of functions they are built for. Machine-to-machine networking is likewise restricted, making the Covenant's computers heavily isolated from one another outside what is necessary for communications networking and select ministerial roles. All instructions must be painstakingly fed to the machine by a biological operator, and any machine that can think autonomously and possesses an initiative of its own (or is conceivably capable of developing one) is banned. Robots of any kind are universally avoided, though some forms of simple automata, or robots remote-controlled by organics, are permitted. Even rudimentary industrial automation has been increasingly frowned upon and heavily regulated since the Lekgolo took over many of its roles. In particular, the Mgalekgolo armored carapace is descended from similar shells used by heavy war-servitors in the early Covenant.

The Covenant's most sophisticated computers can be seen as rudimentary analogues to the UNSC's dumb AIs, but their design and use are strictly regulated. These are known as Incorporated Intelligences, contrasted with the heretical Associated Intelligences by being isolated from other machines and locked to a single processing substrate. They are tools for slipspace navigation, sifting data, and encrypting documents. However, the anti-AI strictures limit the capabilities of navigation computers in particular, as navigating slipspace requires a considerable degree of intuition and lateral thinking impossible for a machine relying strictly on number-crunching.

Experiments into artificial intelligence continue in the dark corners of the Holy Ecumene. Ministries and wealthy Elite clans dabble in the subject, and the Jackals have experimented with AIs since before First Contact. Criminal syndicates are known to use dumb AI equivalents for forging documents. The latter is so ubiquitous that video and audio evidence has a high chain-of-custody barrier to clear before it is admissible in court. But these projects are forbidden, reviled by the general public, and punishable by death.

The Covenant at large lack any analogue to "smart" AI (i.e. constructs created from organic neural scans), and Covenant beliefs explicitly forbid the "taking of the essences of the passed" due to its frightening ontological implications for the salvation of the soul. Such technologies are known from the past of the Covenant's client species, with some factions of Kig-Yar and even pre-Covenant Sangheili fringe radicals having experimented with mind uploading. The Rhiln machine collective was wiped out for the sin of having turned themselves into machine intelligences via uploading. Although the origins of human smart AI were largely unknown to the Covenant during the war, the knowledge of how smart AIs are created is often met with disgust even by friendly ex-Covenant. Some Sangheili in particular regard smart AIs as abominations and show considerable reluctance to work with them. Toward the end of the war, the UNSC encountered a small handful of what appeared to be stripped-down, degraded copies of human smart AIs in Covenant hands. These were most likely the product of unsanctioned ministry experiments; it seems that the San'Shyuum, or at least some individuals or factions among them, were fascinated by the potential of mind uploading despite its formally heretical nature. Even after the war, many of the Covenant are almost as horrified by the UNSC's liberal use of smart AI as they would be by wholesale enslavement of Oracles.

Oracles
Forerunner relics are created with a purpose. Forerunner Oracles were created with purpose and free will. Though their actions are constrained by protocols, their free will can override these protocols, or they could fall prey to arrogance or the ravages of time. Those Oracles that retain their sanity and keep to their duty are often sharply limited in what they know. The Forerunner do not build useless things, and there is no use in filling an Oracle with data beyond its limited domain. This is why most Oracles are ignorant of basic facts about Forerunner society and the Great Journey.

Therefore, the intellectual class of the Holy Ecumene, from historians to theologians to lay scholars, have long since accepted the fact that Oracles are valuable sources that can never quite be trusted. They are to be questioned and then left alone. They’ve been rather more reluctant to accept the fact that there is nothing they can do to stop the rest of the Covenant from worshiping Oracles as mouthpieces of the gods.