WORKS CITED

Muehlhauser, Luke, and Anna Salamon. “Intelligence explosion: Evidence and import.” Singularity Hypotheses. Springer, Berlin, Heidelberg, 2012. 15-42.

The article “Intelligence explosion: Evidence and import” reviews the evidence for and against three claims about artificial intelligence: first, that there is a substantial chance human-level artificial intelligence will be developed before 2100; second, that if human-level artificial intelligence is developed, there is a significant chance that vastly superhuman AI systems will follow through an “intelligence explosion”; and third, that an uncontrolled intelligence explosion could erode human values, whereas a controlled intelligence explosion could benefit humankind immensely if it is achieved. The article’s implications are significant in outlining why increasing the odds of a controlled intelligence explosion, relative to an uncontrolled one, is essential.

Muehlhauser, Luke, and Louie Helm. “The singularity and machine ethics.” Singularity Hypotheses. Springer, Berlin, Heidelberg, 2012. 101-126.

The article articulates how a self-improving AI (artificial intelligence) may come to so outstrip human capability that it would be impossible to prevent it from accomplishing its objectives. In that case, if the AI’s goals differ from humanity’s, the outcome could be catastrophic for humankind. The article also explores a proposed solution: engineering the AI’s goal system to serve human values before the AI self-improves beyond mankind’s capability to control it. Drawing on insights from moral philosophy, it outlines how human values and wants are complex and difficult to specify. The assessment the article provides is important because it identifies value and ethics theories from moral philosophy as a potential approach for creating a machine ethics suitable for regulating an AI explosion.

Bostrom, Nick, and Eliezer Yudkowsky. “The ethics of artificial intelligence.” The Cambridge Handbook of Artificial Intelligence, eds. William Ramsey and Keith Frankish. 2011.

The prospect of developing machines with the capability to think raises several ethical questions. These issues encompass both the safeguards that ensure AI-based machines do not cause harm to humankind and other morally relevant beings, and the safeguarding of the machines’ own moral relevance. The article discusses the questions that may arise in the near future in relation to artificial intelligence, and outlines the complex challenges facing attempts to ensure that AI functions safely as it develops toward human-level intelligence. The article is important in highlighting how to assess whether, and in which situations, an AI has moral status. In essence, it considers the differences between artificial and natural intelligence with respect to the ethical issues raised by such determinations.

Omohundro, Stephen M. “The basic AI drives.” AGI. Vol. 171. 2008.

One might assume that AI systems with harmless objectives will be harmless to humans. The article demonstrates, by contrast, that AI-based machines will have to be carefully designed to prevent them from acting in harmful ways. Identifying the tendencies that will likely appear in sufficiently advanced future AI systems of any design, it characterizes these as “drives”: inclinations that will be present in AI systems unless explicitly counteracted. The article is important because it illustrates how goal-driven systems might have drives to protect their own operation and to self-improve. It also highlights how self-improving systems will be driven to clarify their objectives and represent their goals as economic utility functions.

Soares, Nate, and Benya Fallenstein. “Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda.” The Technological Singularity. Springer, Berlin, Heidelberg, 2017. 103-125.

Intelligence represents the one facet of humankind that has given it a dominating lead over all other species in its environment. The article outlines how unchecked progress in artificial intelligence may eventually produce AI systems that exceed people in general reasoning capacity: superintelligent systems, smarter than humans in virtually all fields. According to the article, this could have a vast impact on humanity. Just as natural intelligence has enabled humans to develop strategies and tools for mastering their environment, superintelligent systems could develop tools and strategies for exercising control of their own. The article is significant because it outlines this potential of artificial intelligence, defining essential research approaches that should be considered in developing AI systems that surpass human-level intelligence, or systems that could enable the construction of such systems.

United States. Executive Office of the President. “Artificial intelligence, automation, and the economy.” (2016).

The report outlines how developments in AI systems and related fields have unlocked new market niches and vast prospects for improvement in essential areas such as the environment, social welfare, economic inclusion, energy, education, and health. It illustrates how recent years have demonstrated that machines can surpass humans in performing some tasks associated with intelligence, for instance aspects of image recognition. The report notes that the rapid progress of AI adoption will continue, reaching and perhaps exceeding human performance in more and more tasks. The report is important in demonstrating how artificial intelligence has been harnessed for economic growth.

Yudkowsky, Eliezer. “Artificial intelligence as a positive and negative factor in global risk.” Global catastrophic risks 1.303 (2008): 184.

The article “Artificial intelligence as a positive and negative factor in global risk” explores the misconceptions common in understandings of artificial intelligence, with respect to its negative and positive risk implications. The article outlines how such misconceptions stem not from the complexity of artificial intelligence, but from the fact that individuals believe they know more about AI than they actually do. The article is significant in that it offers an unbiased perspective for examining the positive and negative risk implications of artificial intelligence.