Artificial Intelligence
What is Artificial Intelligence?
Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.
HOW DOES ARTIFICIAL INTELLIGENCE WORK?
Less than a decade after breaking the Nazi encryption machine
Enigma and helping the Allied Forces win World War II, mathematician Alan
Turing changed history a second time with a simple question: "Can machines
think?"
Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.
At its core, AI is the branch of computer
science that aims to answer Turing's question in the affirmative. It is the
endeavor to replicate or simulate human intelligence in machines.
The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.
The major limitation in defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?
In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is "the study of agents that receive percepts from the environment and perform actions" (Russell and Norvig viii).
Norvig and Russell go on to explore four
different approaches that have historically defined the field of AI:
1. Thinking humanly
2. Thinking rationally
3. Acting humanly
4. Acting rationally
The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that "all the skills needed for the Turing Test also allow an agent to act rationally" (Russell and Norvig 4).
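A minimal sketch of that agent abstraction in Python may help (the class and names below are illustrative, in the spirit of the simple vacuum-world reflex agent Russell and Norvig use early in the book, not an API taken from it): the agent repeatedly receives a percept from its environment and maps it to an action.

    # Illustrative percept -> action loop for a simple reflex agent.
    class ReflexVacuumAgent:
        """Acts in a toy two-square world: clean if dirty, otherwise move on."""

        def act(self, percept):
            location, is_dirty = percept  # what the agent senses
            if is_dirty:
                return "Suck"
            return "Right" if location == "A" else "Left"

    agent = ReflexVacuumAgent()
    for percept in [("A", True), ("A", False), ("B", True), ("B", False)]:
        print(percept, "->", agent.act(percept))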
Patrick Winston, the Ford professor of
artificial intelligence and computer science at MIT, defines AI as
"algorithms enabled by constraints, exposed by representations that
support models targeted at loops that tie thinking, perception and action
together."
While these definitions may seem abstract
to the average person, they help focus the field as an area of computer science
and provide a blueprint for infusing machines and programs with machine
learning and other subsets of artificial intelligence.
While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin
began his speech by offering the following definition of how AI is used today:
"AI is a computer system able to
perform tasks that ordinarily require human intelligence... Many of these
artificial intelligence systems are powered by machine learning, some of them
are powered by deep learning and some of them are powered by very boring things
like rules."
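As a toy illustration of the "boring" rules-based end of that spectrum, here is a minimal hand-coded spam filter in Python (the keywords and function are invented for illustration, not taken from Achin's talk). Every behavior is explicitly programmed; nothing is learned from data.

    # A rules-based "AI": all behavior is hand-written, none of it is learned.
    SPAM_KEYWORDS = {"winner", "free money", "click here"}  # illustrative rules

    def is_spam(message: str) -> bool:
        # Flag the message if it contains any hand-picked keyword.
        text = message.lower()
        return any(keyword in text for keyword in SPAM_KEYWORDS)

    print(is_spam("Click here to claim your FREE MONEY!"))  # True
    print(is_spam("Lunch at noon?"))                        # False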
HOW IS AI USED?
Artificial intelligence generally falls under two broad categories:
· Narrow AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.
· Artificial General Intelligence (AGI): Sometimes referred to as "Strong AI," AGI is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.
ARTIFICIAL INTELLIGENCE EXAMPLES
· Smart assistants (like Siri and Alexa)
· Disease mapping and prediction tools
· Manufacturing and drone robots
· Optimized, personalized healthcare treatment recommendations
· Conversational bots for marketing and customer service
· Robo-advisors for stock trading
· Spam filters on email
· Social media monitoring tools for dangerous content or false news
· Song or TV show recommendations from Spotify and Netflix
Narrow Artificial Intelligence
Narrow AI is all around us and is easily
the most successful realization of artificial intelligence to date. With its
focus on performing specific tasks, Narrow AI has experienced numerous
breakthroughs in the last decade that have had "significant societal
benefits and have contributed to the economic vitality of the nation,"
according to "Preparing for the Future of Artificial Intelligence," a
2016 report released by the Obama Administration.
A few examples of Narrow AI include:
· Google search
· Image recognition software
· Siri, Alexa and other personal assistants
· Self-driving cars
· IBM's Watson
Machine Learning & Deep Learning
Much of Narrow AI is powered by
breakthroughs in machine learning and deep learning. Understanding the difference between artificial
intelligence, machine learning and deep learning can be confusing. Venture
capitalist Frank Chen provides a good overview of how to distinguish between
them, noting:
"Artificial
intelligence is a set of algorithms and intelligence to try to mimic human
intelligence. Machine learning is one of them, and deep learning is one of
those machine learning techniques."
Simply put, machine learning feeds a
computer data and uses statistical techniques to help it "learn" how
to get progressively better at a task, without having been specifically
programmed for that task, eliminating the need for millions of lines of written
code. Machine learning consists of both supervised learning (using
labeled data sets) and unsupervised learning (using unlabeled data
sets).
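To make that concrete, here is a minimal supervised-learning sketch in Python using scikit-learn (a library chosen for illustration; the article doesn't prescribe one). The model is never given rules for telling flower species apart; it infers them statistically from a labeled data set.

    # Minimal supervised learning: the model "learns" from labeled examples.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # A labeled data set: flower measurements (inputs) paired with species (labels).
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Learning": fit a statistical model to the labeled training examples.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # The model now classifies flowers it has never seen, with no hand-written rules.
    print("Accuracy on unseen data:", model.score(X_test, y_test))

An unsupervised method would instead be handed the measurements without the species labels and left to find clusters in the data on its own.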
Deep learning is a type of machine learning
that runs inputs through a biologically-inspired neural network architecture.
The neural networks contain a number of hidden layers through which the data is
processed, allowing the machine to go "deep" in its learning, making connections
and weighting input for the best results.
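As a rough sketch of what those hidden layers do, here is a toy forward pass in Python with NumPy (the layer sizes and random weights are illustrative assumptions; a real network would learn its weights from data). Each hidden layer re-weights and transforms the output of the layer before it.

    # Toy forward pass through a "deep" network: input -> hidden -> hidden -> output.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # Simple nonlinearity applied between layers.
        return np.maximum(0.0, x)

    # Randomly initialized weights; real networks learn these during training.
    W1 = rng.normal(size=(4, 8))  # input (4 features) -> hidden layer 1
    W2 = rng.normal(size=(8, 8))  # hidden layer 1 -> hidden layer 2
    W3 = rng.normal(size=(8, 3))  # hidden layer 2 -> 3 output scores

    def forward(x):
        h1 = relu(x @ W1)   # first hidden layer re-weights the input
        h2 = relu(h1 @ W2)  # a second, "deeper" layer builds on the first
        return h2 @ W3      # final weighted combination produces the output

    x = rng.normal(size=(1, 4))  # one example with 4 input features
    print(forward(x))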
Artificial General Intelligence
The creation of a machine with human-level
intelligence that can be applied to any task is the Holy Grail for many AI
researchers, but the quest for AGI has been fraught with difficulty.
The search for a "universal algorithm for learning and acting in any environment" (Russell and Norvig 27) isn't new, but time hasn't eased the difficulty of essentially creating a machine with a full set of cognitive abilities.
AGI has long been the muse of
dystopian science fiction, in which super-intelligent robots overrun
humanity, but experts agree it's not something we need to worry
about anytime soon.
HISTORY OF AI
Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle's development of the syllogism and its use of deductive reasoning was a key moment in mankind's quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.
1943
· Warren McCullough and Walter Pitts publish "A Logical Calculus of Ideas Immanent in Nervous Activity." The paper proposes the first mathematical model for building a neural network.
1949
· In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.
1950
· Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.
· Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
· Claude Shannon publishes the paper "Programming a Computer for Playing Chess."
· Isaac Asimov publishes the "Three Laws of Robotics."
1952
· Arthur Samuel develops a self-learning program to play checkers.
1954
· The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
1956
· The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today.
· Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.
1958
· John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.
1959
· Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
· Herbert Gelernter develops the Geometry Theorem Prover program.
· Arthur Samuel coins the term machine learning while at IBM.
· John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
1963
· John McCarthy starts the AI Lab at Stanford.
1966
· The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
1969
· The first successful expert systems, DENDRAL, a program that identified the structure of chemical compounds, and MYCIN, designed to diagnose blood infections, are developed at Stanford.
1972
· The logic programming language PROLOG is created.
1973
· The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.
1974-1980
· Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."
1980
· Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."
1982
· Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems (FGCS) project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
1983
· In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
1985
· Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
1987-1993
· As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.
· Japan terminates the FGCS project in 1992, citing failure to meet the ambitious goals outlined a decade earlier.
· DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.
1991
· U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
1997
· IBM's Deep Blue beats world chess champion Garry Kasparov.
2005
· STANLEY, a self-driving car, wins the DARPA Grand Challenge.
· The U.S. military begins investing in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot."
2008
· Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.
2011
· IBM's Watson trounces the competition on Jeopardy!
2012
· Andrew Ng, founder of the Google Brain Deep Learning project, feeds a deep learning neural network 10 million YouTube videos as a training set. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
2014
· Google makes the first self-driving car to pass a state driving test.
2016
· Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.