Listing 1 - 10 of 51 | << page >> |
Nonmonotonic reasoning provides formal methods that enable intelligent systems to operate adequately when faced with incomplete or changing information. In particular, it provides rigorous mechanisms for retracting conclusions that, in the presence of new information, turn out to be wrong and for deriving new, alternative conclusions instead. Nonmonotonic reasoning methods provide rigor similar to that of classical reasoning; they form a base for validation and verification and therefore increase confidence in intelligent systems that work with incomplete and changing information. Following a brief introduction to the concepts of predicate logic that are needed in the subsequent chapters, this book presents an in-depth treatment of default logic. Other subjects covered include the major approaches of autoepistemic logic and circumscription, belief revision and its relationship to nonmonotonic inference, and, briefly, the stable and well-founded semantics of logic programs.
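The retraction mechanism the blurb describes can be illustrated with a minimal sketch (not taken from the book) of the classic "birds fly" default: a conclusion drawn by default is withdrawn once new information defeats its justification.

```python
# Minimal sketch of nonmonotonic retraction: the default "birds fly
# unless known to be abnormal" fires only while its justification
# (no known abnormality) remains consistent with the facts.
def conclusions(facts):
    derived = set(facts)
    for x in {subj for (pred, subj) in facts if pred == "bird"}:
        if ("abnormal", x) not in facts:
            derived.add(("flies", x))
    return derived

kb = {("bird", "tweety")}
assert ("flies", "tweety") in conclusions(kb)      # default conclusion

kb.add(("abnormal", "tweety"))                     # new information arrives
assert ("flies", "tweety") not in conclusions(kb)  # conclusion is retracted
```

Adding a fact shrinks the set of conclusions, which is exactly the failure of monotonicity that default logic formalizes.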
Nonmonotonic reasoning. --- Non-monotonic reasoning --- Reasoning --- COMPUTER SCIENCE/Artificial Intelligence
The broad range of material included in these volumes suggests to the newcomer the nature of the field of artificial intelligence, while those with some background in AI will appreciate the detailed coverage of the work being done at MIT. The results presented are related to the underlying methodology. Each chapter is introduced by a short note outlining the scope of the problem being taken up or placing it in its historical context. Contents, Volume I Expert Problem Solving: Qualitative and Quantitative Reasoning in Classical Mechanics - Problem Solving About Electrical Circuits - Explicit Control of Reasoning - A Glimpse of Truth Maintenance - Design of a Programmer's Apprentice - Natural Language Understanding and Intelligent Computer Coaches: A Theory of Syntactic Recognition for Natural Language - Disambiguating References and Interpreting Sentence Purpose in Discourse - Using Frames in Scheduling - Developing Support Systems for Information Analysis - Planning and Debugging in Elementary Programming - Representation and Learning: Learning by Creating and Justifying Transfer Frames - Descriptions and the Specialization of Concept - The Society Theory of Thinking - Representing and Using Real-World Knowledge.
This collection of essays by 12 members of the MIT staff provides an inside report on the scope and expectations of current research in one of the world's major AI centers. The chapters on artificial intelligence, expert systems, vision, robotics, and natural language provide both a broad overview of current areas of activity and an assessment of the field at a time of great public interest and rapid technological progress. Contents: Artificial Intelligence (Patrick H. Winston and Karen Prendergast). Knowledge-Based Systems (Randall Davis). Expert-System Tools and Techniques (Peter Szolovits). Medical Diagnosis: Evolution of Systems Building Expertise (Ramesh S. Patil). Artificial Intelligence and Software Engineering (Charles Rich and Richard C. Waters). Intelligent Natural Language Processing (Robert C. Berwick). Automatic Speech Recognition and Understanding (Victor W. Zue). Robot Programming and Artificial Intelligence (Tomas Lozano-Perez). Robot Hands and Tactile Sensing (John M. Hollerbach). Intelligent Vision (Michael Brady). Making Robots See (W. Eric L. Grimson). Autonomous Mobile Robots (Rodney A. Brooks). W. Eric L. Grimson, author of From Images to Surfaces: A Computational Study of the Human Early Vision System (MIT Press 1981), and Ramesh S. Patil are both Assistant Professors in the Department of Electrical Engineering and Computer Science at MIT. AI in the 1980s and Beyond is included in the Artificial Intelligence Series, edited by Patrick H. Winston and Michael Brady.
"The Turing Test is part of the vocabulary of popular culture - it has appeared in works ranging from the Broadway play Breaking the Code to the comic strip "Robotman." The writings collected for this book examine the profound philosophical issues surrounding the Turing Test as a criterion for intelligence. Alan Turing's idea, originally expressed in a 1950 paper titled "Computing Machinery and Intelligence" and published in the journal Mind, proposed an "indistinguishability test" that compared artifact and person. Following Descartes' dictum that it is the ability to speak that distinguishes human from beast, Turing suggested testing whether machine and person were indistinguishable in regard to verbal ability. He was not, as is often assumed, answering the question "Can machines think?" but offering a more concrete way to ask it. Turing's thought experiment encapsulates the issues that the writings in The Turing Test define and discuss."--Jacket.
Turing test --- Artificial intelligence --- Machine theory --- CAPTCHA (Challenge-response test) --- Turing test. --- COMPUTER SCIENCE/Artificial Intelligence
In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner's goals. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning. This book brings together a diversity of research on goal-driven learning to establish a broad, interdisciplinary framework that describes the goal-driven learning process. It collects and solidifies existing results on this important issue in machine and human learning and presents a theoretical framework for future investigations. The book opens with an overview of goal-driven learning research and computational and cognitive models of the goal-driven learning process. This introduction is followed by a collection of fourteen recent research articles addressing fundamental issues of the field, including psychological and functional arguments for modeling learning as a deliberative, planful process; experimental evaluation of the benefits of utility-based analysis to guide decisions about what to learn; case studies of computational models in which learning is driven by reasoning about learning goals; psychological evidence for human goal-driven learning; and the ramifications of goal-driven learning in educational contexts. The second part of the book presents six position papers reflecting ongoing research and current issues in goal-driven learning. Issues discussed include methods for pursuing psychological studies of goal-driven learning, frameworks for the design of active and multistrategy learning systems, and methods for selecting and balancing the goals that drive learning. A Bradford Book.
"This collection of current research on logic programming languages presents results from a three-year, ESPRIT-funded effort to explore the integration of the foundational issues of functional, logic, and object-oriented programming. It offers valuable insights into the fast-developing extensions of logic programming with functions, constraints, concurrency, and objects. Chapters are grouped according to the unifying themes of functional programming, constraint logic programming, and object-oriented programming."
"Visual Reconstruction presents a unified and highly original approach to the treatment of continuity in vision. It introduces, analyzes, and illustrates two new concepts. The first -- the weak continuity constraint -- is a concise, computational formalization of piecewise continuity. It is a mechanism for expressing the expectation that visual quantities such as intensity, surface color, and surface depth vary continuously almost everywhere, but with occasional abrupt changes. The second concept -- the graduated nonconvexity algorithm -- arises naturally from the first. It is an efficient, deterministic (nonrandom) algorithm for fitting piecewise continuous functions to visual data. The book first illustrates the breadth of application of reconstruction processes in vision with results that the authors' theory and program yield for a variety of problems. The mathematics of weak continuity and the graduated nonconvexity (GNC) algorithm are then developed carefully and progressively."
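The weak continuity constraint described above can be made concrete with a small sketch (an illustration in the spirit of the book's 1-D "weak string", not the authors' implementation): the smoothness penalty is a truncated quadratic, so paying a fixed price alpha for a break is cheaper than smoothing away a genuine edge.

```python
# Hypothetical 1-D weak-string energy: a data-fidelity term plus a
# truncated quadratic smoothness term.  The truncation constant alpha
# is the fixed cost of admitting a discontinuity between neighbors.
def weak_string_energy(u, d, lam=1.0, alpha=4.0):
    data = sum((ui - di) ** 2 for ui, di in zip(u, d))
    smooth = sum(min(lam * (u[i + 1] - u[i]) ** 2, alpha)
                 for i in range(len(u) - 1))
    return data + smooth

# A noiseless step edge: reproducing it exactly costs one break (alpha),
# which beats flattening the signal and paying a large data-fit cost.
d = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]
exact = weak_string_energy(d, d)          # energy 0 + one alpha = 4.0
flat = weak_string_energy([2.5] * 6, d)   # smooth but far from the data
assert exact < flat
```

The GNC algorithm minimizes such nonconvex energies by starting from a convex approximation of the truncated penalty and gradually restoring the true cost function.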
Pattern perception --- #TELE:MI2 --- Design perception --- Pattern recognition --- Form perception --- Perception --- Figure-ground perception --- Pattern perception. --- Engineering & Applied Sciences --- Computer Science --- NEUROSCIENCE/Visual Neuroscience --- COMPUTER SCIENCE/Artificial Intelligence
The effort to explain the imitative abilities of humans and other animals draws on fields as diverse as animal behavior, artificial intelligence, computer science, comparative psychology, neuroscience, primatology, and linguistics. This volume represents a first step toward integrating research from those studying imitation in humans and other animals, and those studying imitation through the construction of computer software and robots. Imitation is of particular importance in enabling robotic or software agents to share skills without the intervention of a programmer and in the more general context of interaction and collaboration between software agents and humans. Imitation provides a way for the agent -- whether biological or artificial -- to establish a "social relationship" and learn about the demonstrator's actions, in order to include them in its own behavioral repertoire. Building robots and software agents that can imitate other artificial or human agents in an appropriate way involves complex problems of perception, experience, context, and action, solved in nature in various ways by animals that imitate.
Imitation --- Learning in animals --- Machine learning --- Psychology --- Social Sciences --- Animal learning --- Mimicry --- Animal intelligence --- Influence (Psychology) --- Social influence --- COMPUTER SCIENCE/Artificial Intelligence
Machine learning. --- Computer Science --- Engineering & Applied Sciences --- Machine learning --- #PBIB:2001.2 --- Learning, Machine --- Artificial intelligence --- Machine theory --- E-books --- COMPUTER SCIENCE/Artificial Intelligence
"Consider for a moment the layers of structure and meaning that are attached to concepts like lawsuit, birthday party, fire, mother, walrus, cabbage, or king.... If I tell you that a house burned down, and that the fire started at a child's birthday party, you will think immediately of the candles on the cake and perhaps of the many paper decorations. You will not, in all probability, find yourself thinking about playing pin-the-tail-on-the-donkey or about the color of the cake's icing or about the fact that birthdays come once a year. These concepts are there when you need them, but they do not seem to slow down the search for a link between fires and birthday parties." The human mind can do many remarkable things. One of the most remarkable is its ability to store an enormous quantity and variety of knowledge and to locate and retrieve whatever part of it is relevant in a particular context quickly and in most cases almost without effort. "If we are ever to create an artificial intelligence with human-like abilities," Fahlman writes, "we will have to endow it with a comparable knowledge-handling facility; current knowledge-base systems fall far short of this goal. This report describes an approach to the problem of representing and using real-world knowledge in a computer." The system developed by Fahlman and presented in this book consists of two more-or-less independent parts. The first is the system's parallel network memory scheme: "Knowledge is stored as a pattern of interconnections of very simple parallel processing elements: node units that can store a dozen or so distinct marker-bits, and link units that can propagate those markers from node to node, in parallel through the network. Using these marker-bit movements, the parallel network system can perform searches and many common deductions very quickly." The second (and more traditional) part of the knowledge-base system presented here is NETL, "a vocabulary of conventions and processing algorithms--in some sense, a language--for representing various kinds of knowledge as nodes and links in the network.... NETL incorporates a number of representational techniques--new ideas and new combinations of old ideas--which allow it to represent certain real-world concepts more precisely and more efficiently than earlier systems.... NETL has been designed to operate efficiently on the parallel network machine described above, and to exploit this machine's special abilities. Most of the ideas in NETL are applicable to knowledge-base systems on serial machines as well."
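The marker-passing scheme described above can be sketched serially: nodes carry marker bits, links spread a marker to neighbors, and intersecting two marker waves surfaces the concept connecting two cues (the fire / birthday-party example from the excerpt). The graph below is an invented toy network, not NETL's actual representation.

```python
from collections import deque

# Toy concept network: directed links from a concept to things it evokes.
links = {
    "birthday-party": ["candle", "cake", "decoration"],
    "candle": ["flame"],
    "flame": ["fire"],
    "cake": ["icing"],
}
# Reverse links so a marker can also spread "upstream" toward causes.
rev = {}
for src, dsts in links.items():
    for dst in dsts:
        rev.setdefault(dst, []).append(src)

def propagate(graph, start, hops):
    """Flood one marker outward through the graph for a bounded number of hops."""
    marked, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nbr in graph.get(node, []):
            if nbr not in marked:
                marked.add(nbr)
                frontier.append((nbr, depth + 1))
    return marked

# Intersection search: concepts marked from both cues link fire to the party.
common = propagate(links, "birthday-party", 2) & propagate(rev, "fire", 2)
assert "candle" in common and "flame" in common
```

In NETL the propagation steps run in parallel across all link units at once; the serial loop here only simulates that behavior, which is why Fahlman notes the ideas also transfer to serial machines.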