Course Title: Artificial Intelligence
Course Code: CMP 405
Introduction
This is a three-credit unit course offered to 400-level students of the degree programmes in Cybersecurity, Information Technology, Software Engineering, and Computer Science. Artificial Intelligence (AI) has become vital to automation, equipping machines to perform tasks that require intelligent behaviour. AI is now applied across many disciplines and has become a field to be reckoned with.
The overall aim of CMP 405 is to introduce Artificial Intelligence and its various subfields. In this course guide you will find useful details about the course: its aims and objectives, what the course is about, the course materials to be used, the services available in support of the course, and details on assignments and examinations. You should check regularly for updates. I wish you all the best in learning and completing this course.
Course Aim
This course aims to introduce students to Artificial Intelligence, in the hope that the knowledge will help them solve real-world problems.
Course Outline
- Introduction to artificial intelligence
- Understanding natural languages
- Knowledge representation
- Expert systems
- Pattern recognition
- The language LISP
Course Objectives
After completing the course successfully, the student should be able to:
- Explain the fundamentals of artificial intelligence
- Understand natural languages
- Describe the tasks and problems with natural languages
- Discuss the various approaches to evaluating natural languages
- Describe the history of knowledge representation and reasoning
- List some Characteristics of knowledge representation
- List terminology and perspectives of knowledge representation
- Explain an Expert System
- Distinguish between expert systems and traditional problem-solving programs
- Explain the term 'Knowledge Base'
- Explain the term Robotics
- Describe the history of Robotics
- Understand Pattern Recognition
- Understand LISP Programming Language
- Discuss the future direction of AI
- Conduct research and write a paper on a topic related to the course.
Assessment
- Class Attendance, Participation and Discussion: 5 marks
- Course Research Paper: 20 marks
- Mid-Semester: 5 marks
- Final: 70 marks
- Instructor: Kayode Oladapo
- Education: Ph.D. in Computer Science
- Email: oladapoka@mcu.edu.ng
Main Course
- Week 1: Introductory Class
- Week 2: Introduction to artificial intelligence
- Week 3–4: Understanding natural languages
- Week 5: Understanding natural languages
- Week 6–7: Knowledge representation
- Week 8: Course Research Paper / Mid-Semester Test
- Week 9–10: Expert Systems
- Week 11–12: Pattern Recognition
- Week 13–14: The Language LISP
- Week 15–16: The Language LISP
- Week 17: Revision
- Week 18–19: Examination
Week 1: Introductory Class
- Course Link and Introduction
- Course Outline
- Discussion
- Reading Materials
Week 2: Introduction to artificial intelligence
Artificial Intelligence is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion. This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way. A more or less flexible or efficient approach can be taken depending on the requirements established, which influences how artificial the intelligent behaviour appears.
Different researchers have different scopes and views of artificial intelligence:
- AI is about designing systems that are as intelligent as humans.
- The idea of the Turing Test
- Logic and Laws of thoughts
- Study of Rational Agents
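The "rational agent" view above can be illustrated with a minimal sketch. The names and the toy vacuum-world policy here are hypothetical, invented for illustration rather than taken from the course material: an agent repeatedly maps what it perceives to the action expected to best serve its goal.

```python
# A minimal simple-reflex agent: each percept is mapped directly to an action.
def agent_loop(percepts, policy, default="wait"):
    """Choose an action for each percept using a lookup-table policy."""
    return [policy.get(p, default) for p in percepts]

# Toy vacuum-world policy: clean a dirty square, otherwise move on.
policy = {"dirty": "clean", "clean": "move"}
actions = agent_loop(["dirty", "clean", "dirty"], policy)
print(actions)  # ['clean', 'move', 'clean']
```

More sophisticated agents replace the lookup table with search, planning, or learning, but the perceive-decide-act loop stays the same.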
The idea of ‘a machine that thinks’ dates back to ancient Greece. But since the advent of electronic computing, important events and milestones in the evolution of artificial intelligence include the following:
- 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing, famous for breaking the Nazi ENIGMA code during WWII, proposes to answer the question 'Can machines think?' and introduces the Turing Test to determine whether a computer can demonstrate the same intelligence (or the results of the same intelligence) as a human. The value of the Turing Test has been debated ever since.
- 1956: John McCarthy coined the term ‘artificial intelligence’ at the first-ever AI conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon created the Logic Theorist, the first-ever running AI software program.
- 1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that 'learned' through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which became both the landmark work on neural networks and, at least for a while, an argument against future neural network research projects.
- 1980s: Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
- 1997: IBM’s Deep Blue beats then world chess champion Garry Kasparov, in a chess match (and rematch).
- 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!
- 2015: Baidu’s Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
- 2016: DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves!). Google had acquired DeepMind in 2014 for a reported USD 400 million.
- 2023: A rise in large language models, or LLMs, such as ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pre-trained on vast amounts of raw, unlabeled data.
(Source: IBM, 2023)
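The trial-and-error learning attributed to Rosenblatt's perceptron in the timeline above can be sketched in a few lines. This is an illustrative modern reconstruction, not the original Mark 1 hardware or its data: the weights are nudged whenever the prediction is wrong, here until a single unit learns the logical AND function.

```python
# Illustrative reconstruction: a single perceptron learning logical AND.
def train_perceptron(samples, epochs=10, lr=0.1):
    """Nudge the weights toward the target whenever the prediction is wrong."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out              # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in samples]
print(preds)  # [0, 0, 0, 1]
```

Minsky and Papert's critique turned on exactly this model's limits: a single perceptron can only learn linearly separable functions such as AND, not XOR.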
- Goals of AI
The main goals of AI are:
- Imitating human intelligence
- Solving tasks that demand high-level skills
- Creating an intelligent link between insight and action
- Designing machines or devices that can perform jobs requiring human knowledge
- Building systems that exhibit intelligent behaviour and learn new things by themselves
Artificial Intelligence research during the last three decades has concluded that intelligence requires knowledge. To compensate for its indispensability, knowledge possesses some less desirable properties:
- It is voluminous.
- It is hard to characterize accurately.
- It is constantly changing.
- It differs from data by being organized in a way that corresponds to how it will be used.
- It is complicated.
An AI technique is a method that exploits knowledge that is represented so that:
- The knowledge captures generalizations: situations that share important properties are grouped together rather than being represented separately.
- It can be understood by the people who must provide it. Although for many programs the bulk of the data can be acquired automatically (for example, from instrument readings), in many AI domains most of the knowledge a program has must ultimately be supplied by people in terms they understand.
- It can be easily modified to correct errors and reflect changes in real conditions.
- It can be widely used even if it is incomplete or inaccurate.
- It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.
As programs increase in complexity, so do their use of generalizations, the clarity of their knowledge, and the extensibility of their approach. In this way, they move towards being representative of AI techniques.
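As a concrete illustration of knowledge that captures generalizations and can be easily modified, consider the following hypothetical sketch (the categories and facts are invented for illustration, not course material): each rule covers a whole class of situations, and correcting or extending the knowledge means editing one entry rather than rewriting the program.

```python
# Hypothetical rule base: each entry generalizes over every case that
# satisfies its conditions, instead of storing each case separately.
rules = {
    "bird":   ["has_feathers", "lays_eggs"],
    "mammal": ["has_fur", "gives_milk"],
}

def classify(facts, rules):
    """Return every category whose conditions are all present in the facts."""
    return [cat for cat, conds in rules.items()
            if all(c in facts for c in conds)]

print(classify({"has_feathers", "lays_eggs"}, rules))  # ['bird']

# Modifying the knowledge is a one-line change: add a rule, and the
# same program immediately uses it.
rules["reptile"] = ["has_scales", "lays_eggs"]
print(classify({"has_scales", "lays_eggs"}, rules))  # ['reptile']
```

The same pattern, scaled up and combined with inference, underlies the expert systems and knowledge representation schemes treated later in the course.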
AI has many branches; some are surely still missing, because no one has identified them yet, and some may be regarded as concepts or topics rather than full branches.
AI has applications in all fields of human studies, such as finance and economics, environmental engineering, chemistry, computer science, and so on. Some of the applications of AI are listed below:
- Perception
  - Machine vision
  - Speech understanding
  - Touch (tactile or haptic) sensation
- Robotics
- Natural Language Processing
  - Natural Language Understanding
  - Speech Understanding
  - Language Generation
  - Machine Translation
- Planning
- Expert Systems
- Machine Learning
- Theorem Proving
- Symbolic Mathematics
- Game Playing
- Customer Service
- Recommendation engines
- Automation
- Advantages and Disadvantages
- Advantages
  - Accurate, with fewer errors
  - Increased speed
  - Highly reliable
  - Useful for risky tasks and situations
  - Personal digital assistants
  - Public efficacy
- Disadvantages
  - Very costly
  - Cannot think creatively
  - No affection or emotions
  - Creates dependence on machines
  - No originality
Week 3–4: Understanding natural languages
- Not available - Check class discussion notes and group tasks
Week 5: Understanding natural languages
- Not available - Check class discussion notes and group tasks
Week 6–7: Knowledge representation
Week 8: Course Research Paper / Mid-Semester Test
Week 9–10: Expert Systems
Week 11–12: Pattern Recognition
- Robotics: class links and documents
Week 13–14: The Language LISP
- Not available - Check class discussion notes and group tasks
Week 15–16: The Language LISP
- Not available - Check class discussion notes and group tasks
Week 17: Revision
Week 18–19: Examination