CS 561: Artificial Intelligence
Instructor: Prof. Laurent Itti ([email protected])

Lectures: Online & OHE-100B, Mon & Wed, 12:30 - 14:20
Office hours: Mon 14:30 - 16:00, HNB-07A (Prof. Itti)

This class will use courses.uscden.net (Desire2Learn, D2L)
Up-to-date information, lecture notes, lecture videos
Homework posting and submission information
Grades, relevant dates, links, etc.

Textbook: [AIMA] Artificial Intelligence: A Modern Approach, by Russell & Norvig. (3rd ed)
Optional (ALFE): Autonomous Learning from the Environment by Shen
TAs:
Yunhao Ge [email protected] (50%)
Jincheng He [email protected] (50%)
Hexiang Hu [email protected] (25%)
Ang Li [email protected] (50%)
Daniel Link [email protected] (50%)
Setareh Nasihati Gilani [email protected] (25%)
Gozde Sahin [email protected] (50%)

CS 561: Artificial Intelligence
Course overview: foundations of symbolic intelligent systems. Agents, search, problem solving, logic, representation, reasoning, symbolic programming, and robotics.

Prerequisites: CS 455x, i.e., programming principles, discrete mathematics for computing, software design and software engineering concepts. Good knowledge of C++ and STL, or Java, or Python is needed for the programming assignments.

Grading: 20% for midterm 1 +
20% for midterm 2 +
30% for final +
30% for 3 mandatory homeworks/assignments

CS 561: Artificial Intelligence
Grading:

Grading is absolute and according to the following scale:

>= 90 A+ (honorary; shows as A on transcript)
>= 80 A
>= 75 A-
>= 70 B+
>= 60 B
>= 55 B-
>= 50 C+
>= 40 C
>= 35 C-
< 35 F

Practical issues
Class mailing list: will be set up on the D2L system and Piazza.
Homeworks: see the class web page on D2L. Homeworks are programming assignments.
Jan 27: HW1 out. Topic: search
Feb 22: HW1 due
Feb 24: HW2 out. Topic: game playing or constraint satisfaction
Mar 17: HW2 due
Mar 22: HW3 out. Topic: logic reasoning and inference, or neural networks
Apr 26: HW3 due
Late homeworks: you lose 20% of the homework grade per 24-hour period that you are late. Beware, the penalty grows very fast: grade = points * (1 - n * 0.2), where n is the number of days late (n=0 if submitted on time, n=1 if submitted between 1 second and 24h late, etc.).
Homework grading: your homeworks will be graded by an A.I. agent (given to you in advance for testing) through the online system at vocareum.com.
Grade review / adjustment: requests will be considered up to 2 weeks after the grade is released. After that, it will be too late and requests for grading review will be denied.

Exams:
Midterm 1: Monday March 1, 12:30 - 2:00pm
Midterm 2: Wednesday March 31, 12:30 - 2:00pm
Final exam: Friday May 7, 11:00am - 1:00pm

More on homeworks and grading
In each homework you will implement some algorithms from scratch. Our goal is to focus on A.I. algorithms, not on low-level programming; hence C++/STL is recommended, so that you can use the STL containers (queue, map, etc.) instead of raw pointers and memory management. But the language you use is up to you.
Code editing, compiling, testing: we will use www.vocareum.com, which will be linked to Desire2Learn (this is in progress at this time).
Vocareum supports several languages; the choice is up to you: C, C++, C++11, Java, Python, etc.
Your program should take no command-line arguments. It should read a text file called input.txt that contains a problem definition, and write a file output.txt with your solution. For each homework, the format of input.txt and output.txt will be specified and examples will be given to you.
Grading will, 50 times:
Create an input.txt file
Run your code
Compare the output.txt created by your program with the correct one
If your outputs for all 50 test cases are correct, you get 100 points. Otherwise, you get 100 - 4xN points, where N is the number of failed test cases, or 0 if more than 25 cases fail.

Vocareum.com tool versions:
gcc - 5.5
valgrind - 3.11
ar - 2.26.1
php - 7.0.32
python2 - 2.7.12
python3 - 3.5.2, 3.6.4
Java 1.8 - 1.8.0_191
perl - 5.22.1
make - 4.1
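To make the required program structure concrete (no command-line arguments, read input.txt, write output.txt), here is a minimal Python sketch. It assumes nothing about the actual file formats, which are specified separately for each homework; parse_problem and solve are hypothetical placeholders.

# Minimal sketch of the expected homework program structure:
# no command-line arguments, read input.txt, write output.txt.
# parse_problem() and solve() are placeholders; the real formats and
# algorithms are defined by each homework specification.

def parse_problem(text):
    # Placeholder: turn the raw text of input.txt into a problem description.
    return text.strip().splitlines()

def solve(problem):
    # Placeholder: run the required search / game-playing / inference algorithm.
    return "SOLUTION"

def main():
    with open("input.txt", "r") as f:
        problem = parse_problem(f.read())
    result = solve(problem)
    with open("output.txt", "w") as f:
        f.write(result + "\n")

if __name__ == "__main__":
    main()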
TA office hours
Yunhao Ge - [email protected] (50%) - Wednesdays 4-6pm
Jincheng He - [email protected] (50%) - Tuesdays 2-4pm
Hexiang Hu - [email protected] (25%) - Fridays 4-6pm
Ang Li - [email protected] (50%) - Thursdays 4-6pm
Daniel Link - [email protected] (50%) - Mondays 3-5pm
Setareh Nasihati Gilani - [email protected] (25%) - Tuesdays 10am-12pm
Gozde Sahin - [email protected] (50%) - Tuesdays 12-2pm

Academic Integrity
Familiarize yourself with the USC Academic Integrity guidelines. Violations of the Student Conduct Code will be filed with the Office of Student Judicial Affairs, and appropriate sanctions will be given.
Homework assignments are to be solved individually.
You are welcome to discuss class material in review groups, but do not discuss how to solve the homeworks.
Exams are closed-book with no questions allowed.
Please read and understand:
http://policy.usc.edu/student/scampus/
https://sjacs.usc.edu/students/

Academic Integrity
All students are responsible for reading and following the Student Conduct Code. Note that the USC Student Conduct Code prohibits plagiarism. Some examples of what is not allowed by the conduct code: copying all or part of someone else's work (by hand or by looking at others' files, either secretly or if shown) and submitting it as your own; giving another student in the class a copy of your assignment solution; and consulting with another student during an exam. If you have questions about what is allowed, please discuss it with the instructor.
Students who violate university standards of academic integrity are subject to disciplinary sanctions, including failure in the course and suspension from the university. Since dishonesty in any form harms the individual, other students, and the university, policies on academic integrity will be strictly enforced. Violations of the Student Conduct Code will be filed with the Office of Student Judicial Affairs.

Piazza
We will use piazza.com for questions and answers related to class material.
Please register here: http://piazza.com/usc/spring2021/csci561
Guidelines:
You may ask any question related to material covered in lectures, discussions, or exams.
You may ask clarification questions only related to the homework definitions.
You may not ask for advice on how to solve some aspect of a homework problem.
You may not post code snippets related to homework problems.
You may not post test cases or input/output examples related to homework problems.
As a general rule, remember that homeworks are to be solved strictly individually.

Why study AI?
Search engines
Labor
Science
Medicine / Diagnosis
Appliances / Internet of Things (IoT)
What else?
More examples: the DARPA Robotics Challenge; wearable computing (Zypad, Google Glass, Microsoft HoloLens).

What is AI?
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense" (Haugeland, 1985)
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
"The study of mental faculties through the use of computational models" (Charniak et al., 1985)
"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)
Four views: systems that think like humans, systems that think rationally, systems that act like humans, systems that act rationally.

Acting Rationally: The Rational Agent
Rational behavior: doing the right thing!
The right thing: that which is expected to maximize the expected return.
Provides the most general view of AI because it includes:
Correct inference (laws of thought)
Uncertainty handling
Resource limitation considerations (e.g., reflex vs. deliberation)
Cognitive skills (NLP, AR, knowledge representation, ML, etc.)
Advantages: more general; its goal of rationality is well defined.

Acting Humanly: The Turing Test
Alan Turing's 1950 article "Computing Machinery and Intelligence" discussed conditions for considering a machine to be intelligent.
Can machines think? Can machines behave intelligently?
The Turing test (The Imitation Game): an operational definition of intelligence.
The computer needs to possess: natural language processing, knowledge representation, automated reasoning, and machine learning.
Are there any problems/limitations to the Turing Test?

Acting Humanly: The Full Turing Test
Can machines think?
Can machines behave intelligently?
The Turing test (The Imitation Game): an operational definition of intelligence.
The computer needs to possess: natural language processing, knowledge representation, automated reasoning, and machine learning.
Problems: 1) the Turing test is not reproducible, constructive, or amenable to mathematical analysis; 2) what about physical interaction with the interrogator and environment?
Total Turing Test: requires physical interaction, and needs perception and actuation.

What would a computer need to pass the Turing test?
Natural language processing: to communicate with the examiner.
Knowledge representation: to store and retrieve information provided before or during interrogation.
Automated reasoning: to use the stored information to answer questions and to draw new conclusions.
Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.
(These four are the core focus of this course.)
Vision (for the Total Turing test): to recognize the examiner's actions and the various objects presented by the examiner.
Motor control (Total test): to act upon objects as requested.
Other senses (Total test): such as audition, smell, touch, etc.

CAPTCHAs or reverse Turing tests
Vision is a particularly difficult one for machines.
This gave rise to the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA).

AI Prehistory; AI History (figure slides)

AI State of the art
Have the following been achieved by AI?
Pass the Turing test
World-class chess or go playing
Playing table tennis
Autonomous cross-country driving
Solving mathematical problems
Discovering and proving mathematical theories
Engaging in a meaningful conversation
Understanding spoken language
Observing and understanding human emotions
Expressing emotions

General Introduction
01-Introduction. [AIMA Ch 1] Course schedule. Homeworks, exams and grading. Course material, TAs and office hours. Why study AI? What is AI? The Turing test. Rationality. Branches of AI. Research disciplines connected to and at the foundation of AI. Brief history of AI. Challenges for the future. Overview of class syllabus.
Intelligent Agents. [AIMA Ch 2] What is an intelligent agent? Examples. Doing the right thing (rational action). Performance measure. Autonomy. Environment and agent design. Structure of agents. Agent types. Reflex agents. Reactive agents. Reflex agents with state. Goal-based agents. Utility-based agents. Mobile agents. Information agents.

Course Overview (agent diagram: sensors, effectors, agent)

Course Overview (cont.)
02-Problem solving and search. [AIMA Ch 3] Example: measuring problem. Types of problems. More example problems. Basic idea behind search algorithms. Complexity. Combinatorial explosion and NP-completeness. Polynomial hierarchy.
03-Uninformed search. [AIMA Ch 3] Depth-first. Breadth-first. Uniform-cost. Depth-limited. Iterative deepening. Examples. Properties.
04/05-Informed search. [AIMA Ch 4] Best-first. A* search. Heuristics. Hill climbing. Problem of local extrema. Simulated annealing. Genetic algorithms.
Example: buckets of 3 l, 5 l and 9 l; using these 3 buckets, measure 7 liters of water. Traveling salesperson problem. How can we solve complex problems?
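As a taste of the uninformed-search material, here is a minimal breadth-first-search sketch for the measuring problem above (3, 5 and 9 liter buckets, measure 7 liters). The state encoding and the fill/empty/pour action set are one reasonable formulation chosen for illustration, not the formulation required by any homework.

# Breadth-first search for the measuring problem: with buckets of capacity
# 3, 5 and 9 liters, reach a state where some bucket holds exactly 7 liters.

from collections import deque

CAPACITIES = (3, 5, 9)
GOAL_AMOUNT = 7

def successors(state):
    n = len(state)
    for i in range(n):
        # Fill bucket i to its capacity.
        yield ("fill", i), state[:i] + (CAPACITIES[i],) + state[i+1:]
        # Empty bucket i.
        yield ("empty", i), state[:i] + (0,) + state[i+1:]
        # Pour bucket i into bucket j until i is empty or j is full.
        for j in range(n):
            if i != j:
                amount = min(state[i], CAPACITIES[j] - state[j])
                new = list(state)
                new[i] -= amount
                new[j] += amount
                yield ("pour", i, j), tuple(new)

def bfs(start=(0, 0, 0)):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if GOAL_AMOUNT in state:
            return path, state
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

plan, final_state = bfs()
print(final_state)  # a state containing 7 liters (only the 9-liter bucket can hold 7)
print(plan)         # the sequence of fill/empty/pour actions found by BFS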
Course Overview (cont.)
Practical applications of search.
06/07-Game playing. [AIMA Ch 5] The minimax algorithm. Resource limitations. Alpha-beta pruning. Elements of chance and non-deterministic games. Example: tic-tac-toe.

Course Overview (cont.)
Search under constraints.
08-Constraint satisfaction. [AIMA Ch 6] Node, arc, path, and k-consistency. Backtracking search. Local search using min-conflicts. Example: map coloring.

Course Overview (cont.)
Towards intelligent agents.
9-Agents that reason logically 1. [AIMA Ch 7] Knowledge-based agents. Logic and representation. Propositional (boolean) logic.
10-Agents that reason logically 2. [AIMA Ch 7] Inference in propositional logic. Syntax. Semantics. Examples. Example: wumpus world.
(Note: room not available week of Sept 16.)

Course Overview (cont.)
Building knowledge-based agents: first-order logic.
11-First-order logic 1. [AIMA Ch 8] Syntax. Semantics. Atomic sentences. Complex sentences. Quantifiers. Examples. FOL knowledge base. Situation calculus.
12-First-order logic 2. [AIMA Ch 8] Describing actions. Planning. Action sequences.

Course Overview (cont.)
Reasoning logically.
13/14-Inference in first-order logic. [AIMA Ch 9] Proofs. Unification. Generalized modus ponens. Forward and backward chaining. Example of backward chaining.

Course Overview (cont.)
Examples of logical reasoning systems.
15-Logical reasoning systems. [AIMA Ch 9] Indexing, retrieval and unification. The Prolog language. Theorem provers. Frame systems and semantic networks. Example: a semantic network used in an insight generator (Duke University).

Course Overview (cont.)
Systems that can plan future behavior.
16-Planning. [AIMA Ch 10] Definition and goals. Basic representations for planning. Situation space and plan space. Examples.

Course Overview (cont.)
Handling fuzziness, change, uncertainty.
17-Fuzzy logic. [handout] Fuzzy variables, fuzzy inference, aggregation, defuzzification (e.g., center of largest area, center of gravity).
18-Learning from examples. [AIMA 18 + handout] Supervised learning, learning decision trees, support vector machines.

Course Overview (cont.)
Learning with neural networks.
19/20-Learning with Neural Networks. [Handout + AIMA 18] Introduction to perceptrons, Hopfield networks, self-organizing feature maps. How to size a network? What can neural networks achieve? Advanced concepts: convnets, deep learning, stochastic gradient descent, dropout learning, autoencoders, applications and state of the art.

Course Overview (cont.)
21/22-Probabilistic reasoning. [AIMA Ch 13, 14, 15] Reasoning under uncertainty: probabilities, conditional independence, Markov blanket, Bayes nets. Probabilistic reasoning in time: Hidden Markov Models, Kalman filters, dynamic Bayesian networks.
23-Probabilistic decision making. [AIMA 16, 17] Utility theory, decision networks, value iteration, policy iteration, Markov decision processes (MDP), partially-observable MDP (POMDP).
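Since the item above lists value iteration and MDPs, here is a minimal value-iteration sketch on a tiny made-up MDP; the states, actions, transition probabilities and rewards are invented purely for illustration and are not course material.

# Minimal value iteration on a tiny, made-up MDP (two states, two actions).
# Bellman backup: V(s) <- max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))

GAMMA = 0.9
STATES = ("cool", "hot")
ACTIONS = ("work", "rest")

# Transition model: (state, action) -> list of (probability, next_state, reward).
# These numbers are invented for illustration only.
T = {
    ("cool", "work"): [(0.7, "cool", 2.0), (0.3, "hot", 2.0)],
    ("cool", "rest"): [(1.0, "cool", 1.0)],
    ("hot", "work"):  [(1.0, "hot", -1.0)],
    ("hot", "rest"):  [(0.6, "cool", 0.0), (0.4, "hot", 0.0)],
}

def value_iteration(theta=1e-6):
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            q_values = [
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in T[(s, a)])
                for a in ACTIONS
            ]
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            return V

print(value_iteration())  # converged state values for the toy MDP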
Course Overview (cont.)
24-Probabilistic reasoning over time. [AIMA 15] Temporal models, Hidden Markov Models, Kalman filters, Dynamic Bayesian Networks, Automata theory.
25-Probability-based learning. [AIMA 20-21] Probabilistic models, Naive Bayes models, EM algorithm, Reinforcement Learning.

Course Overview (cont.)
What challenges remain? Towards intelligent machines. [AIMA Ch 26, 27] The challenge of robots: with what we have learned, what hard problems remain to be solved? Different types of robots. Tasks that robots are for. Parts of robots. Architectures. Configuration spaces. Navigation and motion planning. Towards highly-capable robots. What have we learned? Where do we go from here? (robotics@USC)

Defining intelligent agents
Intelligent Agents (IA)
Environment types
IA behavior
IA structure
IA types

What is an (Intelligent) Agent?
An over-used, over-loaded, and misused term.
Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.
PAGE (Percepts, Actions, Goals, Environment)
Task-specific & specialized: well-defined goals and environment.
The notion of an agent is meant to be a tool for analyzing systems; it is not different hardware or a new programming language.

Rational Agents
(Diagram) The agent interacts with its environment: percepts come in through sensors, actions go out through effectors. How to design this?

A Windshield Wiper Agent
How do we design an agent that can wipe the windshields when needed?
Goals? Percepts? Sensors? Effectors? Actions? Environment?

A Windshield Wiper Agent (cont'd)
Goals: keep windshields clean & maintain visibility
Percepts: raining, dirty
Sensors: camera (moisture sensor)
Effectors: wipers (left, right, back)
Actions: off, slow, medium, fast
Environment: inner city, freeways, highways, weather

Interacting Agents
Collision Avoidance Agent (CAA)
Goals: avoid running into obstacles
Percepts: obstacle distance, velocity, trajectory
Sensors: vision, proximity sensing
Effectors: steering wheel, accelerator, brakes, horn, headlights
Actions: steer, speed up, brake, blow horn, signal (headlights)
Environment: freeway

Lane Keeping Agent (LKA)
Goals: stay in current lane
Percepts: lane center, lane boundaries
Sensors: vision
Effectors: steering wheel, accelerator, brakes
Actions: steer, speed up, brake
Environment: freeway

Conflict Resolution by Action Selection Agents
Override: CAA overrides LKA
Arbitrate: if obstacle is close then CAA, else LKA
Compromise: choose an action that satisfies both agents
Any combination of the above
Challenges: doing the right thing
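To make the "arbitrate" scheme above concrete, here is a minimal sketch. The caa_action and lka_action policies and the CLOSE_THRESHOLD value are made up for illustration; only the if-close-then-CAA-else-LKA rule comes from the slide.

# Minimal sketch of the "arbitrate" conflict-resolution rule between a
# Collision Avoidance Agent (CAA) and a Lane Keeping Agent (LKA).

CLOSE_THRESHOLD = 10.0  # meters; made-up value for illustration

def caa_action(percepts):
    # Placeholder collision-avoidance policy.
    return "brake"

def lka_action(percepts):
    # Placeholder lane-keeping policy.
    return "steer_to_lane_center"

def arbitrate(percepts):
    # If an obstacle is close, the CAA decides; otherwise the LKA decides.
    if percepts["obstacle_distance"] < CLOSE_THRESHOLD:
        return caa_action(percepts)
    return lka_action(percepts)

print(arbitrate({"obstacle_distance": 4.0}))   # brake
print(arbitrate({"obstacle_distance": 50.0}))  # steer_to_lane_center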
The Right Thing = The Rational Action
Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date.
Rational = Best? Rational = Optimal? Rational = Omniscience? Rational = Clairvoyant? Rational = Successful?

The Right Thing = The Rational Action
Rational = Best: yes, to the best of its knowledge.
Rational = Optimal: yes, to the best of its abilities (including its constraints).
Rational ≠ Omniscience. Rational ≠ Clairvoyant. Rational ≠ Successful.

Behavior and performance of IAs
Perception (sequence) to action mapping: f : P* → A
Ideal mapping: specifies which actions an agent ought to take at any point in time.
Implementation: look-up table, closed form, algorithm, etc.
Performance measure: a subjective measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money, etc.).
(Degree of) autonomy: to what extent is the agent able to make decisions and take actions on its own?

Look-up table (diagram: agent, obstacle, sensor)
Distance  Action
10        No action
5         Turn left 30 degrees
2         Stop

Closed form
Output (degree of rotation) = F(distance)
E.g., F(d) = 10/d (distance cannot be less than 1/10)
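A minimal sketch of these two ways to implement the distance-to-action mapping, using the table entries and the F(d) = 10/d form from the slides; the exact-match lookup and the clamping below are simplifications for illustration.

# Two ways to implement the distance -> action mapping from the slides.

# 1) Look-up table: explicit (percept -> action) entries.
lookup_table = {
    10: "No action",
    5:  "Turn left 30 degrees",
    2:  "Stop",
}

def action_from_table(distance):
    # Only the distances listed on the slide are covered here.
    return lookup_table.get(distance, "Unknown percept")

# 2) Closed form: output (degrees of rotation) = F(distance) = 10 / distance,
#    with distance clamped so it cannot be less than 1/10 (as stated on the slide).
def rotation_degrees(distance):
    d = max(distance, 0.1)
    return 10.0 / d

print(action_from_table(5))    # Turn left 30 degrees
print(rotation_degrees(2.0))   # 5.0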
How is an Agent different from other software?
Agents are autonomous, that is, they act on behalf of the user.
Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt to changes in the environment.
Agents don't only act reactively, but sometimes also proactively.
Agents have social ability, that is, they communicate with the user, the system, and other agents as required.
Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle.
Agents may migrate from one system to another to access remote resources or even to meet other agents.

Environment Types
Characteristics:
Accessible vs. inaccessible. Accessible: sensors give access to the complete state of the environment.
Deterministic vs. nondeterministic. Deterministic: the next state can be determined based on the current state and the action.
Episodic vs. nonepisodic (sequential). An episode is each perceive-and-act pair; in episodic environments, the quality of an action does not depend on the previous episode and does not affect the next episode. Example: episodic: mail sorting system; non-episodic: chess.
Hostile vs. friendly.
Static vs. dynamic. Dynamic if the environment changes during deliberation.
Discrete vs. continuous. Example: chess vs. driving.

Environment types
Environment          Accessible  Deterministic  Episodic  Static  Discrete
Operating System     Yes         Yes            No        No      Yes
Virtual Reality      Yes         Yes            Yes/No    No      Yes/No
Office Environment   No          No             No        No      No
Mars                 No          Semi           No        Semi    No
The environment types largely determine the agent design.

Structure of Intelligent Agents
Agent = architecture + program
Agent program: the implementation of f : P* → A, the agent's perception-action mapping (a Python sketch of this skeleton appears after the summary below):
    function Skeleton-Agent(Percept) returns Action
        memory ← UpdateMemory(memory, Percept)
        Action ← ChooseBestAction(memory)
        memory ← UpdateMemory(memory, Action)
        return Action
Architecture: a device that can execute the agent program (e.g., general-purpose computer, specialized device, beobot, etc.)

Using a look-up table to encode f : P* → A
Example: collision avoidance
Sensors: 3 proximity sensors; Effectors: steering wheel, brakes
How to generate? How large? How to select an action?
How to generate: for each percept p in Pl x Pm x Pr, generate an appropriate action a in S x B.
How large: size of table = # possible percepts times # possible actions = |Pl| x |Pm| x |Pr| x |S| x |B|.
E.g., P = {close, medium, far}^3 and A = {left, straight, right} x {on, off}; then the table has 27 rows. Total possible combinations (ways to fill the table): 27 x 3 x 2 = 162.
How to select an action? Search.

Agent types
Reflex agents, reflex agents with internal states, goal-based agents, utility-based agents, learning agents.
Reflex agents: reactive, no memory.
Reflex agents with internal states: without the previous state, the agent may not be able to make a decision (e.g., brake lights at night).
Goal-based agents: goal information is needed to make a decision.
Utility-based agents: how well can the goal be achieved (degree of happiness)? What to do if there are conflicting goals (e.g., speed and safety)? Which goal should be selected if several can be achieved?
Learning agents: how can I adapt to the environment? How can I learn from my mistakes?

Reactive agents
Reactive agents do not have internal symbolic models; they act by stimulus-response to the current state of the environment.
Each reactive agent is simple and interacts with others in a basic way; complex patterns of behavior emerge from their interaction.
Benefits: robustness, fast response time.
Challenges: scalability; how intelligent are they, and how do you debug them?

(Architecture diagrams: reflex agents, reflex agents with state, goal-based agents, utility-based agents, learning agents.)
Learning agents:
Critic: determines outcomes of actions and gives feedback.
Problem generator: creates new experiences to promote learning.
Learning element: takes feedback from the critic and improves the performance element.

Summary on intelligent agents
Intelligent Agents: anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.
PAGE (Percepts, Actions, Goals, Environment).
Described as a perception (sequence) to action mapping f : P* → A, using a look-up table, closed form, etc.
Agent types: reflex, state-based, goal-based, utility-based, learning.
Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date.
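Following up on the Skeleton-Agent pseudocode and the collision-avoidance look-up table above, here is a minimal Python sketch. The percept encoding, the memory handling and the particular table policy are made up for illustration; only the overall structure (update memory, look up the best action, update memory again) follows the pseudocode.

# Python sketch of the Skeleton-Agent program, using a look-up table to
# encode f : P* -> A for the collision-avoidance example (3 proximity sensors).

from itertools import product

READINGS = ("close", "medium", "far")   # Pl = Pm = Pr
STEER = ("left", "straight", "right")   # S
BRAKE = ("on", "off")                   # B

def make_table():
    # One action per percept: 3*3*3 = 27 rows. This toy policy brakes and
    # steers away from the nearest obstacle; any other policy would also fit.
    table = {}
    for pl, pm, pr in product(READINGS, repeat=3):
        if pm == "close":
            table[(pl, pm, pr)] = ("left" if pr == "far" else "right", "on")
        elif "close" in (pl, pr):
            table[(pl, pm, pr)] = ("straight", "on")
        else:
            table[(pl, pm, pr)] = ("straight", "off")
    return table

TABLE = make_table()

def update_memory(memory, event):
    # Record the most recent percept or action (a stand-in for richer state).
    key = "last_percept" if len(event) == 3 else "last_action"
    memory[key] = event
    return memory

def choose_best_action(memory):
    # "Best" here is just a table lookup on the most recent percept.
    return TABLE[memory["last_percept"]]

def skeleton_agent(percept, memory):
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return action, memory

memory = {}
action, memory = skeleton_agent(("far", "close", "far"), memory)
print(action)  # ('left', 'on')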
