CS50-raw / WbzNRTTrX0g.json
[
{
"text": "[MUSIC]",
"start": 0.0,
"duration": 17.0
},
{
"text": "BRIAN YU: All right.",
"start": 17.87,
"duration": 0.84
},
{
"text": "Welcome, everyone, to an Introduction\nto Artificial Intelligence with Python.",
"start": 18.71,
"duration": 3.57
},
{
"text": "My name is Brian Yu.",
"start": 22.28,
"duration": 1.2
},
{
"text": "And in this class, we'll explore\nsome of the ideas, and techniques,",
"start": 23.48,
"duration": 3.06
},
{
"text": "and algorithms that are at the\nfoundation of artificial intelligence.",
"start": 26.54,
"duration": 3.91
},
{
"text": "Now, artificial intelligence covers a\nwide variety of types of techniques.",
"start": 30.45,
"duration": 3.71
},
{
"text": "Anytime you see a\ncomputer do something that",
"start": 34.16,
"duration": 1.95
},
{
"text": "appears to be intelligent\nor rational in some way,",
"start": 36.11,
"duration": 3.18
},
{
"text": "like recognizing\nsomeone's face in a photo,",
"start": 39.29,
"duration": 2.22
},
{
"text": "or being able to play a\ngame better than people can,",
"start": 41.51,
"duration": 2.52
},
{
"text": "or being able to understand human\nlanguage when we talk to our phones",
"start": 44.03,
"duration": 3.09
},
{
"text": "and they understand what we mean\nand are able to respond back to us,",
"start": 47.12,
"duration": 3.25
},
{
"text": "these are all examples of AI,\nor artificial intelligence.",
"start": 50.37,
"duration": 3.69
},
{
"text": "And in this class we'll explore some of\nthe ideas that make that AI possible.",
"start": 54.06,
"duration": 4.52
},
{
"text": "So we'll begin our\nconversations with search.",
"start": 58.58,
"duration": 2.21
},
{
"text": "The problem of, we\nhave an AI and we would",
"start": 60.79,
"duration": 1.75
},
{
"text": "like the AI to be able to search for\nsolutions to some kind of problem,",
"start": 62.54,
"duration": 3.61
},
{
"text": "no matter what that problem might be.",
"start": 66.15,
"duration": 1.55
},
{
"text": "Whether it's trying to get driving\ndirections from point A to point B,",
"start": 67.7,
"duration": 3.6
},
{
"text": "or trying to figure\nout how to play a game,",
"start": 71.3,
"duration": 1.8
},
{
"text": "giving a tic-tac-toe game, for\nexample, figuring out what move",
"start": 73.1,
"duration": 3.48
},
{
"text": "it ought to make.",
"start": 76.58,
"duration": 1.38
},
{
"text": "After that, we'll take\na look at knowledge.",
"start": 77.96,
"duration": 2.37
},
{
"text": "Ideally, we want our AI to\nbe able to know information,",
"start": 80.33,
"duration": 3.15
},
{
"text": "to be able to represent\nthat information,",
"start": 83.48,
"duration": 1.92
},
{
"text": "and more importantly, to be able to\ndraw inferences from that information.",
"start": 85.4,
"duration": 3.18
},
{
"text": "To be able to use the information it\nknows and draw additional conclusions.",
"start": 88.58,
"duration": 4.18
},
{
"text": "So we'll talk about how AI can be\nprogrammed in order to do just that.",
"start": 92.76,
"duration": 4.37
},
{
"text": "Then we'll explore the\ntopic of uncertainty.",
"start": 97.13,
"duration": 2.22
},
{
"text": "Talking about ideas of, what happens\nif a computer isn't sure about a fact",
"start": 99.35,
"duration": 3.75
},
{
"text": "but maybe is only sure\nwith a certain probability?",
"start": 103.1,
"duration": 2.85
},
{
"text": "So we'll talk about some of\nthe ideas behind probability",
"start": 105.95,
"duration": 2.55
},
{
"text": "and how computers can begin\nto deal with uncertain events",
"start": 108.5,
"duration": 3.03
},
{
"text": "in order to be a little bit more\nintelligent in that sense, as well.",
"start": 111.53,
"duration": 4.45
},
{
"text": "After that, we'll turn our\nattention to optimization.",
"start": 115.98,
"duration": 2.57
},
{
"text": "Problems of when the computer is trying\nto optimize for some sort of goal,",
"start": 118.55,
"duration": 3.42
},
{
"text": "especially in a situation\nwhere there might",
"start": 121.97,
"duration": 1.8
},
{
"text": "be multiple ways that a\ncomputer might solve a problem,",
"start": 123.77,
"duration": 2.61
},
{
"text": "but we're looking for a better\nway or, potentially, the best way",
"start": 126.38,
"duration": 3.39
},
{
"text": "if that's at all possible.",
"start": 129.77,
"duration": 1.78
},
{
"text": "Then we'll take a look at machine\nlearning, or learning more generally.",
"start": 131.55,
"duration": 2.96
},
{
"text": "In looking at how when\nwe have access to data",
"start": 134.51,
"duration": 2.19
},
{
"text": "our computers can be programmed to be\nquite intelligent by learning from data",
"start": 136.7,
"duration": 3.6
},
{
"text": "and learning from experience, being\nable to perform a task better and better",
"start": 140.3,
"duration": 3.51
},
{
"text": "based on greater access to data.",
"start": 143.81,
"duration": 1.96
},
{
"text": "So your email, for example,\nwhere your email inbox somehow",
"start": 145.77,
"duration": 2.75
},
{
"text": "knows which of your emails are good\nemails and whichever emails are spam.",
"start": 148.52,
"duration": 3.66
},
{
"text": "These are all examples\nof computers being",
"start": 152.18,
"duration": 2.25
},
{
"text": "able to learn from past\nexperiences and past data.",
"start": 154.43,
"duration": 3.99
},
{
"text": "We'll take a look, too, at how\ncomputers are able to draw inspiration",
"start": 158.42,
"duration": 3.27
},
{
"text": "from human intelligence, looking\nat the structure of the human brain",
"start": 161.69,
"duration": 3.24
},
{
"text": "and how neural networks can be a\ncomputer analog to that sort of idea.",
"start": 164.93,
"duration": 3.85
},
{
"text": "And how, by taking advantage of a\ncertain type of structure of a computer",
"start": 168.78,
"duration": 3.05
},
{
"text": "program, we can write\nneural networks that",
"start": 171.83,
"duration": 2.1
},
{
"text": "are able to perform tasks\nvery, very effectively.",
"start": 173.93,
"duration": 3.13
},
{
"text": "And then finally, we'll turn\nour attention to language.",
"start": 177.06,
"duration": 2.36
},
{
"text": "Not programming languages, but human\nlanguages that we speak every day.",
"start": 179.42,
"duration": 3.4
},
{
"text": "And taking a look at\nthe challenges that come",
"start": 182.82,
"duration": 1.88
},
{
"text": "about as a computer tries to\nunderstand natural language",
"start": 184.7,
"duration": 3.18
},
{
"text": "and how it is some of\nthe natural language",
"start": 187.88,
"duration": 2.1
},
{
"text": "processing that occurs in\nmodern artificial intelligence",
"start": 189.98,
"duration": 2.88
},
{
"text": "can actually work.",
"start": 192.86,
"duration": 2.13
},
{
"text": "But today it will begin our\nconversation with search.",
"start": 194.99,
"duration": 2.64
},
{
"text": "This problem of trying\nto figure out what",
"start": 197.63,
"duration": 1.92
},
{
"text": "to do when we have some sort of\nsituation that the computer is in,",
"start": 199.55,
"duration": 3.24
},
{
"text": "some sort of environment that\nan agent is in, so to speak.",
"start": 202.79,
"duration": 3.09
},
{
"text": "And we would like for that agent to\nbe able to somehow look for a solution",
"start": 205.88,
"duration": 3.84
},
{
"text": "to that problem.",
"start": 209.72,
"duration": 1.48
},
{
"text": "Now, these problems can come in any\nnumber of different types of formats.",
"start": 211.2,
"duration": 3.05
},
{
"text": "One example, for instance, might\nbe something like this classic 15",
"start": 214.25,
"duration": 2.84
},
{
"text": "puzzle with the sliding\ntiles that you might",
"start": 217.09,
"duration": 1.84
},
{
"text": "have seen, where you're trying to\nslide the tiles in order to make sure",
"start": 218.93,
"duration": 3.0
},
{
"text": "that all the numbers line up in order.",
"start": 221.93,
"duration": 1.8
},
{
"text": "This is an example of what you\nmight call a search problem.",
"start": 223.73,
"duration": 2.97
},
{
"text": "The 15 puzzle begins in an\ninitially mixed up state",
"start": 226.7,
"duration": 3.66
},
{
"text": "and we need some way of finding\nmoves to make in order to return",
"start": 230.36,
"duration": 3.12
},
{
"text": "the puzzle to its solved state.",
"start": 233.48,
"duration": 2.04
},
{
"text": "But there are similar problems\nthat you can frame in other ways.",
"start": 235.52,
"duration": 2.67
},
{
"text": "Trying to find your way\nthrough a maze, for example,",
"start": 238.19,
"duration": 2.2
},
{
"text": "is another example of a search problem.",
"start": 240.39,
"duration": 1.76
},
{
"text": "You begin in one place, you have some\ngoal of where you're trying to get to,",
"start": 242.15,
"duration": 3.57
},
{
"text": "and you need to figure out the correct\nsequence of actions that will take you",
"start": 245.72,
"duration": 3.21
},
{
"text": "from that initial state to the goal.",
"start": 248.93,
"duration": 2.53
},
{
"text": "And while this is a little\nbit abstract, anytime",
"start": 251.46,
"duration": 2.0
},
{
"text": "we talk about maze\nsolving in this class,",
"start": 253.46,
"duration": 1.98
},
{
"text": "you can translate it to something\na little more real world,",
"start": 255.44,
"duration": 2.55
},
{
"text": "something like driving directions.",
"start": 257.99,
"duration": 1.83
},
{
"text": "If you ever wonder how Google\nMaps is able to figure out what",
"start": 259.82,
"duration": 2.7
},
{
"text": "is the best way for you to\nget from point A to point B",
"start": 262.52,
"duration": 3.03
},
{
"text": "and what turns to make, at what time,\ndepending on traffic, for example.",
"start": 265.55,
"duration": 3.31
},
{
"text": "It's often some sort\nof search algorithm.",
"start": 268.86,
"duration": 2.39
},
{
"text": "You have an AI that is trying\nto get from an initial position",
"start": 271.25,
"duration": 3.12
},
{
"text": "to some sort of goal by taking\nsome sequence of actions.",
"start": 274.37,
"duration": 3.69
},
{
"text": "So we'll start our\nconversations today by thinking",
"start": 278.06,
"duration": 2.58
},
{
"text": "about these types of\nsearch problems and what",
"start": 280.64,
"duration": 1.95
},
{
"text": "goes in to solving a search problem\nlike this in order for an AI",
"start": 282.59,
"duration": 3.66
},
{
"text": "to be able to find a good solution.",
"start": 286.25,
"duration": 2.04
},
{
"text": "In order to do so, though,\nwe're going to need",
"start": 288.29,
"duration": 1.92
},
{
"text": "to introduce a little bit of\nterminology, some of which",
"start": 290.21,
"duration": 2.37
},
{
"text": "I've already used.",
"start": 292.58,
"duration": 1.11
},
{
"text": "But the first time we'll need\nto think about is an agent.",
"start": 293.69,
"duration": 3.04
},
{
"text": "An agent is just some entity\nthat perceives its environment,",
"start": 296.73,
"duration": 3.22
},
{
"text": "it somehow is able to\nperceive the things around it,",
"start": 299.95,
"duration": 2.17
},
{
"text": "and act on that environment in some way.",
"start": 302.12,
"duration": 2.41
},
{
"text": "So in the case of the\ndriving directions,",
"start": 304.53,
"duration": 1.71
},
{
"text": "your agent might be some\nrepresentation of a car that",
"start": 306.24,
"duration": 2.72
},
{
"text": "is trying to figure out what\nactions to take in order",
"start": 308.96,
"duration": 2.22
},
{
"text": "to arrive at a destination.",
"start": 311.18,
"duration": 1.56
},
{
"text": "In the case of the 15 puzzle\nwith the sliding tiles,",
"start": 312.74,
"duration": 2.7
},
{
"text": "the agent might be the AI or the person\nthat is trying to solve that puzzle,",
"start": 315.44,
"duration": 3.81
},
{
"text": "trying to figure out what tiles to\nmove in order to get to that solution.",
"start": 319.25,
"duration": 4.84
},
{
"text": "Next, we introduce the idea of a state.",
"start": 324.09,
"duration": 2.59
},
{
"text": "A state is just some configuration\nof the agent in its environment.",
"start": 326.68,
"duration": 4.46
},
{
"text": "So in the 15 puzzle, for example, any\nstate might be any one of these three",
"start": 331.14,
"duration": 3.74
},
{
"text": "for example.",
"start": 334.88,
"duration": 0.69
},
{
"text": "A state is just some\nconfiguration of the tiles.",
"start": 335.57,
"duration": 3.0
},
{
"text": "Each of these states is\ndifferent and is going",
"start": 338.57,
"duration": 1.92
},
{
"text": "to require a slightly\ndifferent solution.",
"start": 340.49,
"duration": 2.34
},
{
"text": "A different sequence of actions will\nbe needed in each one of these in order",
"start": 342.83,
"duration": 3.57
},
{
"text": "to get from this initial\nstate to the goal, which",
"start": 346.4,
"duration": 3.33
},
{
"text": "is where we're trying to get.",
"start": 349.73,
"duration": 1.77
},
{
"text": "The initial state then.",
"start": 351.5,
"duration": 0.96
},
{
"text": "What is that?",
"start": 352.46,
"duration": 0.75
},
{
"text": "The initial state is just the\nstate where the agent begins.",
"start": 353.21,
"duration": 2.88
},
{
"text": "It is one such state where\nwe're going to start from",
"start": 356.09,
"duration": 2.79
},
{
"text": "and this is going to be the starting\npoint for our search algorithm,",
"start": 358.88,
"duration": 3.13
},
{
"text": "so to speak.",
"start": 362.01,
"duration": 0.68
},
{
"text": "We're going to begin\nwith this initial state",
"start": 362.69,
"duration": 2.02
},
{
"text": "and then start to reason about it, to\nthink about what actions might we apply",
"start": 364.71,
"duration": 3.47
},
{
"text": "to that initial state in order to\nfigure out how to get from the beginning",
"start": 368.18,
"duration": 3.69
},
{
"text": "to the end, from the initial position\nto whatever our goal happens to be.",
"start": 371.87,
"duration": 4.98
},
{
"text": "And how do we make our way from\nthat initial position to the goal?",
"start": 376.85,
"duration": 2.82
},
{
"text": "Well ultimately, it's\nvia taking actions.",
"start": 379.67,
"duration": 2.43
},
{
"text": "Actions are just choices that\nwe can make in any given state.",
"start": 382.1,
"duration": 3.47
},
{
"text": "And in AI, we're always going to try\nto formalize these ideas a little bit",
"start": 385.57,
"duration": 3.61
},
{
"text": "more precisely such that we could\nprogram them a little bit more",
"start": 389.18,
"duration": 2.67
},
{
"text": "mathematically, so to speak.",
"start": 391.85,
"duration": 1.59
},
{
"text": "So this will be a recurring theme and\nwe can more precisely define actions",
"start": 393.44,
"duration": 4.14
},
{
"text": "as a function.",
"start": 397.58,
"duration": 1.26
},
{
"text": "We're going to effectively\ndefine a function called actions",
"start": 398.84,
"duration": 2.73
},
{
"text": "that takes an input S, where S is going\nto be some state that exists inside",
"start": 401.57,
"duration": 4.92
},
{
"text": "of our environment, and actions of S\nis going to take the state as input",
"start": 406.49,
"duration": 4.23
},
{
"text": "and return as output the\nset of all actions that",
"start": 410.72,
"duration": 3.42
},
{
"text": "can be executed in that state.",
"start": 414.14,
"duration": 2.4
},
{
"text": "And so it's possible that some actions\nare only valid in certain states",
"start": 416.54,
"duration": 3.63
},
{
"text": "and not in other states.",
"start": 420.17,
"duration": 1.47
},
{
"text": "And we'll see examples\nof that soon, too.",
"start": 421.64,
"duration": 2.7
},
{
"text": "So in the case of the\n15 puzzle, for example,",
"start": 424.34,
"duration": 2.19
},
{
"text": "they're generally going to\nbe four possible actions",
"start": 426.53,
"duration": 2.48
},
{
"text": "that we can do most of the time.",
"start": 429.01,
"duration": 1.75
},
{
"text": "We can slide a tile to the\nright, slide a tile to the left,",
"start": 430.76,
"duration": 2.49
},
{
"text": "slide a tile up, or slide\na tile down, for example.",
"start": 433.25,
"duration": 3.03
},
{
"text": "And those are going to be the\nactions that are available to us.",
"start": 436.28,
"duration": 3.67
},
{
"text": "So somehow our AI, our\nprogram, needs some encoding",
"start": 439.95,
"duration": 3.05
},
{
"text": "of the state, which is often going\nto be in some numerical format,",
"start": 443.0,
"duration": 3.15
},
{
"text": "and some encoding of these actions.",
"start": 446.15,
"duration": 1.9
},
{
"text": "But it also needs some encoding of\nthe relationship between these things,",
"start": 448.05,
"duration": 3.05
},
{
"text": "how do the states and actions\nrelate to one another?",
"start": 451.1,
"duration": 3.51
},
{
"text": "And in order to do that,\nwe'll introduce to our AI",
"start": 454.61,
"duration": 2.34
},
{
"text": "a transition model, which will\nbe a description of what state",
"start": 456.95,
"duration": 3.48
},
{
"text": "we get after we perform some\navailable action in some other state.",
"start": 460.43,
"duration": 4.69
},
{
"text": "And again, we can be a little\nbit more precise about this,",
"start": 465.12,
"duration": 2.42
},
{
"text": "define this transition model a\nlittle bit more formally, again,",
"start": 467.54,
"duration": 3.28
},
{
"text": "as a function.",
"start": 470.82,
"duration": 0.95
},
{
"text": "The function is going to be\na function called result,",
"start": 471.77,
"duration": 2.24
},
{
"text": "that this time takes two inputs.",
"start": 474.01,
"duration": 2.2
},
{
"text": "Input number one is S, some state.",
"start": 476.21,
"duration": 2.86
},
{
"text": "And input number two is A, some action.",
"start": 479.07,
"duration": 3.2
},
{
"text": "And the output of this\nfunction result is",
"start": 482.27,
"duration": 2.18
},
{
"text": "it is going to give us the state that we\nget after we perform action A in state",
"start": 484.45,
"duration": 5.32
},
{
"text": "S. So let's take a look at an example\nto see more precisely what this actually",
"start": 489.77,
"duration": 4.02
},
{
"text": "means.",
"start": 493.79,
"duration": 0.79
},
{
"text": "Here's an example of a state\nof the 15 puzzle, for example.",
"start": 494.58,
"duration": 3.42
},
{
"text": "And here's an example of an action,\nsliding a tile to the right.",
"start": 498.0,
"duration": 3.6
},
{
"text": "What happens if we pass these as\ninputs to the result function?",
"start": 501.6,
"duration": 3.24
},
{
"text": "Again, the result function takes this\nboard, this state, as its first input.",
"start": 504.84,
"duration": 4.43
},
{
"text": "And it takes an action\nas a second input.",
"start": 509.27,
"duration": 2.46
},
{
"text": "And of course, here, I'm\ndescribing things visually so",
"start": 511.73,
"duration": 2.37
},
{
"text": "that you can see visually what the\nstate is and what the action is.",
"start": 514.1,
"duration": 2.91
},
{
"text": "In a computer, you might\nrepresent one of these actions",
"start": 517.01,
"duration": 2.4
},
{
"text": "as just some number that\nrepresents the action.",
"start": 519.41,
"duration": 2.16
},
{
"text": "Or if you're familiar\nwith enums that allow",
"start": 521.57,
"duration": 1.8
},
{
"text": "you to enumerate multiple possibilities,\nit might be something like that.",
"start": 523.37,
"duration": 3.06
},
{
"text": "And the state might just be represented\nas an array, or two dimensional array,",
"start": 526.43,
"duration": 4.1
},
{
"text": "of all of these numbers that exist.",
"start": 530.53,
"duration": 1.87
},
{
"text": "But here we're going to show it\nvisually just so you can see it.",
"start": 532.4,
"duration": 3.14
},
{
"text": "When we take this state and this action,\npass it into the result function,",
"start": 535.54,
"duration": 3.91
},
{
"text": "the output is a new state.",
"start": 539.45,
"duration": 1.86
},
{
"text": "The state we get after we take a tile\nand slide it to the right, and this",
"start": 541.31,
"duration": 3.6
},
{
"text": "is the state we get as a result.",
"start": 544.91,
"duration": 1.59
},
{
"text": "If we had a different action and\na different state, for example,",
"start": 546.5,
"duration": 3.21
},
{
"text": "and passed that into\nthe result function,",
"start": 549.71,
"duration": 1.74
},
{
"text": "we'd get a different answer altogether.",
"start": 551.45,
"duration": 2.02
},
{
"text": "So the result function\nneeds to take care",
"start": 553.47,
"duration": 2.24
},
{
"text": "of figuring out how to take a state and\ntake an action and get what results.",
"start": 555.71,
"duration": 4.38
},
{
"text": "And this is going to be\nour transition model that",
"start": 560.09,
"duration": 2.67
},
{
"text": "describes how it is that states and\nactions are related to each other.",
"start": 562.76,
"duration": 4.69
},
{
"text": "If we take this transition model\nand think about it more generally",
"start": 567.45,
"duration": 2.84
},
{
"text": "and across the entire problem, we can\nform what we might call a state space,",
"start": 570.29,
"duration": 4.56
},
{
"text": "the set of all of the states we\ncan get from the initial state",
"start": 574.85,
"duration": 3.27
},
{
"text": "via any sequence of actions, by\ntaking zero or one or two or more",
"start": 578.12,
"duration": 3.84
},
{
"text": "actions in addition to that,\nso we could draw a diagram",
"start": 581.96,
"duration": 3.09
},
{
"text": "that looks something like this.",
"start": 585.05,
"duration": 1.35
},
{
"text": "Where every state is represented\nhere by a game board.",
"start": 586.4,
"duration": 3.24
},
{
"text": "And there are arrows that connect\nevery state to every other state we",
"start": 589.64,
"duration": 3.21
},
{
"text": "can get two from that state.",
"start": 592.85,
"duration": 1.89
},
{
"text": "And the state space is much larger\nthan what you see just here.",
"start": 594.74,
"duration": 3.06
},
{
"text": "This is just a sample of what the\nstate space might actually look like.",
"start": 597.8,
"duration": 4.12
},
{
"text": "And, in general, across\nmany search problems,",
"start": 601.92,
"duration": 2.42
},
{
"text": "whether they're this particular\n15 puzzle or driving directions",
"start": 604.34,
"duration": 2.94
},
{
"text": "or something else, the state space\nis going to look something like this.",
"start": 607.28,
"duration": 3.43
},
{
"text": "We have individual states and\narrows that are connecting them.",
"start": 610.71,
"duration": 4.4
},
{
"text": "And oftentimes, just\nfor simplicity, we'll",
"start": 615.11,
"duration": 1.89
},
{
"text": "simplify our representation\nof this entire thing",
"start": 617.0,
"duration": 2.67
},
{
"text": "as a graph, some sequence of nodes\nand edges that connect nodes.",
"start": 619.67,
"duration": 4.89
},
{
"text": "But you can think of this more abstract\nrepresentation as the exact same idea.",
"start": 624.56,
"duration": 4.08
},
{
"text": "Each of these little\ncircles, or nodes, is",
"start": 628.64,
"duration": 1.92
},
{
"text": "going to represent one of the\nstates inside of our problem.",
"start": 630.56,
"duration": 3.3
},
{
"text": "And the arrows here\nrepresent the actions",
"start": 633.86,
"duration": 2.07
},
{
"text": "that we can take in\nany particular state,",
"start": 635.93,
"duration": 2.37
},
{
"text": "taking us from one particular state\nto another state, for example.",
"start": 638.3,
"duration": 5.88
},
{
"text": "All right.",
"start": 644.18,
"duration": 0.58
},
{
"text": "So now we have this idea of nodes\nthat are representing these states,",
"start": 644.76,
"duration": 3.75
},
{
"text": "actions that can take us\nfrom one state to another,",
"start": 648.51,
"duration": 2.38
},
{
"text": "and a transition model\nthat defines what happens",
"start": 650.89,
"duration": 2.48
},
{
"text": "after we take a particular action.",
"start": 653.37,
"duration": 2.05
},
{
"text": "So the next step we\nneed to figure out is",
"start": 655.42,
"duration": 1.79
},
{
"text": "how we know when the AI is\ndone solving the problem.",
"start": 657.21,
"duration": 3.36
},
{
"text": "The AI I needs some way to\nknow when it gets to the goal,",
"start": 660.57,
"duration": 2.97
},
{
"text": "that it's found the goal.",
"start": 663.54,
"duration": 1.45
},
{
"text": "So the next thing we'll need to encode\ninto our artificial intelligence",
"start": 664.99,
"duration": 3.11
},
{
"text": "is a goal test, some way to determine\nwhether a given state is a goal state.",
"start": 668.1,
"duration": 5.28
},
{
"text": "In the case of something like driving\ndirections, it might be pretty easy.",
"start": 673.38,
"duration": 3.36
},
{
"text": "If you're in a state that\ncorresponds to whatever",
"start": 676.74,
"duration": 2.34
},
{
"text": "the user typed in as their\nintended destination, well,",
"start": 679.08,
"duration": 2.53
},
{
"text": "then you know you're in a goal state.",
"start": 681.61,
"duration": 1.55
},
{
"text": "In the 15 puzzle, it might\nbe checking the numbers",
"start": 683.16,
"duration": 2.16
},
{
"text": "to make sure they're\nall in ascending order.",
"start": 685.32,
"duration": 1.84
},
{
"text": "But the AI need some way\nto encode whether or not",
"start": 687.16,
"duration": 2.72
},
{
"text": "any state they happen\nto be in is a goal.",
"start": 689.88,
"duration": 2.46
},
{
"text": "And some problems might\nhave one goal, like a maze",
"start": 692.34,
"duration": 2.31
},
{
"text": "where you have one initial\nposition and one ending position",
"start": 694.65,
"duration": 2.61
},
{
"text": "and that's the goal.",
"start": 697.26,
"duration": 1.21
},
{
"text": "In other more complex\nproblems, you might",
"start": 698.47,
"duration": 1.85
},
{
"text": "imagine that there are multiple possible\ngoals, that there are multiple ways",
"start": 700.32,
"duration": 3.57
},
{
"text": "to solve a problem.",
"start": 703.89,
"duration": 1.18
},
{
"text": "And we might not care which\none the computer finds as",
"start": 705.07,
"duration": 2.87
},
{
"text": "long as it does find a particular goal.",
"start": 707.94,
"duration": 3.39
},
{
"text": "However, sometimes a computer doesn't\njust care about finding a goal,",
"start": 711.33,
"duration": 3.57
},
{
"text": "but finding a goal well,\nor one with a low cost.",
"start": 714.9,
"duration": 2.81
},
{
"text": "And it's for that reason\nthat the last piece",
"start": 717.71,
"duration": 1.84
},
{
"text": "of terminology that we use to\ndefine these search problems",
"start": 719.55,
"duration": 2.85
},
{
"text": "is something called a path cost.",
"start": 722.4,
"duration": 2.31
},
{
"text": "You might imagine that in the\ncase of driving directions,",
"start": 724.71,
"duration": 2.46
},
{
"text": "it would be pretty annoying if I\nsaid I wanted directions from point A",
"start": 727.17,
"duration": 3.51
},
{
"text": "to point B, and the route the Google\nMaps gave me was a long route with lots",
"start": 730.68,
"duration": 3.63
},
{
"text": "of detours that were unnecessary,\nthat took longer than it should",
"start": 734.31,
"duration": 2.91
},
{
"text": "have for me to get to that destination.",
"start": 737.22,
"duration": 2.31
},
{
"text": "And it's for that reason that when\nwe're formulating search problems,",
"start": 739.53,
"duration": 2.88
},
{
"text": "we'll often give every path some sort of\nnumerical cost, some number telling us",
"start": 742.41,
"duration": 5.07
},
{
"text": "how expensive it is to take\nthis particular option.",
"start": 747.48,
"duration": 3.25
},
{
"text": "And then tell our AI that instead\nof just finding a solution,",
"start": 750.73,
"duration": 3.8
},
{
"text": "some way of getting from the\ninitial state to the goal,",
"start": 754.53,
"duration": 2.46
},
{
"text": "we'd really like to find one that\nminimizes this path cost, that",
"start": 756.99,
"duration": 3.84
},
{
"text": "is less expensive, or takes\nless time, or minimizes",
"start": 760.83,
"duration": 3.54
},
{
"text": "some other numerical value.",
"start": 764.37,
"duration": 2.13
},
{
"text": "We can represent this graphically, if\nwe take a look at this graph again.",
"start": 766.5,
"duration": 3.27
},
{
"text": "And imagine that each of these\narrows, each of these actions",
"start": 769.77,
"duration": 2.97
},
{
"text": "that we can take from one\nstate to another state,",
"start": 772.74,
"duration": 2.82
},
{
"text": "has some sort of number\nassociated with it,",
"start": 775.56,
"duration": 2.16
},
{
"text": "that number being the path cost\nof this particular action where",
"start": 777.72,
"duration": 3.45
},
{
"text": "some of the costs for\nany particular action",
"start": 781.17,
"duration": 2.31
},
{
"text": "might be more expensive than the cost\nfor some other action, for example.",
"start": 783.48,
"duration": 4.18
},
{
"text": "Although this will only happen\nin some sorts of problems.",
"start": 787.66,
"duration": 2.46
},
{
"text": "In other problems we\ncan simplify the diagram",
"start": 790.12,
"duration": 2.42
},
{
"text": "and just assume that the cost of\nany particular action is the same.",
"start": 792.54,
"duration": 4.21
},
{
"text": "And this is probably the case\nin something like the 15 puzzle,",
"start": 796.75,
"duration": 2.69
},
{
"text": "for example, where it doesn't\nreally make a difference whether I'm",
"start": 799.44,
"duration": 2.97
},
{
"text": "moving right or moving left.",
"start": 802.41,
"duration": 1.47
},
{
"text": "The only thing that matters\nis the total number of steps",
"start": 803.88,
"duration": 3.06
},
{
"text": "that I have to take to get from point\nA to point B. And each of those steps",
"start": 806.94,
"duration": 4.71
},
{
"text": "is of equal cost.",
"start": 811.65,
"duration": 1.23
},
{
"text": "We can just assume it's a\nsome constant cost, like one.",
"start": 812.88,
"duration": 4.39
},
{
"text": "And so this now forms\nthe basis for what we",
"start": 817.27,
"duration": 2.11
},
{
"text": "might consider to be a search problem.",
"start": 819.38,
"duration": 2.34
},
{
"text": "A search problem has some sort of\ninitial state, some place where",
"start": 821.72,
"duration": 3.24
},
{
"text": "we begin, some sort of\naction that we can take",
"start": 824.96,
"duration": 2.46
},
{
"text": "or multiple actions that we\ncan take in any given state,",
"start": 827.42,
"duration": 2.88
},
{
"text": "and it has a transition\nmodel, some way of defining",
"start": 830.3,
"duration": 2.55
},
{
"text": "what happens when we go from\none state and take one action,",
"start": 832.85,
"duration": 3.72
},
{
"text": "what state do we end\nup with as a result.",
"start": 836.57,
"duration": 2.71
},
{
"text": "In addition to that, we need some\ngoal test to know whether or not",
"start": 839.28,
"duration": 3.17
},
{
"text": "we've reached a goal.",
"start": 842.45,
"duration": 1.2
},
{
"text": "And then we need a path cost function\nthat tells us for any particular path,",
"start": 843.65,
"duration": 4.29
},
{
"text": "by following some sequence of\nactions, how expensive is that path.",
"start": 847.94,
"duration": 3.75
},
{
"text": "What is its cost in\nterms of money, or time,",
"start": 851.69,
"duration": 2.67
},
{
"text": "or some other resource that we are\ntrying to minimize our usage of.",
"start": 854.36,
"duration": 4.11
},
{
"text": "The goal, ultimately, is to find a\nsolution, where a solution in this case",
"start": 858.47,
"duration": 3.9
},
{
"text": "is just some sequence of actions that\nwill take us from the initial state",
"start": 862.37,
"duration": 3.72
},
{
"text": "to the goal state.",
"start": 866.09,
"duration": 1.11
},
{
"text": "And, ideally, we'd like to find not just\nany solution, but the optimal solution,",
"start": 867.2,
"duration": 4.5
},
{
"text": "which is a solution that has\nthe lowest path cost among all",
"start": 871.7,
"duration": 3.81
},
{
"text": "of the possible solutions.",
"start": 875.51,
"duration": 1.44
},
{
"text": "And in some cases, there might\nbe multiple optimal solutions,",
"start": 876.95,
"duration": 2.64
},
{
"text": "but an optimal solution\njust means that there",
"start": 879.59,
"duration": 2.01
},
{
"text": "is no way that we could have done better\nin terms of finding that solution.",
"start": 881.6,
"duration": 4.83
},
{
"text": "So now we've defined the problem.",
"start": 886.43,
"duration": 1.44
},
{
"text": "And now we need to\nbegin to figure out how",
"start": 887.87,
"duration": 2.12
},
{
"text": "it is that we're going to solve\nthis kind of search problem.",
"start": 889.99,
"duration": 3.06
},
{
"text": "And in order to do so,\nyou'll probably imagine",
"start": 893.05,
"duration": 2.22
},
{
"text": "that our computer is going to need\nto represent a whole bunch of data",
"start": 895.27,
"duration": 3.54
},
{
"text": "about this particular problem.",
"start": 898.81,
"duration": 1.35
},
{
"text": "We need to represent data about\nwhere we are in the problem.",
"start": 900.16,
"duration": 2.92
},
{
"text": "And we might need to be considering\nmultiple different options at once.",
"start": 903.08,
"duration": 3.41
},
{
"text": "And oftentimes when we're trying to\npackage a whole bunch of data related",
"start": 906.49,
"duration": 3.36
},
{
"text": "to a state together, we'll\ndo so using a data structure",
"start": 909.85,
"duration": 3.0
},
{
"text": "that we're going to call a node.",
"start": 912.85,
"duration": 1.78
},
{
"text": "A node is a data structure\nthat is just going",
"start": 914.63,
"duration": 1.88
},
{
"text": "to keep track of a variety\nof different values,",
"start": 916.51,
"duration": 2.6
},
{
"text": "and specifically in the\ncase of a search problem,",
"start": 919.11,
"duration": 2.32
},
{
"text": "it's going to keep track of\nthese four values in particular.",
"start": 921.43,
"duration": 3.21
},
{
"text": "Every node is going to keep track of\na state, the state we're currently on.",
"start": 924.64,
"duration": 3.93
},
{
"text": "And every node is also going\nto keep track of a parent.",
"start": 928.57,
"duration": 2.97
},
{
"text": "A parent being the state\nbefore us, or the node",
"start": 931.54,
"duration": 2.88
},
{
"text": "that we used in order to\nget to this current state.",
"start": 934.42,
"duration": 3.24
},
{
"text": "And this is going to be\nrelevant because eventually,",
"start": 937.66,
"duration": 2.17
},
{
"text": "once we reach the goal node,\nonce we get to the end,",
"start": 939.83,
"duration": 2.91
},
{
"text": "we want to know what sequence of actions\nwe used in order to get to that goal.",
"start": 942.74,
"duration": 4.47
},
{
"text": "And the way we'll know that\nis by looking at these parents",
"start": 947.21,
"duration": 2.93
},
{
"text": "to keep track of what led us to the\ngoal, and what led us to that state,",
"start": 950.14,
"duration": 3.75
},
{
"text": "and what led us to the state\nbefore that, so on and so forth,",
"start": 953.89,
"duration": 2.82
},
{
"text": "backtracking our way to\nthe beginning so that we",
"start": 956.71,
"duration": 2.64
},
{
"text": "know the entire sequence of\nactions we needed in order",
"start": 959.35,
"duration": 2.64
},
{
"text": "to get from the beginning to the end.",
"start": 961.99,
"duration": 2.74
},
{
"text": "The node is also going to keep track\nof what action we took in order to get",
"start": 964.73,
"duration": 3.17
},
{
"text": "from the parent to the current state.",
"start": 967.9,
"duration": 2.22
},
{
"text": "And the node is also going\nto keep track of a path cost.",
"start": 970.12,
"duration": 3.58
},
{
"text": "In other words, it's going to\nkeep track of the number that",
"start": 973.7,
"duration": 2.57
},
{
"text": "represents how long it took to get\nfrom the initial state to the state",
"start": 976.27,
"duration": 4.17
},
{
"text": "that we currently happen to be at.",
"start": 980.44,
"duration": 1.64
},
{
"text": "And we'll see why this is relevant\nas we start to talk about some",
"start": 982.08,
"duration": 2.71
},
{
"text": "of the optimizations that we can make\nin terms of these search problems more",
"start": 984.79,
"duration": 3.24
},
{
"text": "generally.",
"start": 988.03,
"duration": 1.05
},
{
"text": "So this is the data\nstructure that we're going",
"start": 989.08,
"duration": 1.92
},
{
"text": "to use in order to solve the problem.",
"start": 991.0,
"duration": 1.92
},
{
"text": "And now let's talk about\nthe approach, how might we",
"start": 992.92,
"duration": 2.13
},
{
"text": "actually begin to solve the problem?",
"start": 995.05,
"duration": 2.8
},
{
"text": "Well, as you might imagine,\nwhat we're going to do",
"start": 997.85,
"duration": 2.09
},
{
"text": "is we're going to start\nat one particular state",
"start": 999.94,
"duration": 2.25
},
{
"text": "and we're just going\nto explore from there.",
"start": 1002.19,
"duration": 2.51
},
{
"text": "The intuition is that\nfrom a given state,",
"start": 1004.7,
"duration": 2.02
},
{
"text": "we have multiple options\nthat we could take,",
"start": 1006.72,
"duration": 2.04
},
{
"text": "and we're going to\nexplore those options.",
"start": 1008.76,
"duration": 2.04
},
{
"text": "And once we explore those options,\nwe'll find that more options than that",
"start": 1010.8,
"duration": 3.69
},
{
"text": "are going to make themselves available.",
"start": 1014.49,
"duration": 1.8
},
{
"text": "And we're going to consider\nall of the available options",
"start": 1016.29,
"duration": 2.79
},
{
"text": "to be stored inside of a single data\nstructure that we'll call the frontier.",
"start": 1019.08,
"duration": 4.23
},
{
"text": "The frontier is going to\nrepresent all of the things",
"start": 1023.31,
"duration": 2.4
},
{
"text": "that we could explore next, that\nwe haven't yet explored or visited.",
"start": 1025.71,
"duration": 4.73
},
{
"text": "So in our approach, we're\ngoing to begin this search",
"start": 1030.44,
"duration": 2.17
},
{
"text": "algorithm by starting with a frontier\nthat just contains one state.",
"start": 1032.61,
"duration": 4.41
},
{
"text": "The frontier is going to contain the\ninitial state because at the beginning,",
"start": 1037.02,
"duration": 3.45
},
{
"text": "that's the only state we know about.",
"start": 1040.47,
"duration": 1.5
},
{
"text": "That is the only state that exists.",
"start": 1041.97,
"duration": 2.36
},
{
"text": "And then our search algorithm is\neffectively going to follow a loop.",
"start": 1044.33,
"duration": 3.46
},
{
"text": "We're going to repeat some\nprocess again and again and again.",
"start": 1047.79,
"duration": 3.6
},
{
"text": "The first thing we're going to\ndo is if the frontier is empty,",
"start": 1051.39,
"duration": 3.93
},
{
"text": "then there's no solution.",
"start": 1055.32,
"duration": 1.29
},
{
"text": "And we can report that there\nis no way to get to the goal.",
"start": 1056.61,
"duration": 2.7
},
{
"text": "And that's certainly possible.",
"start": 1059.31,
"duration": 1.25
},
{
"text": "There are certain types of\nproblems that an AI might",
"start": 1060.56,
"duration": 2.17
},
{
"text": "try to explore and realize that there\nis no way to solve that problem.",
"start": 1062.73,
"duration": 4.21
},
{
"text": "And that's useful information\nfor humans to know, as well.",
"start": 1066.94,
"duration": 2.67
},
{
"text": "So if ever the frontier is empty, that\nmeans there's nothing left to explore,",
"start": 1069.61,
"duration": 3.95
},
{
"text": "and we haven't yet found a\nsolution so there is no solution.",
"start": 1073.56,
"duration": 3.6
},
{
"text": "There's nothing left to explore.",
"start": 1077.16,
"duration": 1.98
},
{
"text": "Otherwise what we'll do is we'll\nremove a node from the frontier.",
"start": 1079.14,
"duration": 3.88
},
{
"text": "So right now at the\nbeginning, the frontier",
"start": 1083.02,
"duration": 1.97
},
{
"text": "just contains one node\nrepresenting the initial state.",
"start": 1084.99,
"duration": 2.88
},
{
"text": "But over time, the frontier might grow.",
"start": 1087.87,
"duration": 1.68
},
{
"text": "It might contain multiple states.",
"start": 1089.55,
"duration": 1.53
},
{
"text": "And so here we're just going to remove\na single node from that frontier.",
"start": 1091.08,
"duration": 4.65
},
{
"text": "If that node happens to be a\ngoal, then we found a solution.",
"start": 1095.73,
"duration": 3.27
},
{
"text": "So we remove a node from the frontier\nand ask ourselves, is this the goal?",
"start": 1099.0,
"duration": 3.39
},
{
"text": "And we do that by applying the goal\ntest that we talked about earlier,",
"start": 1102.39,
"duration": 3.15
},
{
"text": "asking if we're at the destination or\nasking if all the numbers of the 15",
"start": 1105.54,
"duration": 3.54
},
{
"text": "puzzle happen to be in order.",
"start": 1109.08,
"duration": 2.17
},
{
"text": "So if the node contains the\ngoal, we found a solution.",
"start": 1111.25,
"duration": 2.61
},
{
"text": "Great.",
"start": 1113.86,
"duration": 0.5
},
{
"text": "We're done.",
"start": 1114.36,
"duration": 1.54
},
{
"text": "And otherwise, what we'll need to\ndo is we'll need to expand the node.",
"start": 1115.9,
"duration": 4.92
},
{
"text": "And this is a term in\nartificial intelligence.",
"start": 1120.82,
"duration": 2.12
},
{
"text": "To expand the node just means to look\nat all of the neighbors of that node.",
"start": 1122.94,
"duration": 3.93
},
{
"text": "In other words, consider\nall of the possible actions",
"start": 1126.87,
"duration": 2.76
},
{
"text": "that I could take from the state\nthat this node as representing",
"start": 1129.63,
"duration": 3.2
},
{
"text": "and what nodes could\nI get to from there.",
"start": 1132.83,
"duration": 2.44
},
{
"text": "We're going to take all of\nthose nodes, the next nodes",
"start": 1135.27,
"duration": 2.25
},
{
"text": "that I can get to from this\ncurrent one I'm looking at,",
"start": 1137.52,
"duration": 2.7
},
{
"text": "and add those to the frontier.",
"start": 1140.22,
"duration": 2.1
},
{
"text": "And then we'll repeat this process.",
"start": 1142.32,
"duration": 2.23
},
{
"text": "So at a very high level, the idea\nis we start with a frontier that",
"start": 1144.55,
"duration": 3.29
},
{
"text": "contains the initial state.",
"start": 1147.84,
"duration": 1.56
},
{
"text": "And we're constantly removing\na node from the frontier,",
"start": 1149.4,
"duration": 2.79
},
{
"text": "looking at where we can get to next,\nand adding those nodes to the frontier,",
"start": 1152.19,
"duration": 3.96
},
{
"text": "repeating this process over\nand over until either we remove",
"start": 1156.15,
"duration": 2.94
},
{
"text": "a node from the frontier and it\ncontains a goal, meaning we've solved",
"start": 1159.09,
"duration": 3.18
},
{
"text": "the problem.",
"start": 1162.27,
"duration": 1.05
},
{
"text": "Or we run into a situation where the\nfrontier is empty, at which point",
"start": 1163.32,
"duration": 3.63
},
{
"text": "we're left with no solution.",
"start": 1166.95,
"duration": 2.66
},
{
"text": "So let's actually try\nand take the pseudocode,",
"start": 1169.61,
"duration": 2.0
},
{
"text": "put it into practice by taking a look at\nan example of a sample search problem.",
"start": 1171.61,
"duration": 4.86
},
{
"text": "So right here I have a sample graph.",
"start": 1176.47,
"duration": 1.8
},
{
"text": "A is connected to B via this action,\nB is connected to node C and D, C",
"start": 1178.27,
"duration": 4.2
},
{
"text": "is connected to D, E is connected\nto F. And what I'd like to do",
"start": 1182.47,
"duration": 3.75
},
{
"text": "is have my AI find a path from A to E.\nWe want to get from this initial state",
"start": 1186.22,
"duration": 6.54
},
{
"text": "to this goal state.",
"start": 1192.76,
"duration": 2.22
},
{
"text": "So how are we going to do that?",
"start": 1194.98,
"duration": 1.46
},
{
"text": "Well, we're going to start\nwith the frontier that",
"start": 1196.44,
"duration": 2.05
},
{
"text": "contains the initial state.",
"start": 1198.49,
"duration": 1.27
},
{
"text": "This is going to represent our frontier.",
"start": 1199.76,
"duration": 1.79
},
{
"text": "So our frontier, initially, will\njust contain A, that initial state",
"start": 1201.55,
"duration": 3.54
},
{
"text": "where we're going to begin.",
"start": 1205.09,
"duration": 1.56
},
{
"text": "And now we'll repeat this process.",
"start": 1206.65,
"duration": 1.83
},
{
"text": "If the frontier is empty, no solution.",
"start": 1208.48,
"duration": 1.98
},
{
"text": "That's not a problem because\nthe frontier is not empty.",
"start": 1210.46,
"duration": 2.46
},
{
"text": "So we'll remove a node from the\nfrontier as the one to consider next.",
"start": 1212.92,
"duration": 4.16
},
{
"text": "There is only one node in the frontier.",
"start": 1217.08,
"duration": 1.64
},
{
"text": "So we'll go ahead and\nremove it from the frontier.",
"start": 1218.72,
"duration": 2.09
},
{
"text": "But now A, this initial node, this is\nthe node we're currently considering.",
"start": 1220.81,
"duration": 4.68
},
{
"text": "We follow the next step.",
"start": 1225.49,
"duration": 1.11
},
{
"text": "We ask ourselves, is this node the goal?",
"start": 1226.6,
"duration": 2.67
},
{
"text": "No, it's not.",
"start": 1229.27,
"duration": 0.61
},
{
"text": "A is not the goal.",
"start": 1229.88,
"duration": 0.89
},
{
"text": "E is the goal.",
"start": 1230.77,
"duration": 1.45
},
{
"text": "So we don't return the solution.",
"start": 1232.22,
"duration": 1.65
},
{
"text": "So instead, we go to this\nlast step, expand the node",
"start": 1233.87,
"duration": 3.29
},
{
"text": "and add the resulting\nnodes to the frontier.",
"start": 1237.16,
"duration": 2.79
},
{
"text": "What does that mean?",
"start": 1239.95,
"duration": 1.12
},
{
"text": "Well, it means take this state A and\nconsider where we could get to next.",
"start": 1241.07,
"duration": 3.98
},
{
"text": "And after A what we\ncould get to next is only",
"start": 1245.05,
"duration": 2.34
},
{
"text": "B. So that's what we get\nwhen we expand A. We find B.",
"start": 1247.39,
"duration": 3.72
},
{
"text": "And we add B to the frontier.",
"start": 1251.11,
"duration": 1.97
},
{
"text": "And now B is in the frontier\nand we repeat the process again.",
"start": 1253.08,
"duration": 3.22
},
{
"text": "We say, all right.",
"start": 1256.3,
"duration": 0.75
},
{
"text": "The frontier is not empty.",
"start": 1257.05,
"duration": 1.23
},
{
"text": "So let's remove B from the frontier.",
"start": 1258.28,
"duration": 2.16
},
{
"text": "B is now the node that\nwe're considering.",
"start": 1260.44,
"duration": 1.86
},
{
"text": "We ask ourselves, is B the goal?",
"start": 1262.3,
"duration": 1.86
},
{
"text": "No, it's not.",
"start": 1264.16,
"duration": 0.96
},
{
"text": "So we go ahead and expand B and add\nits resulting nodes to the frontier.",
"start": 1265.12,
"duration": 4.89
},
{
"text": "What happens when we expand B?",
"start": 1270.01,
"duration": 1.65
},
{
"text": "In other words, what nodes\ncan we get to from B?",
"start": 1271.66,
"duration": 3.09
},
{
"text": "Well, we can get to C\nand D. So we'll go ahead",
"start": 1274.75,
"duration": 2.43
},
{
"text": "and add C and D from the frontier.",
"start": 1277.18,
"duration": 1.92
},
{
"text": "And now we have two nodes\nin the frontier, C and D.",
"start": 1279.1,
"duration": 2.4
},
{
"text": "And we repeat the process again.",
"start": 1281.5,
"duration": 1.67
},
{
"text": "We remove a node from the\nfrontier, for now we'll",
"start": 1283.17,
"duration": 2.05
},
{
"text": "do so arbitrarily just by picking C.",
"start": 1285.22,
"duration": 2.04
},
{
"text": "We'll see why later how choosing which\nnode you remove from the frontier",
"start": 1287.26,
"duration": 3.29
},
{
"text": "is actually quite an important\npart of the algorithm.",
"start": 1290.55,
"duration": 2.29
},
{
"text": "But for now I'll arbitrarily\nremove C, say it's not the goal,",
"start": 1292.84,
"duration": 3.39
},
{
"text": "so we'll add E, the next\none to the frontier.",
"start": 1296.23,
"duration": 3.03
},
{
"text": "Then let's say I remove\nE from the frontier.",
"start": 1299.26,
"duration": 2.16
},
{
"text": "And now I'm currently looking at\nstate E. Is that a goal state?",
"start": 1301.42,
"duration": 4.2
},
{
"text": "It is because I'm trying\nto find a path from A to E.",
"start": 1305.62,
"duration": 2.79
},
{
"text": "So I would return the goal.",
"start": 1308.41,
"duration": 1.35
},
{
"text": "And that, now, would be\nthe solution, that I'm now",
"start": 1309.76,
"duration": 2.76
},
{
"text": "able to return the solution\nand I found a path from A to E.",
"start": 1312.52,
"duration": 4.77
},
{
"text": "So this is the general idea, the general\napproach of this search algorithm,",
"start": 1317.29,
"duration": 3.42
},
{
"text": "to follow these steps constantly\nremoving nodes from the frontier",
"start": 1320.71,
"duration": 3.51
},
{
"text": "until we're able to find a solution.",
"start": 1324.22,
"duration": 2.32
},
{
"text": "So the next question\nyou might reasonably ask",
"start": 1326.54,
"duration": 1.97
},
{
"text": "is, what could go wrong here?",
"start": 1328.51,
"duration": 2.1
},
{
"text": "What are the potential problems\nwith an approach like this?",
"start": 1330.61,
"duration": 3.48
},
{
"text": "And here's one example of a problem that\ncould arise from this sort of approach.",
"start": 1334.09,
"duration": 4.0
},
{
"text": "Imagine this same graph, same\nas before, with one change.",
"start": 1338.09,
"duration": 3.89
},
{
"text": "The change being, now, instead\nof just an arrow from A to B,",
"start": 1341.98,
"duration": 3.09
},
{
"text": "we also have an arrow from B to A,\nmeaning we can go in both directions.",
"start": 1345.07,
"duration": 4.18
},
{
"text": "And this is true in\nsomething like the 15 puzzle",
"start": 1349.25,
"duration": 2.33
},
{
"text": "where when I slide a\ntile to the right, I",
"start": 1351.58,
"duration": 1.98
},
{
"text": "could then slide a tile to the left\nto get back to the original position.",
"start": 1353.56,
"duration": 3.42
},
{
"text": "I could go back and\nforth between A and B.",
"start": 1356.98,
"duration": 2.55
},
{
"text": "And that's what these double arrows\nsymbolize, the idea that from one state",
"start": 1359.53,
"duration": 3.39
},
{
"text": "I can get to another\nand then I can get back.",
"start": 1362.92,
"duration": 2.39
},
{
"text": "And that's true in many search problems.",
"start": 1365.31,
"duration": 2.29
},
{
"text": "What's going to happen if I try\nto apply the same approach now?",
"start": 1367.6,
"duration": 3.39
},
{
"text": "Well, I'll begin with A, same as before.",
"start": 1370.99,
"duration": 2.29
},
{
"text": "And I'll remove A from the frontier.",
"start": 1373.28,
"duration": 1.88
},
{
"text": "And then I'll consider where I can get\nto from A. And after A, the only place",
"start": 1375.16,
"duration": 4.26
},
{
"text": "I can get choice B so B\ngoes into the frontier.",
"start": 1379.42,
"duration": 2.94
},
{
"text": "Then I'll say, all right.",
"start": 1382.36,
"duration": 1.05
},
{
"text": "Let's take a look at B. That's the\nonly thing left in the frontier.",
"start": 1383.41,
"duration": 2.79
},
{
"text": "Where can I get to from B?",
"start": 1386.2,
"duration": 1.95
},
{
"text": "Before it was just C and D, but\nnow because of that reverse arrow,",
"start": 1388.15,
"duration": 4.26
},
{
"text": "I can get to A or C or D. So all\nthree A, C, and D. All of those",
"start": 1392.41,
"duration": 5.54
},
{
"text": "now go into the frontier.",
"start": 1397.95,
"duration": 1.12
},
{
"text": "They are places I can get to from B.\nAnd now I remove one from the frontier,",
"start": 1399.07,
"duration": 4.23
},
{
"text": "and, you know, maybe I'm\nunlucky and maybe I pick A.",
"start": 1403.3,
"duration": 2.57
},
{
"text": "And now I'm looking at A again.",
"start": 1405.87,
"duration": 1.95
},
{
"text": "And I consider where can I get to from\nA. And from A, well I can get to B.",
"start": 1407.82,
"duration": 3.67
},
{
"text": "And now we start to see the\nproblem, that if I'm not careful,",
"start": 1411.49,
"duration": 2.59
},
{
"text": "I go from A to B and then\nback to A and then to B again.",
"start": 1414.08,
"duration": 2.96
},
{
"text": "And I could be going in this\ninfinite loop where I never",
"start": 1417.04,
"duration": 2.34
},
{
"text": "make any progress because I'm constantly\njust going back and forth between two",
"start": 1419.38,
"duration": 3.87
},
{
"text": "states that I've already seen.",
"start": 1423.25,
"duration": 2.14
},
{
"text": "So what is the solution to this?",
"start": 1425.39,
"duration": 1.34
},
{
"text": "We need some way to\ndeal with this problem.",
"start": 1426.73,
"duration": 2.25
},
{
"text": "And the way that we can\ndeal with this problem",
"start": 1428.98,
"duration": 1.92
},
{
"text": "is by somehow keeping track of\nwhat we've already explored.",
"start": 1430.9,
"duration": 3.7
},
{
"text": "And the logic is going to be, well,\nif we've already explored the state,",
"start": 1434.6,
"duration": 3.55
},
{
"text": "there's no reason to go back to it.",
"start": 1438.15,
"duration": 1.48
},
{
"text": "Once we've explored a\nstate, don't go back to it,",
"start": 1439.63,
"duration": 2.07
},
{
"text": "don't bother adding it to the frontier.",
"start": 1441.7,
"duration": 2.16
},
{
"text": "There's no need to.",
"start": 1443.86,
"duration": 1.71
},
{
"text": "So here is going to be\nour revised approach,",
"start": 1445.57,
"duration": 2.07
},
{
"text": "a better way to approach\nthis sort of search problem.",
"start": 1447.64,
"duration": 2.81
},
{
"text": "And it's going to look very similar\njust with a couple of modifications.",
"start": 1450.45,
"duration": 3.58
},
{
"text": "We'll start with a frontier\nthat contains the initial state.",
"start": 1454.03,
"duration": 2.82
},
{
"text": "Same as before.",
"start": 1456.85,
"duration": 1.26
},
{
"text": "But now we'll start with\nanother data structure,",
"start": 1458.11,
"duration": 3.15
},
{
"text": "which would just be a set of\nnodes that we've already explored.",
"start": 1461.26,
"duration": 3.03
},
{
"text": "So what are the states we've explored?",
"start": 1464.29,
"duration": 1.59
},
{
"text": "Initially, it's empty.",
"start": 1465.88,
"duration": 1.45
},
{
"text": "We have an empty explored set.",
"start": 1467.33,
"duration": 2.38
},
{
"text": "And now we repeat.",
"start": 1469.71,
"duration": 1.22
},
{
"text": "If the frontier is empty, no solution.",
"start": 1470.93,
"duration": 2.21
},
{
"text": "Same as before.",
"start": 1473.14,
"duration": 1.26
},
{
"text": "We remove a node from\nthe frontier, we check",
"start": 1474.4,
"duration": 2.06
},
{
"text": "to see if it's a goal\nstate, return the solution.",
"start": 1476.46,
"duration": 2.05
},
{
"text": "None of this is any different so far.",
"start": 1478.51,
"duration": 2.16
},
{
"text": "But now, what we're going\nto do is we're going",
"start": 1480.67,
"duration": 1.98
},
{
"text": "to add the node to the explored state.",
"start": 1482.65,
"duration": 3.15
},
{
"text": "So if it happens to be the case that\nwe remove a node from the frontier",
"start": 1485.8,
"duration": 3.99
},
{
"text": "and it's not the goal, we'll\nadd it to the explored set",
"start": 1489.79,
"duration": 2.63
},
{
"text": "so that we know we've\nalready explored it.",
"start": 1492.42,
"duration": 1.75
},
{
"text": "We don't need to go back to it again\nif it happens to come up later.",
"start": 1494.17,
"duration": 3.84
},
{
"text": "And then the final\nstep, we expand the node",
"start": 1498.01,
"duration": 2.43
},
{
"text": "and we add the resulting\nnodes to the frontier.",
"start": 1500.44,
"duration": 2.68
},
{
"text": "But before we just always added the\nresulting nodes to the frontier,",
"start": 1503.12,
"duration": 2.84
},
{
"text": "we're going to be a little\ncleverer about it this time.",
"start": 1505.96,
"duration": 2.31
},
{
"text": "We're only going to add\nthe nodes to the frontier",
"start": 1508.27,
"duration": 2.7
},
{
"text": "if they aren't already in\nthe frontier and if they",
"start": 1510.97,
"duration": 3.15
},
{
"text": "aren't already in the explored set.",
"start": 1514.12,
"duration": 2.67
},
{
"text": "So we'll check both the\nfrontier and the explored set,",
"start": 1516.79,
"duration": 2.55
},
{
"text": "make sure that the node isn't\nalready in one of those two,",
"start": 1519.34,
"duration": 3.18
},
{
"text": "and so long as it isn't, then we'll\ngo ahead and add to the frontier",
"start": 1522.52,
"duration": 3.18
},
{
"text": "but not otherwise.",
"start": 1525.7,
"duration": 1.89
},
{
"text": "And so that revised\napproach is ultimately",
"start": 1527.59,
"duration": 1.8
},
{
"text": "what's going to help\nmake sure that we don't",
"start": 1529.39,
"duration": 1.83
},
{
"text": "go back and forth between two nodes.",
"start": 1531.22,
"duration": 2.99
},
{
"text": "Now the one point that I've\nkind of glossed over here",
"start": 1534.21,
"duration": 2.21
},
{
"text": "so far is this step here,\nremoving a node from the frontier.",
"start": 1536.42,
"duration": 4.35
},
{
"text": "Before I just chose arbitrarily, like\nlet's just remove a node and that's it.",
"start": 1540.77,
"duration": 4.0
},
{
"text": "But it turns out it's\nactually quite important",
"start": 1544.77,
"duration": 1.94
},
{
"text": "how we decide to structure\nour frontier, how we add them,",
"start": 1546.71,
"duration": 3.15
},
{
"text": "and how we remove our nodes.",
"start": 1549.86,
"duration": 1.98
},
{
"text": "The frontier is a data structure.",
"start": 1551.84,
"duration": 1.53
},
{
"text": "And we need to make a\nchoice about in what order",
"start": 1553.37,
"duration": 2.59
},
{
"text": "are we going to be removing elements?",
"start": 1555.96,
"duration": 2.12
},
{
"text": "And one of the simplest data structures\nfor adding and removing elements",
"start": 1558.08,
"duration": 3.52
},
{
"text": "is something called a stack.",
"start": 1561.6,
"duration": 1.7
},
{
"text": "And a stack is a data structure that\nis a last in, first out data type.",
"start": 1563.3,
"duration": 4.66
},
{
"text": "Which means the last thing\nthat I add to the frontier",
"start": 1567.96,
"duration": 2.96
},
{
"text": "is going to be the first thing\nthat I remove from the frontier.",
"start": 1570.92,
"duration": 3.94
},
{
"text": "So the most recent thing to go into the\nstack, or the frontier in this case,",
"start": 1574.86,
"duration": 3.86
},
{
"text": "is going to be the node that I explore.",
"start": 1578.72,
"duration": 2.94
},
{
"text": "So let's see what happens if I\napply this stack based approach",
"start": 1581.66,
"duration": 3.51
},
{
"text": "to something like this problem,\nfinding a path from A to E.",
"start": 1585.17,
"duration": 4.68
},
{
"text": "What's going to happen?",
"start": 1589.85,
"duration": 1.06
},
{
"text": "Well, again we'll start with\nA. And we'll say, all right.",
"start": 1590.91,
"duration": 2.43
},
{
"text": "Let's go ahead and look at A first.",
"start": 1593.34,
"duration": 1.61
},
{
"text": "And then, notice this time, we've\nadded A to the explored set.",
"start": 1594.95,
"duration": 4.02
},
{
"text": "A is something we've now explored,\nwe have this data structure",
"start": 1598.97,
"duration": 2.82
},
{
"text": "that's keeping track.",
"start": 1601.79,
"duration": 1.53
},
{
"text": "We then say from A we can\nget to B. And all right.",
"start": 1603.32,
"duration": 3.3
},
{
"text": "From B what can we do?",
"start": 1606.62,
"duration": 1.42
},
{
"text": "Well from B, we can explore\nB and get to both C and D.",
"start": 1608.04,
"duration": 4.19
},
{
"text": "So we added C and then D. So\nnow when we explore a node,",
"start": 1612.23,
"duration": 4.8
},
{
"text": "we're going to treat the frontier\nas a stack, last in, first out.",
"start": 1617.03,
"duration": 3.36
},
{
"text": "D was the last one to come in so\nwe'll go ahead and explore that next.",
"start": 1620.39,
"duration": 4.1
},
{
"text": "And say, all right, where\ncan we get to from D?",
"start": 1624.49,
"duration": 1.96
},
{
"text": "Well we can get to F. And so, all right.",
"start": 1626.45,
"duration": 2.19
},
{
"text": "We'll put F into the frontier.",
"start": 1628.64,
"duration": 2.58
},
{
"text": "And now because the\nfrontier is a stack, F",
"start": 1631.22,
"duration": 2.58
},
{
"text": "is the most recent thing\nthat's gone in the stack.",
"start": 1633.8,
"duration": 2.83
},
{
"text": "So F is what we'll explore next.",
"start": 1636.63,
"duration": 1.37
},
{
"text": "We'll explore F and say, all right.",
"start": 1638.0,
"duration": 1.96
},
{
"text": "Where can we get you from F?",
"start": 1639.96,
"duration": 1.64
},
{
"text": "Well, we can't get anywhere so\nnothing gets added to the frontier.",
"start": 1641.6,
"duration": 3.37
},
{
"text": "So now what was the new most\nrecent thing added to the frontier?",
"start": 1644.97,
"duration": 2.9
},
{
"text": "Well it's not C, the only\nthing left in the frontier.",
"start": 1647.87,
"duration": 2.61
},
{
"text": "We'll explore that from which we can\nsay, all right, from C we can get to E.",
"start": 1650.48,
"duration": 3.62
},
{
"text": "So E goes into the frontier.",
"start": 1654.1,
"duration": 1.28
},
{
"text": "And then we say, all right.",
"start": 1655.38,
"duration": 1.13
},
{
"text": "Let's look at E and E\nis now the solution.",
"start": 1656.51,
"duration": 2.71
},
{
"text": "And now we've solved the problem.",
"start": 1659.22,
"duration": 2.52
},
{
"text": "So when we treat the\nfrontier like a stack,",
"start": 1661.74,
"duration": 1.97
},
{
"text": "a last in, first out data\nstructure, that's the result we get.",
"start": 1663.71,
"duration": 3.96
},
{
"text": "We go from A to B to D to F, and then\nwe sort of backed up and went down",
"start": 1667.67,
"duration": 4.98
},
{
"text": "to C and then E. And it's important\nto get a visual sense for how",
"start": 1672.65,
"duration": 3.54
},
{
"text": "this algorithm is working.",
"start": 1676.19,
"duration": 1.17
},
{
"text": "We went very deep in\nthis search tree, so",
"start": 1677.36,
"duration": 2.19
},
{
"text": "to speak, all the way until the\nbottom where we hit a dead end.",
"start": 1679.55,
"duration": 3.12
},
{
"text": "And then we effectively backed\nup and explored this other route",
"start": 1682.67,
"duration": 3.56
},
{
"text": "that we didn't try before.",
"start": 1686.23,
"duration": 1.51
},
{
"text": "And it's this going very\ndeep in the search tree idea,",
"start": 1687.74,
"duration": 2.79
},
{
"text": "this way the algorithm ends up\nworking when we use a stack,",
"start": 1690.53,
"duration": 3.27
},
{
"text": "that we call this version of the\nalgorithm depth first search.",
"start": 1693.8,
"duration": 4.41
},
{
"text": "Depth first search is\nthe search algorithm",
"start": 1698.21,
"duration": 2.13
},
{
"text": "where we always explore the\ndeepest node in the frontier.",
"start": 1700.34,
"duration": 3.5
},
{
"text": "We keep going deeper and\ndeeper through our search tree.",
"start": 1703.84,
"duration": 3.16
},
{
"text": "And then if we hit a dead end, we back\nup and we try something else instead.",
"start": 1707.0,
"duration": 4.66
},
{
"text": "But depth first search is just\none of the possible search options",
"start": 1711.66,
"duration": 3.01
},
{
"text": "that we could use.",
"start": 1714.67,
"duration": 1.11
},
{
"text": "It turns out that there is\nanother algorithm called",
"start": 1715.78,
"duration": 2.34
},
{
"text": "breadth first search, which\nbehaves very similarly to depth",
"start": 1718.12,
"duration": 3.03
},
{
"text": "first search with one difference.",
"start": 1721.15,
"duration": 1.89
},
{
"text": "Instead of always exploring the\ndeepest node in the search tree the way",
"start": 1723.04,
"duration": 3.78
},
{
"text": "the depth first search\ndoes, breadth first search",
"start": 1726.82,
"duration": 2.1
},
{
"text": "is always going to explore the\nshallowest node in the frontier.",
"start": 1728.92,
"duration": 4.45
},
{
"text": "So what does that mean?",
"start": 1733.37,
"duration": 0.99
},
{
"text": "Well, it means that instead\nof using a stack, which",
"start": 1734.36,
"duration": 2.54
},
{
"text": "depth first search, or DFS, used\nwhere the most recent item added",
"start": 1736.9,
"duration": 3.75
},
{
"text": "to the frontier is the one we'll explore\nnext, in breadth first search, or BFS,",
"start": 1740.65,
"duration": 5.7
},
{
"text": "will instead use a queue where a\nqueue is a first in, first out data",
"start": 1746.35,
"duration": 4.77
},
{
"text": "type, where the very first thing we\nadd to the frontier is the first one",
"start": 1751.12,
"duration": 3.54
},
{
"text": "we'll explore.",
"start": 1754.66,
"duration": 0.81
},
{
"text": "And they effectively\nform a line or a queue,",
"start": 1755.47,
"duration": 1.91
},
{
"text": "where the earlier you arrive in the\nfrontier, the earlier you get explored.",
"start": 1757.38,
"duration": 5.89
},
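The stack/queue contrast just described is easy to see in a few lines of Python. This is a hedged sketch of the general idea, not code from the lecture:

```python
from collections import deque

# Stack (last in, first out): the data structure depth-first search uses.
stack = ["A", "B", "C"]          # "C" was added most recently
dfs_next = stack.pop()           # removes from the end of the list

# Queue (first in, first out): the data structure breadth-first search uses.
queue = deque(["A", "B", "C"])   # "A" arrived earliest
bfs_next = queue.popleft()       # removes from the front of the line
```

With identical contents, the stack hands back the newest item ("C") while the queue hands back the oldest ("A"), which is exactly why the same search loop explores so differently depending on which one backs the frontier.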
{
"text": "So what would that mean for the same\nexact problem finding a path from A",
"start": 1763.27,
"duration": 3.33
},
{
"text": "to E?",
"start": 1766.6,
"duration": 1.35
},
{
"text": "Well we start with A, same as before.",
"start": 1767.95,
"duration": 2.25
},
{
"text": "Then we'll go ahead and have explored\nA, and say, where can we get to from A?",
"start": 1770.2,
"duration": 3.21
},
{
"text": "Well, from A we can get\nto B. Same as before.",
"start": 1773.41,
"duration": 2.67
},
{
"text": "From B, same as before.",
"start": 1776.08,
"duration": 1.26
},
{
"text": "We can get to C and D so C and\nD get added to the frontier.",
"start": 1777.34,
"duration": 3.73
},
{
"text": "This time, though, we\nadded C to the frontier",
"start": 1781.07,
"duration": 2.39
},
{
"text": "before D so we'll explore C first.",
"start": 1783.46,
"duration": 3.21
},
{
"text": "So C gets explored.",
"start": 1786.67,
"duration": 1.68
},
{
"text": "And from C, where can we get to?",
"start": 1788.35,
"duration": 1.67
},
{
"text": "Well, we can get to E.",
"start": 1790.02,
"duration": 1.45
},
{
"text": "So E gets added to the frontier.",
"start": 1791.47,
"duration": 2.22
},
{
"text": "But because D was explored\nbefore E, we'll look at D next.",
"start": 1793.69,
"duration": 4.6
},
{
"text": "So we'll explore D and say,\nwhere can we get to from D?",
"start": 1798.29,
"duration": 2.3
},
{
"text": "We can get to F. And only\nthen will we say, all right.",
"start": 1800.59,
"duration": 3.27
},
{
"text": "Now we can get to E. And so what\nbreadth first search, or BFS,",
"start": 1803.86,
"duration": 3.69
},
{
"text": "did is we started here,\nwe looked at both C and D,",
"start": 1807.55,
"duration": 4.74
},
{
"text": "and then we looked at\nE. Effectively we're",
"start": 1812.29,
"duration": 1.95
},
{
"text": "looking at things one away\nfrom the initial state,",
"start": 1814.24,
"duration": 2.55
},
{
"text": "then two away from the initial state.",
"start": 1816.79,
"duration": 1.83
},
{
"text": "And only then, things that are\nthree away from the initial state.",
"start": 1818.62,
"duration": 3.88
},
{
"text": "Unlike depth first search, which\njust went as deep as possible",
"start": 1822.5,
"duration": 3.47
},
{
"text": "into the search tree until\nit hit a dead end and then,",
"start": 1825.97,
"duration": 2.73
},
{
"text": "ultimately, had to back up.",
"start": 1828.7,
"duration": 2.22
},
{
"text": "So these now are two\ndifferent search algorithms",
"start": 1830.92,
"duration": 2.4
},
{
"text": "that we could apply in order\nto try and solve a problem.",
"start": 1833.32,
"duration": 2.48
},
{
"text": "And let's take a look at\nhow these would actually",
"start": 1835.8,
"duration": 2.05
},
{
"text": "work in practice with something\nlike maze solving, for example.",
"start": 1837.85,
"duration": 4.06
},
{
"text": "So here's an example of a maze.",
"start": 1841.91,
"duration": 1.46
},
{
"text": "These empty cells represent\nplaces where our agent can move.",
"start": 1843.37,
"duration": 3.21
},
{
"text": "These darkened gray\ncells and represent walls",
"start": 1846.58,
"duration": 2.76
},
{
"text": "that the agent can't pass through.",
"start": 1849.34,
"duration": 1.71
},
{
"text": "And, ultimately, our\nagent, our AI, is going",
"start": 1851.05,
"duration": 2.7
},
{
"text": "to try to find a way\nto get from position A",
"start": 1853.75,
"duration": 2.84
},
{
"text": "to position B via some sequence of\nactions, where those actions are left,",
"start": 1856.59,
"duration": 4.09
},
{
"text": "right, up, and down.",
"start": 1860.68,
"duration": 2.25
},
{
"text": "What will depth first\nsearch do in this case?",
"start": 1862.93,
"duration": 2.49
},
{
"text": "Well depth first search\nwill just follow one path.",
"start": 1865.42,
"duration": 2.85
},
{
"text": "If it reaches a fork in the road where\nit has multiple different options,",
"start": 1868.27,
"duration": 3.32
},
{
"text": "depth first search is just, in\nthis case, going to choose one.",
"start": 1871.59,
"duration": 2.61
},
{
"text": "There isn't a real preference.",
"start": 1874.2,
"duration": 1.36
},
{
"text": "But it's going to keep following\none until it hits a dead end.",
"start": 1875.56,
"duration": 3.72
},
{
"text": "And when it hits a dead\nend, depth first search",
"start": 1879.28,
"duration": 2.34
},
{
"text": "effectively goes back to\nthe last decision point",
"start": 1881.62,
"duration": 3.18
},
{
"text": "and tries the other path.",
"start": 1884.8,
"duration": 1.53
},
{
"text": "Fully exhausting this\nentire path and when",
"start": 1886.33,
"duration": 2.43
},
{
"text": "it realizes that, OK,\nthe goal is not here,",
"start": 1888.76,
"duration": 2.13
},
{
"text": "then it turns its\nattention to this path.",
"start": 1890.89,
"duration": 1.86
},
{
"text": "It goes as deep as possible.",
"start": 1892.75,
"duration": 1.86
},
{
"text": "When it hits a dead end, it backs\nup and then tries this other path,",
"start": 1894.61,
"duration": 4.44
},
{
"text": "keeps going as deep as possible\ndown one particular path,",
"start": 1899.05,
"duration": 3.06
},
{
"text": "and when it realizes that that's\na dead end, then it'll back up.",
"start": 1902.11,
"duration": 3.36
},
{
"text": "And then ultimately find\nits way to the goal.",
"start": 1905.47,
"duration": 2.76
},
{
"text": "And maybe you got lucky and maybe you\nmade a different choice earlier on,",
"start": 1908.23,
"duration": 3.45
},
{
"text": "but ultimately this is how depth\nfirst search is going to work.",
"start": 1911.68,
"duration": 2.91
},
{
"text": "It's going to keep following\nuntil it hits a dead end.",
"start": 1914.59,
"duration": 2.31
},
{
"text": "And when it hits a dead end, it backs\nup and looks for a different solution.",
"start": 1916.9,
"duration": 3.96
},
{
"text": "And so one thing you\nmight reasonably ask",
"start": 1920.86,
"duration": 1.71
},
{
"text": "is, is this algorithm\nalways going to work?",
"start": 1922.57,
"duration": 2.44
},
{
"text": "Will it always actually find a way to\nget from the initial state to the goal?",
"start": 1925.01,
"duration": 4.19
},
{
"text": "And it turns out that\nas long as our maze",
"start": 1929.2,
"duration": 2.22
},
{
"text": "is finite, as long as they're\nthat finitely many spaces where",
"start": 1931.42,
"duration": 3.21
},
{
"text": "we can travel, then yes.",
"start": 1934.63,
"duration": 1.26
},
{
"text": "Depth first search is going to find\na solution because eventually it",
"start": 1935.89,
"duration": 4.01
},
{
"text": "will just explore everything.",
"start": 1939.9,
"duration": 1.39
},
{
"text": "If the maze happens to be infinite and\nthere's an infinite state space, which",
"start": 1941.29,
"duration": 3.3
},
{
"text": "does exist in certain types of problems,\nthen it's a slightly different story.",
"start": 1944.59,
"duration": 3.76
},
{
"text": "But as long as our maze\nhas finitely many squares,",
"start": 1948.35,
"duration": 2.6
},
{
"text": "we're going to find a solution.",
"start": 1950.95,
"duration": 2.07
},
{
"text": "The next question, though,\nthat we want to ask",
"start": 1953.02,
"duration": 1.92
},
{
"text": "is, is it going to be a good solution?",
"start": 1954.94,
"duration": 2.1
},
{
"text": "Is it the optimal\nsolution that we can find?",
"start": 1957.04,
"duration": 2.7
},
{
"text": "And the answer there is not necessarily.",
"start": 1959.74,
"duration": 2.45
},
{
"text": "And let's take a look\nat an example of that.",
"start": 1962.19,
"duration": 1.84
},
{
"text": "In this maze, for example, we're again\ntrying to find our way from A to B.",
"start": 1964.03,
"duration": 4.86
},
{
"text": "And you notice here there are\nmultiple possible solutions.",
"start": 1968.89,
"duration": 2.58
},
{
"text": "We could go this way, or\nwe could go up in order",
"start": 1971.47,
"duration": 3.09
},
{
"text": "to make our way from A\nto B. Now if we're lucky,",
"start": 1974.56,
"duration": 2.79
},
{
"text": "depth first search will choose this way\nand get to B. But there's no reason,",
"start": 1977.35,
"duration": 3.69
},
{
"text": "necessarily, why depth\nfirst search would choose",
"start": 1981.04,
"duration": 2.1
},
{
"text": "between going up or going to the right.",
"start": 1983.14,
"duration": 2.3
},
{
"text": "It's sort of an arbitrary\ndecision point because both",
"start": 1985.44,
"duration": 2.83
},
{
"text": "are going to be added to the frontier.",
"start": 1988.27,
"duration": 2.15
},
{
"text": "And ultimately, if we get\nunlucky, depth first search",
"start": 1990.42,
"duration": 2.89
},
{
"text": "might choose to explore this\npath first because it's just",
"start": 1993.31,
"duration": 2.64
},
{
"text": "a random choice at this point.",
"start": 1995.95,
"duration": 1.48
},
{
"text": "It will explore, explore,\nexplore, and it'll eventually",
"start": 1997.43,
"duration": 3.14
},
{
"text": "find the goal, this particular\npath, when in actuality there",
"start": 2000.57,
"duration": 3.54
},
{
"text": "was a better path.",
"start": 2004.11,
"duration": 0.9
},
{
"text": "There was a more optimal\nsolution that used fewer steps,",
"start": 2005.01,
"duration": 3.93
},
{
"text": "assuming we're measuring the cost of a\nsolution based on the number of steps",
"start": 2008.94,
"duration": 3.72
},
{
"text": "that we need to take.",
"start": 2012.66,
"duration": 1.27
},
{
"text": "So depth first search, if\nwe're unlucky, might end up",
"start": 2013.93,
"duration": 2.75
},
{
"text": "not finding the best solution when\na better solution is available.",
"start": 2016.68,
"duration": 5.2
},
{
"text": "So if that's DFS, depth first search.",
"start": 2021.88,
"duration": 2.49
},
{
"text": "How does BFS, or breadth\nfirst search, compare?",
"start": 2024.37,
"duration": 2.93
},
{
"text": "How would it work in this\nparticular situation?",
"start": 2027.3,
"duration": 2.25
},
{
"text": "Well the algorithm is going to\nlook very different visually",
"start": 2029.55,
"duration": 2.94
},
{
"text": "in terms of how BFS explores.",
"start": 2032.49,
"duration": 2.31
},
{
"text": "Because BFS looks at\nshallower nodes first,",
"start": 2034.8,
"duration": 3.0
},
{
"text": "the idea is going to be BFS will\nfirst look at all of the nodes that",
"start": 2037.8,
"duration": 4.05
},
{
"text": "are one away from the initial state.",
"start": 2041.85,
"duration": 2.34
},
{
"text": "Look here and look here, for example.",
"start": 2044.19,
"duration": 1.86
},
{
"text": "Just at the two nodes that are\nimmediately next to this initial state.",
"start": 2046.05,
"duration": 4.54
},
{
"text": "Then it will explore\nnodes that are two away,",
"start": 2050.59,
"duration": 1.88
},
{
"text": "looking at the state and\nthat state, for example.",
"start": 2052.47,
"duration": 2.58
},
{
"text": "Then it will explore nodes that are\nthree away, this state and that state.",
"start": 2055.05,
"duration": 3.09
},
{
"text": "Whereas depth first search just\npicked one path and kept following it,",
"start": 2058.14,
"duration": 4.05
},
{
"text": "breadth first search\non the other hand, is",
"start": 2062.19,
"duration": 2.02
},
{
"text": "taking the option of exploring\nall of the possible paths",
"start": 2064.21,
"duration": 3.36
},
{
"text": "kind of at the same time,\nbouncing back between them,",
"start": 2067.57,
"duration": 2.58
},
{
"text": "looking deeper and deeper\nat each one, but making",
"start": 2070.15,
"duration": 2.37
},
{
"text": "sure to explore the shallower\nones or the ones that are",
"start": 2072.52,
"duration": 2.88
},
{
"text": "closer to the initial state earlier.",
"start": 2075.4,
"duration": 2.66
},
{
"text": "So we'll keep following this pattern,\nlooking at things that are four away,",
"start": 2078.06,
"duration": 3.13
},
{
"text": "looking at things that\nare five away, looking",
"start": 2081.19,
"duration": 1.98
},
{
"text": "at things that are six away, until\neventually we make our way to the goal.",
"start": 2083.17,
"duration": 5.06
},
{
"text": "And in this case, it's true we\nhad to explore some states that",
"start": 2088.23,
"duration": 2.86
},
{
"text": "ultimately didn't lead us anywhere.",
"start": 2091.09,
"duration": 1.79
},
{
"text": "But the path that we found to\nthe goal was the optimal path.",
"start": 2092.88,
"duration": 3.41
},
{
"text": "This is the shortest way that\nwe could get to the goal.",
"start": 2096.29,
"duration": 3.56
},
{
"text": "And so, what might happen\nthen in a larger maze?",
"start": 2099.85,
"duration": 3.12
},
{
"text": "Well let's take a look\nat something like this",
"start": 2102.97,
"duration": 1.88
},
{
"text": "and how breadth first\nsearch is going to behave.",
"start": 2104.85,
"duration": 2.01
},
{
"text": "Well, breadth first\nsearch, again, will just",
"start": 2106.86,
"duration": 1.83
},
{
"text": "keep following the states until\nit receives a decision point.",
"start": 2108.69,
"duration": 2.58
},
{
"text": "It could go either left or right.",
"start": 2111.27,
"duration": 2.19
},
{
"text": "And while DFS just picked\none and kept following",
"start": 2113.46,
"duration": 3.39
},
{
"text": "that until it hit a dead end, BFS on\nthe other hand, will explore both.",
"start": 2116.85,
"duration": 4.77
},
{
"text": "It'll say, look at this\nnode, then this node,",
"start": 2121.62,
"duration": 2.1
},
{
"text": "and I'll look at this node, then\nthat node, so on and so forth.",
"start": 2123.72,
"duration": 3.79
},
{
"text": "And when it hits a decision point here,\nrather than pick one left or two right",
"start": 2127.51,
"duration": 4.34
},
{
"text": "and explore that path, it will again\nexplore both alternating between them,",
"start": 2131.85,
"duration": 4.5
},
{
"text": "going deeper and deeper.",
"start": 2136.35,
"duration": 1.03
},
{
"text": "Will explore here, and then maybe\nhere and here, and then keep going.",
"start": 2137.38,
"duration": 4.19
},
{
"text": "Explore here and slowly make\nour way, you can visually",
"start": 2141.57,
"duration": 3.21
},
{
"text": "see further and further out.",
"start": 2144.78,
"duration": 1.71
},
{
"text": "Once we get to this\ndecision point, we'll",
"start": 2146.49,
"duration": 1.71
},
{
"text": "explore both up and down\nuntil, ultimately, we",
"start": 2148.2,
"duration": 4.59
},
{
"text": "make our way to the goal.",
"start": 2152.79,
"duration": 2.78
},
{
"text": "And what you'll notice is,\nyes, breadth first search",
"start": 2155.57,
"duration": 2.68
},
{
"text": "did find our way from A to B by\nfollowing this particular path.",
"start": 2158.25,
"duration": 4.38
},
{
"text": "But it needed to explore a lot\nof states in order to do so.",
"start": 2162.63,
"duration": 3.78
},
{
"text": "And so we see some trade\nhere between DFS and BFS.",
"start": 2166.41,
"duration": 3.03
},
{
"text": "That in DFS there may be some cases\nwhere there is some memory savings,",
"start": 2169.44,
"duration": 3.99
},
{
"text": "as compared to a breadth\nfirst approach where",
"start": 2173.43,
"duration": 2.85
},
{
"text": "breadth first search, in this case,\nhad to explore a lot of states.",
"start": 2176.28,
"duration": 2.93
},
{
"text": "But maybe that won't always be the case.",
"start": 2179.21,
"duration": 3.26
},
{
"text": "So now let's actually turn\nour attention to some code.",
"start": 2182.47,
"duration": 2.45
},
{
"text": "And look at the code\nthat we could actually",
"start": 2184.92,
"duration": 1.8
},
{
"text": "write in order to implement something\nlike depth first search or breadth",
"start": 2186.72,
"duration": 3.69
},
{
"text": "for the search in the context\nof solving a maze, for example.",
"start": 2190.41,
"duration": 4.5
},
{
"text": "So I'll go ahead and\ngo into my terminal.",
"start": 2194.91,
"duration": 2.43
},
{
"text": "And what I have here inside of\nmaze.pi is an implementation",
"start": 2197.34,
"duration": 3.9
},
{
"text": "of this same idea of maze solving.",
"start": 2201.24,
"duration": 2.4
},
{
"text": "I've defined a class called\nnode that in this case",
"start": 2203.64,
"duration": 3.06
},
{
"text": "is keeping track of the state,\nthe parent, in other words",
"start": 2206.7,
"duration": 2.92
},
{
"text": "the state before the\nstate, and the action.",
"start": 2209.62,
"duration": 2.24
},
{
"text": "In this case, we're not\nkeeping track of the path cost",
"start": 2211.86,
"duration": 2.25
},
{
"text": "because we can calculate the\ncost of the path at the end",
"start": 2214.11,
"duration": 2.67
},
{
"text": "after we found our way from\nthe initial state to the goal.",
"start": 2216.78,
"duration": 4.11
},
{
"text": "In addition to this, I've defined\na class called a stack frontier.",
"start": 2220.89,
"duration": 4.65
},
{
"text": "And if unfamiliar with a\nclass, a class is a way for me",
"start": 2225.54,
"duration": 3.24
},
{
"text": "to define a way to\ngenerate objects in Python.",
"start": 2228.78,
"duration": 3.15
},
{
"text": "It refers to an idea of object oriented\nprogramming where the idea here",
"start": 2231.93,
"duration": 4.14
},
{
"text": "is that I would like to\ncreate an object that is",
"start": 2236.07,
"duration": 2.67
},
{
"text": "able to store all of my Frontier Data.",
"start": 2238.74,
"duration": 2.16
},
{
"text": "And I would like to have\nfunctions, otherwise known",
"start": 2240.9,
"duration": 2.13
},
{
"text": "as methods on that object, that I\ncan use to manipulate the object.",
"start": 2243.03,
"duration": 4.38
},
{
"text": "And so what's going on here,\nif unfamiliar with the syntax,",
"start": 2247.41,
"duration": 3.72
},
{
"text": "is I have a function that\ninitially creates a frontier",
"start": 2251.13,
"duration": 3.3
},
{
"text": "that I'm going to\nrepresent using a list.",
"start": 2254.43,
"duration": 1.98
},
{
"text": "And initially my frontier is\nrepresented by the empty list.",
"start": 2256.41,
"duration": 3.4
},
{
"text": "There's nothing in my\nfrontier to begin with.",
"start": 2259.81,
"duration": 3.06
},
{
"text": "I have an add function that\nadds something to the frontier,",
"start": 2262.87,
"duration": 3.15
},
{
"text": "as by appending it to\nthe end of the list.",
"start": 2266.02,
"duration": 3.24
},
{
"text": "I have a function that checks if the\nfrontier contains a particular state.",
"start": 2269.26,
"duration": 4.14
},
{
"text": "I have an empty function that\nchecks if the frontier is empty.",
"start": 2273.4,
"duration": 2.59
},
{
"text": "If the frontier is empty, that just\nmeans the length of the frontier",
"start": 2275.99,
"duration": 2.84
},
{
"text": "is zero.",
"start": 2278.83,
"duration": 1.35
},
{
"text": "And then I have a function for\nremoving something from the frontier.",
"start": 2280.18,
"duration": 3.13
},
{
"text": "I can't remove something from the\nfrontier if the frontier is empty.",
"start": 2283.31,
"duration": 2.84
},
{
"text": "So I check for that first.",
"start": 2286.15,
"duration": 1.62
},
{
"text": "But otherwise, if the\nfrontier isn't empty,",
"start": 2287.77,
"duration": 2.94
},
{
"text": "recall that I'm implementing\nthis frontier as a stack,",
"start": 2290.71,
"duration": 3.23
},
{
"text": "a last in, first out data structure.",
"start": 2293.94,
"duration": 3.26
},
{
"text": "Which means the last thing\nI add to the frontier,",
"start": 2297.2,
"duration": 2.39
},
{
"text": "in other words, the\nlast thing in the list,",
"start": 2299.59,
"duration": 1.98
},
{
"text": "is the item that I should\nremove from this frontier.",
"start": 2301.57,
"duration": 4.18
},
{
"text": "So what you'll see here is I have\nremoved the last item of a list.",
"start": 2305.75,
"duration": 4.44
},
{
"text": "And if you index into a\nPython list with negative one,",
"start": 2310.19,
"duration": 2.97
},
{
"text": "that gets you the last item in the list.",
"start": 2313.16,
"duration": 1.89
},
{
"text": "Since zero is the first\nitem, negative one",
"start": 2315.05,
"duration": 2.58
},
{
"text": "kind of wraps around and gets\nyou to the last item in the list.",
"start": 2317.63,
"duration": 3.86
},
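As a quick illustration of that negative indexing:

```python
items = ["first", "middle", "last"]

# Index 0 is the first item; index -1 wraps around to the last item.
last_item = items[-1]

# Slicing with [:-1] keeps everything except that last item,
# which is how a stack can drop the element it just removed.
remaining = items[:-1]
```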
{
"text": "So we give that the node.",
"start": 2321.49,
"duration": 1.82
},
{
"text": "We call that node, we update the\nfrontier here on line 28 to say,",
"start": 2323.31,
"duration": 3.4
},
{
"text": "go ahead and remove that node that\nyou just removed from the frontier.",
"start": 2326.71,
"duration": 3.29
},
{
"text": "And then we return the node as a\nresult. So this class here effectively",
"start": 2330.0,
"duration": 4.86
},
{
"text": "implements the idea of a frontier.",
"start": 2334.86,
"duration": 2.19
},
{
"text": "It gives me a way to add\nsomething to a frontier and a way",
"start": 2337.05,
"duration": 2.64
},
{
"text": "to remove something from\nthe frontier as a stack.",
"start": 2339.69,
"duration": 3.44
},
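Putting those pieces together, a stack frontier along the lines just described might look like the following. This is a sketch that follows the description above; the actual maze.py may differ in its details:

```python
from collections import namedtuple

# A minimal stand-in for the lecture's Node class, just for this demo.
Node = namedtuple("Node", ["state"])

class StackFrontier:
    def __init__(self):
        self.frontier = []                  # the frontier starts out empty

    def add(self, node):
        self.frontier.append(node)          # append to the end of the list

    def contains_state(self, state):
        return any(node.state == state for node in self.frontier)

    def empty(self):
        return len(self.frontier) == 0

    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        node = self.frontier[-1]            # last in, first out
        self.frontier = self.frontier[:-1]  # drop it from the frontier
        return node

frontier = StackFrontier()
frontier.add(Node("A"))
frontier.add(Node("B"))
removed = frontier.remove()  # "B" was added last, so it comes out first
```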
{
"text": "I've also, just for good measure,\nimplemented an alternative version",
"start": 2343.13,
"duration": 3.7
},
{
"text": "of the same thing called a Q frontier.",
"start": 2346.83,
"duration": 2.67
},
{
"text": "Which, in parentheses you'll see here,\nit inherits from a stack frontier,",
"start": 2349.5,
"duration": 3.69
},
{
"text": "meaning it's going to do all the same\nthings that the stack frontier did,",
"start": 2353.19,
"duration": 3.45
},
{
"text": "except the way we remove\na node from the frontier",
"start": 2356.64,
"duration": 2.8
},
{
"text": "is going to be slightly different.",
"start": 2359.44,
"duration": 1.65
},
{
"text": "Instead of removing from the end of\nthe list the way we would in a stack,",
"start": 2361.09,
"duration": 3.15
},
{
"text": "we're instead going to remove\nfrom the beginning of the list.",
"start": 2364.24,
"duration": 2.57
},
{
"text": "self.frontierzero will get me\nthe first node in the frontier,",
"start": 2366.81,
"duration": 4.78
},
{
"text": "the first one that was added.",
"start": 2371.59,
"duration": 1.28
},
{
"text": "And that is going to be the one\nthat we return in the case of a Q.",
"start": 2372.87,
"duration": 4.68
},
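That inheritance relationship can be sketched as follows; a condensed stack frontier is repeated here so the snippet runs on its own, and the details are an illustration rather than the lecture's exact file:

```python
from collections import namedtuple

Node = namedtuple("Node", ["state"])   # stand-in for the lecture's Node class

class StackFrontier:                   # condensed version, repeated so this
    def __init__(self):                # snippet is self-contained
        self.frontier = []
    def add(self, node):
        self.frontier.append(node)
    def empty(self):
        return len(self.frontier) == 0
    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        node = self.frontier[-1]
        self.frontier = self.frontier[:-1]
        return node

class QueueFrontier(StackFrontier):
    """Inherits everything from StackFrontier; only remove() differs."""
    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        node = self.frontier[0]         # first in, first out
        self.frontier = self.frontier[1:]
        return node

q = QueueFrontier()
q.add(Node("A"))
q.add(Node("B"))
first_out = q.remove()  # the earliest arrival, "A", leaves first
```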
{
"text": "Under here I have a definition\nof a class called maze.",
"start": 2377.55,
"duration": 2.79
},
{
"text": "This is going to handle the process\nof taking a sequence, a maze like text",
"start": 2380.34,
"duration": 4.71
},
{
"text": "file, and figuring out how to solve it.",
"start": 2385.05,
"duration": 2.35
},
{
"text": "So we'll take as input a text\nfile that looks something",
"start": 2387.4,
"duration": 3.08
},
{
"text": "like this, for example, where we see\nhash marks that are here representing",
"start": 2390.48,
"duration": 3.33
},
{
"text": "walls and I have the character A\nrepresenting the starting position,",
"start": 2393.81,
"duration": 4.02
},
{
"text": "and the character B representing\nthe ending position.",
"start": 2397.83,
"duration": 3.91
},
{
"text": "And you can take a look at the code\nfor parsing this text file right now.",
"start": 2401.74,
"duration": 3.05
},
{
"text": "That's the less interesting part.",
"start": 2404.79,
"duration": 1.53
},
{
"text": "The more interesting part\nis this solve function here,",
"start": 2406.32,
"duration": 3.06
},
{
"text": "where the solve function\nis going to figure out",
"start": 2409.38,
"duration": 2.13
},
{
"text": "how to actually get\nfrom point A to point B.",
"start": 2411.51,
"duration": 3.72
},
{
"text": "And here we see an implementation\nof the exact same idea",
"start": 2415.23,
"duration": 3.03
},
{
"text": "we saw from a moment ago.",
"start": 2418.26,
"duration": 1.64
},
{
"text": "We're going to keep\ntrack of how many states",
"start": 2419.9,
"duration": 1.84
},
{
"text": "we've explored just so we\ncan report that data later.",
"start": 2421.74,
"duration": 2.78
},
{
"text": "But I start with a node that\nrepresents just the start state.",
"start": 2424.52,
"duration": 5.25
},
{
"text": "And I start with a frontier that\nin this case is a stack frontier.",
"start": 2429.77,
"duration": 4.27
},
{
"text": "And given that I'm treating\nmy frontier as a stack,",
"start": 2434.04,
"duration": 2.13
},
{
"text": "you might imagine that the algorithm I'm\nusing here is now depth first search.",
"start": 2436.17,
"duration": 3.99
},
{
"text": "Because depth first search or DFS\nuses a stack as its data structure.",
"start": 2440.16,
"duration": 5.07
},
{
"text": "And initially, this frontier is just\ngoing to contain the start state.",
"start": 2445.23,
"duration": 5.12
},
{
"text": "We initialize an explored\nset that initially is empty.",
"start": 2450.35,
"duration": 3.04
},
{
"text": "There's nothing we've explored so far.",
"start": 2453.39,
"duration": 2.03
},
{
"text": "And now here's our loop, that notion\nof repeating something again and again.",
"start": 2455.42,
"duration": 4.56
},
{
"text": "First, we check if the frontier is empty\nby calling that empty function that we",
"start": 2459.98,
"duration": 3.78
},
{
"text": "saw the implementation of a moment ago.",
"start": 2463.76,
"duration": 2.13
},
{
"text": "And if the frontier\nis indeed empty, we'll",
"start": 2465.89,
"duration": 2.19
},
{
"text": "go ahead and raise an exception,\nor a Python error, to say, sorry.",
"start": 2468.08,
"duration": 3.57
},
{
"text": "There is no solution to this problem.",
"start": 2471.65,
"duration": 3.58
},
{
"text": "Otherwise, we'll go ahead and\nremove a node from the frontier,",
"start": 2475.23,
"duration": 3.4
},
{
"text": "as by calling frontier.remove and update\nthe number of states we've explored.",
"start": 2478.63,
"duration": 4.33
},
{
"text": "Because now we've explored\none additional state",
"start": 2482.96,
"duration": 2.4
},
{
"text": "so we say self.numexplored plus equals\none, adding one to the number of states",
"start": 2485.36,
"duration": 4.74
},
{
"text": "we've explored.",
"start": 2490.1,
"duration": 1.78
},
{
"text": "Once we remove a node\nfrom the frontier, recall",
"start": 2491.88,
"duration": 2.55
},
{
"text": "that the next step is to see whether\nor not it's the goal, the goal test.",
"start": 2494.43,
"duration": 3.9
},
{
"text": "And in the case of the maze,\nthe goal is pretty easy.",
"start": 2498.33,
"duration": 2.52
},
{
"text": "I check to see whether the state\nof the node is equal to the goal.",
"start": 2500.85,
"duration": 4.26
},
{
"text": "Initially when I set\nup the maze, I set up",
"start": 2505.11,
"duration": 1.98
},
{
"text": "this value called goal which\nis the property of the maze",
"start": 2507.09,
"duration": 2.8
},
{
"text": "so I can just check to see if\nthe node is actually the goal.",
"start": 2509.89,
"duration": 3.41
},
{
"text": "And if it is the goal,\nthen what I want to do",
"start": 2513.3,
"duration": 2.76
},
{
"text": "is backtrack my way towards\nfiguring out what actions",
"start": 2516.06,
"duration": 3.39
},
{
"text": "I took in order to get to this goal.",
"start": 2519.45,
"duration": 2.91
},
{
"text": "And how do I do that?",
"start": 2522.36,
"duration": 1.11
},
{
"text": "We'll recall that every\nnode stores its parent--",
"start": 2523.47,
"duration": 2.67
},
{
"text": "the node that came before it that\nwe used to get to this node--",
"start": 2526.14,
"duration": 2.97
},
{
"text": "and also the action used\nin order to get there.",
"start": 2529.11,
"duration": 2.58
},
{
"text": "So I can create this\nloop where I'm constantly",
"start": 2531.69,
"duration": 2.01
},
{
"text": "just looking at the parent\nof every node and keeping",
"start": 2533.7,
"duration": 3.3
},
{
"text": "track, for all of the parents, what\naction I took to get from the parent",
"start": 2537.0,
"duration": 3.6
},
{
"text": "to this.",
"start": 2540.6,
"duration": 1.29
},
{
"text": "So this loop is going to keep repeating\nthis process of looking through all",
"start": 2541.89,
"duration": 3.36
},
{
"text": "of the parent nodes until we get\nback to the initial state, which",
"start": 2545.25,
"duration": 3.42
},
{
"text": "has no parent, where node.parent\nis going to be equal to none.",
"start": 2548.67,
"duration": 4.27
},
{
"text": "As I do so, I'm going to be\nbuilding up the list of all",
"start": 2552.94,
"duration": 2.3
},
{
"text": "of the actions that I'm following\nand the list of all of the cells",
"start": 2555.24,
"duration": 2.79
},
{
"text": "that are part of the solution.",
"start": 2558.03,
"duration": 1.56
},
{
"text": "But I'll reverse them\nbecause when I build it",
"start": 2559.59,
"duration": 2.43
},
{
"text": "up going from the goal\nback to the initial state,",
"start": 2562.02,
"duration": 2.91
},
{
"text": "I'm building the sequence of actions\nfrom the goal to the initial state,",
"start": 2564.93,
"duration": 3.09
},
{
"text": "but I want to reverse them in order\nto get the sequence of actions",
"start": 2568.02,
"duration": 2.88
},
{
"text": "from the initial state to the goal.",
"start": 2570.9,
"duration": 2.73
},
{
"text": "And that is, ultimately,\ngoing to be the solution.",
"start": 2573.63,
"duration": 3.58
},
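That backtrack-and-reverse step can be sketched on a tiny hand-built chain of nodes; the states and actions here are illustrative, not from the lecture's maze:

```python
from collections import namedtuple

Node = namedtuple("Node", ["state", "parent", "action"])

# A tiny chain of nodes: initial state -> B -> goal.
start = Node("A", None, None)
middle = Node("B", start, "right")
goal = Node("C", middle, "down")

# Walk parent pointers from the goal back toward the start...
actions, cells = [], []
node = goal
while node.parent is not None:
    actions.append(node.action)
    cells.append(node.state)
    node = node.parent

# ...then reverse, so everything reads initial state -> goal.
actions.reverse()
cells.reverse()
```

The loop stops at the initial state because it is the only node whose parent is None, and the reversal turns the goal-to-start ordering into the start-to-goal solution.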
{
"text": "So all of that happens if the\ncurrent state is equal to the goal.",
"start": 2577.21,
"duration": 4.07
},
{
"text": "And otherwise, if it's\nnot the goal, well,",
"start": 2581.28,
"duration": 2.01
},
{
"text": "then I'll go ahead and add this\nstate to the explored set to say,",
"start": 2583.29,
"duration": 3.57
},
{
"text": "I've explored this state now.",
"start": 2586.86,
"duration": 1.38
},
{
"text": "No need to go back to it if I\ncome across it in the future.",
"start": 2588.24,
"duration": 3.27
},
{
"text": "And then, this logic\nhere implements the idea",
"start": 2591.51,
"duration": 3.24
},
{
"text": "of adding neighbors to the frontier.",
"start": 2594.75,
"duration": 2.07
},
{
"text": "I'm saying, look at all of my neighbors.",
"start": 2596.82,
"duration": 1.83
},
{
"text": "And I implemented a function called\nneighbors that you can take a look at.",
"start": 2598.65,
"duration": 2.88
},
{
"text": "And for each of those\nneighbors, I'm going to check,",
"start": 2601.53,
"duration": 2.16
},
{
"text": "is the state already in the frontier?",
"start": 2603.69,
"duration": 2.16
},
{
"text": "Is the state already\nin the explored set?",
"start": 2605.85,
"duration": 2.55
},
{
"text": "And if it's not in either of those, then\nI'll go ahead and add this new child",
"start": 2608.4,
"duration": 4.2
},
{
"text": "node-- this new node--",
"start": 2612.6,
"duration": 1.35
},
{
"text": "to the frontier.",
"start": 2613.95,
"duration": 1.28
},
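The two membership checks just described boil down to a single condition. Here is a simplified sketch using plain sets in place of the frontier object:

```python
explored = {"A", "B"}          # states we've already explored
frontier_states = {"C"}        # states currently waiting in the frontier

def should_add(state):
    # A neighbor joins the frontier only if it isn't already waiting
    # there and hasn't been explored before.
    return state not in frontier_states and state not in explored

new_neighbor = should_add("D")      # a brand-new state
queued_neighbor = should_add("C")   # already in the frontier
old_neighbor = should_add("B")      # already explored
```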
{
"text": "So there's a fair amount of\nsyntax here, but the key here",
"start": 2615.23,
"duration": 2.38
},
{
"text": "is not to understand all\nthe nuances of the syntax,",
"start": 2617.61,
"duration": 2.31
},
{
"text": "though feel free to take a closer\nlook at this file on your own",
"start": 2619.92,
"duration": 2.82
},
{
"text": "to get a sense for how it is working.",
"start": 2622.74,
"duration": 1.92
},
{
"text": "But the key is to see how this is an\nimplementation of the same pseudocode,",
"start": 2624.66,
"duration": 3.48
},
{
"text": "the same idea that we were describing\na moment ago on the screen when we were",
"start": 2628.14,
"duration": 4.92
},
{
"text": "looking at the steps that\nwe might follow in order",
"start": 2633.06,
"duration": 2.16
},
{
"text": "to solve this kind of search problem.",
"start": 2635.22,
"duration": 2.5
},
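The expansion logic described above (goal test, mark explored, add unseen neighbors) can be sketched in a few lines. This is a hypothetical reconstruction, not the lecture's exact maze.py: the `Node` class, the plain-list frontier, and the `neighbors` callback are illustrative stand-ins.

```python
# A minimal sketch of one expansion step: goal test, mark the state
# explored, then queue neighbors we have not seen before.
# Node, neighbors, and the plain-list frontier are illustrative
# stand-ins, not the lecture's exact maze.py classes.

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action

def expand(node, neighbors, frontier, explored, goal):
    """Return the node if it is the goal; otherwise queue new neighbors."""
    if node.state == goal:
        return node  # solution found
    explored.add(node.state)
    frontier_states = {n.state for n in frontier}
    for action, state in neighbors(node.state):
        # Skip states already queued in the frontier or already explored.
        if state not in explored and state not in frontier_states:
            frontier.append(Node(state, parent=node, action=action))
            frontier_states.add(state)
    return None
```

Repeating this step in a loop, each time removing one node from the frontier, is the whole search algorithm.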
{
"text": "So now let's actually\nsee this in action.",
"start": 2637.72,
"duration": 1.82
},
{
"text": "I'll go ahead and run maze.py\non maze1.txt, for example.",
"start": 2639.54,
"duration": 6.0
},
{
"text": "And what we'll see is here we have a\nprintout of what the maze initially",
"start": 2645.54,
"duration": 3.78
},
{
"text": "looked like.",
"start": 2649.32,
"duration": 1.05
},
{
"text": "And then here, down below,\nis after we've solved it.",
"start": 2650.37,
"duration": 2.64
},
{
"text": "We had to explore 11 states in order to\ndo it, and we found a path from A to B.",
"start": 2653.01,
"duration": 4.97
},
{
"text": "And in this program, I just happened\nto generate a graphical representation",
"start": 2657.98,
"duration": 3.13
},
{
"text": "of this, as well--",
"start": 2661.11,
"duration": 0.93
},
{
"text": "so I can open up maze.png, which\nis generated by this program--",
"start": 2662.04,
"duration": 3.36
},
{
"text": "that shows you where, in the\ndarker color here, the wall is.",
"start": 2665.4,
"duration": 3.42
},
{
"text": "Red is the initial\nstate, green is the goal,",
"start": 2668.82,
"duration": 2.04
},
{
"text": "and yellow is the path\nthat was followed.",
"start": 2670.86,
"duration": 2.07
},
{
"text": "We found a path from the\ninitial state to the goal.",
"start": 2672.93,
"duration": 4.3
},
{
"text": "But now let's take a look\nat a more sophisticated maze",
"start": 2677.23,
"duration": 2.82
},
{
"text": "to see what might happen instead.",
"start": 2680.05,
"duration": 2.07
},
{
"text": "Let's look now at maze2.txt, where\nnow here we have a much larger maze.",
"start": 2682.12,
"duration": 4.89
},
{
"text": "Again, we're trying to find our\nway from point A to point B,",
"start": 2687.01,
"duration": 3.27
},
{
"text": "but now you'll imagine that depth-first\nsearch might not be so lucky.",
"start": 2690.28,
"duration": 3.24
},
{
"text": "It might not get the\ngoal on the first try.",
"start": 2693.52,
"duration": 2.49
},
{
"text": "It might have to follow\none path then backtrack",
"start": 2696.01,
"duration": 2.55
},
{
"text": "and explore something\nelse a little bit later.",
"start": 2698.56,
"duration": 3.54
},
{
"text": "So let's try this.",
"start": 2702.1,
"duration": 1.13
},
{
"text": "Run python maze.py maze2.txt,\nthis time trying it on this other maze.",
"start": 2703.23,
"duration": 5.7
},
{
"text": "And now depth-first search\nis able to find a solution.",
"start": 2708.93,
"duration": 3.21
},
{
"text": "Here, as indicated by the stars,\nis a way to get from A to B.",
"start": 2712.14,
"duration": 3.9
},
{
"text": "And we can represent this\nvisually by opening up this maze.",
"start": 2716.04,
"duration": 3.39
},
{
"text": "Here's what that maze looks like.",
"start": 2719.43,
"duration": 1.38
},
{
"text": "And highlighted in yellow, is the path\nthat was found from the initial state",
"start": 2720.81,
"duration": 4.05
},
{
"text": "to the goal.",
"start": 2724.86,
"duration": 1.44
},
{
"text": "But how many states do we have to\nexplore before we found that path?",
"start": 2726.3,
"duration": 5.04
},
{
"text": "Well, recall that, in my program, I was\nkeeping track of the number of states",
"start": 2731.34,
"duration": 3.27
},
{
"text": "that we've explored so far.",
"start": 2734.61,
"duration": 1.92
},
{
"text": "And so I can go back to the terminal\nand see that, all right, in order",
"start": 2736.53,
"duration": 3.75
},
{
"text": "to solve this problem, we had\nto explore 399 different states.",
"start": 2740.28,
"duration": 5.83
},
{
"text": "And in fact, if I make one small\nmodification to the program",
"start": 2746.11,
"duration": 2.75
},
{
"text": "and tell the program at the\nend when we output this image,",
"start": 2748.86,
"duration": 3.09
},
{
"text": "I added an argument\ncalled \"show explored\".",
"start": 2751.95,
"duration": 3.21
},
{
"text": "And if I set \"show\nexplored\" equal to true",
"start": 2755.16,
"duration": 2.82
},
{
"text": "and rerun this program, python maze.py,\nrunning it on maze2,",
"start": 2757.98,
"duration": 4.74
},
{
"text": "and then I open the maze, what you'll\nsee here is, highlighted in red,",
"start": 2762.72,
"duration": 3.6
},
{
"text": "are all of the states that had to be\nexplored to get from the initial state",
"start": 2766.32,
"duration": 4.29
},
{
"text": "to the goal.",
"start": 2770.61,
"duration": 0.9
},
{
"text": "Depth-First Search, or DFS, didn't\nfind its way to the goal right away.",
"start": 2771.51,
"duration": 3.63
},
{
"text": "It made a choice to first\nexplore this direction.",
"start": 2775.14,
"duration": 3.03
},
{
"text": "And when it explored\nthis direction, it had",
"start": 2778.17,
"duration": 1.8
},
{
"text": "to follow every conceivable\npath, all the way",
"start": 2779.97,
"duration": 2.31
},
{
"text": "to the very end, even\nthis long and winding one,",
"start": 2782.28,
"duration": 2.4
},
{
"text": "in order to realize that, you\nknow what, that's a dead end.",
"start": 2784.68,
"duration": 2.76
},
{
"text": "And instead, the program\nneeded to backtrack.",
"start": 2787.44,
"duration": 2.28
},
{
"text": "After going this direction, it\nmust have gone this direction.",
"start": 2789.72,
"duration": 2.94
},
{
"text": "It got lucky here by just\nnot choosing this path.",
"start": 2792.66,
"duration": 2.83
},
{
"text": "But it got unlucky here, exploring this\ndirection, exploring a bunch of states",
"start": 2795.49,
"duration": 3.8
},
{
"text": "that it didn't need\nto and then, likewise,",
"start": 2799.29,
"duration": 1.86
},
{
"text": "exploring all of this\ntop part of the graph",
"start": 2801.15,
"duration": 2.34
},
{
"text": "when it probably didn't\nneed to do that either.",
"start": 2803.49,
"duration": 2.83
},
{
"text": "So all in all, depth-first\nsearch here really",
"start": 2806.32,
"duration": 2.75
},
{
"text": "is not performing optimally, probably\nexploring more states than it needs to.",
"start": 2809.07,
"duration": 3.9
},
{
"text": "It finds an optimal solution,\nthe best path to the goal,",
"start": 2812.97,
"duration": 3.63
},
{
"text": "but the number of states it needed\nto explore in order to do so,",
"start": 2816.6,
"duration": 2.91
},
{
"text": "the number of steps I had to\ntake, that was much higher.",
"start": 2819.51,
"duration": 3.55
},
{
"text": "So let's compare.",
"start": 2823.06,
"duration": 1.01
},
{
"text": "How would Breadth-First Search, or BFS,\ndo on this exact same maze instead?",
"start": 2824.07,
"duration": 4.99
},
{
"text": "And in order to do so,\nit's a very easy change.",
"start": 2829.06,
"duration": 2.57
},
{
"text": "The algorithm for DFS and BFS\nis identical with the exception",
"start": 2831.63,
"duration": 4.92
},
{
"text": "of what data structure we use\nto represent the frontier.",
"start": 2836.55,
"duration": 4.4
},
{
"text": "That in DFS I used a stack frontier--",
"start": 2840.95,
"duration": 2.89
},
{
"text": "last in, first out--",
"start": 2843.84,
"duration": 2.04
},
{
"text": "whereas in BFS, I'm going to\nuse a queue frontier-- first in,",
"start": 2845.88,
"duration": 4.5
},
{
"text": "first out, where the first\nthing I add to the frontier",
"start": 2850.38,
"duration": 2.88
},
{
"text": "is the first thing that I remove.",
"start": 2853.26,
"duration": 2.31
},
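The swap between a stack frontier and a queue frontier can be sketched as two small classes along the lines of the lecture's code (the exact method names here are assumptions based on how they are described):

```python
# The only difference between DFS and BFS is how the frontier removes nodes.

class StackFrontier:
    """Last in, first out: removing the newest node gives depth-first search."""
    def __init__(self):
        self.frontier = []

    def add(self, node):
        self.frontier.append(node)

    def contains_state(self, state):
        return any(node.state == state for node in self.frontier)

    def empty(self):
        return len(self.frontier) == 0

    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        return self.frontier.pop()  # take from the end

class QueueFrontier(StackFrontier):
    """First in, first out: removing the oldest node gives breadth-first search."""
    def remove(self):
        if self.empty():
            raise Exception("empty frontier")
        return self.frontier.pop(0)  # take from the front
```

Everything else in the search loop stays identical; changing which class the frontier is instantiated from is the one-line switch between the two algorithms.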
{
"text": "So I'll go back to the terminal,\nrerun this program on the same maze,",
"start": 2855.57,
"duration": 5.1
},
{
"text": "and now you'll see that the number of\nstates we had to explore was only 77,",
"start": 2860.67,
"duration": 4.61
},
{
"text": "as compared to almost 400 when\nwe used depth-first search.",
"start": 2865.28,
"duration": 3.86
},
{
"text": "And we can see exactly why.",
"start": 2869.14,
"duration": 1.19
},
{
"text": "We can see what happened if we open\nup maze.png now and take a look.",
"start": 2870.33,
"duration": 4.65
},
{
"text": "Again, the yellow highlight is the solution\nthat breadth-first search found,",
"start": 2874.98,
"duration": 4.56
},
{
"text": "which, incidentally, is the same\nsolution that depth-first search found.",
"start": 2879.54,
"duration": 3.48
},
{
"text": "They're both finding the best solution,\nbut notice all the white unexplored",
"start": 2883.02,
"duration": 4.09
},
{
"text": "cells.",
"start": 2887.11,
"duration": 0.5
},
{
"text": "There were far fewer states\nthat needed to be explored",
"start": 2887.61,
"duration": 3.09
},
{
"text": "in order to make our way to the goal\nbecause breadth-first search operates",
"start": 2890.7,
"duration": 4.28
},
{
"text": "a little more shallowly.",
"start": 2894.98,
"duration": 1.0
},
{
"text": "It's exploring things that\nare close to the initial state",
"start": 2895.98,
"duration": 3.09
},
{
"text": "without exploring things\nthat are further away.",
"start": 2899.07,
"duration": 3.1
},
{
"text": "So if the goal is not too far\naway, then breadth-first search",
"start": 2902.17,
"duration": 3.05
},
{
"text": "can actually behave quite\neffectively on a maze that",
"start": 2905.22,
"duration": 2.73
},
{
"text": "looks a little something like this.",
"start": 2907.95,
"duration": 2.92
},
{
"text": "Now, in this case, both BFS and DFS\nended up finding the same solution,",
"start": 2910.87,
"duration": 4.88
},
{
"text": "but that won't always be the case.",
"start": 2915.75,
"duration": 1.93
},
{
"text": "And in fact, let's take a look at one\nmore example, for instance, maze3.txt.",
"start": 2917.68,
"duration": 5.71
},
{
"text": "In maze3.txt, notice that\nhere there are multiple ways",
"start": 2923.39,
"duration": 3.58
},
{
"text": "that you could get from A to B.",
"start": 2926.97,
"duration": 2.22
},
{
"text": "It's a relatively small maze,\nbut let's look at what happens.",
"start": 2929.19,
"duration": 2.85
},
{
"text": "If I use-- and I'll go ahead\nand turn off \"show explored\" so",
"start": 2932.04,
"duration": 3.63
},
{
"text": "we just see the solution.",
"start": 2935.67,
"duration": 2.72
},
{
"text": "If I use BFS, breadth-first\nsearch, to solve maze3.txt,",
"start": 2938.39,
"duration": 6.2
},
{
"text": "well, then we find a solution.",
"start": 2944.59,
"duration": 1.83
},
{
"text": "And if I open up the maze, here's\nthe solution that we found.",
"start": 2946.42,
"duration": 3.11
},
{
"text": "It is the optimal one.",
"start": 2949.53,
"duration": 1.08
},
{
"text": "With just four steps, we can\nget from the initial state",
"start": 2950.61,
"duration": 3.09
},
{
"text": "to what the goal happens to be.",
"start": 2953.7,
"duration": 3.39
},
{
"text": "But what happens if we try to use,\ndepth-first search, or DFS, instead?",
"start": 2957.09,
"duration": 4.8
},
{
"text": "Well, again, I'll go back up to my queue\nfrontier, where queue frontier means",
"start": 2961.89,
"duration": 4.65
},
{
"text": "that we're using breadth-first search.",
"start": 2966.54,
"duration": 2.31
},
{
"text": "And I'll change it to a stack\nfrontier, which means that now we'll",
"start": 2968.85,
"duration": 3.27
},
{
"text": "be using depth-first search.",
"start": 2972.12,
"duration": 2.73
},
{
"text": "I'll rerun python maze.py.",
"start": 2974.85,
"duration": 3.06
},
{
"text": "And now you'll see that\nwe find a solution,",
"start": 2977.91,
"duration": 2.58
},
{
"text": "but it is not the optimal solution.",
"start": 2980.49,
"duration": 2.49
},
{
"text": "This, instead, is what\nour algorithm finds.",
"start": 2982.98,
"duration": 2.66
},
{
"text": "And maybe depth-first search\nwould have found this solution.",
"start": 2985.64,
"duration": 2.5
},
{
"text": "It's possible, but it's not\nguaranteed. If we just",
"start": 2988.14,
"duration": 3.27
},
{
"text": "happen to be unlucky, if we choose\nthis state instead of that state,",
"start": 2991.41,
"duration": 3.87
},
{
"text": "then depth-first search might\nfind a longer route to get",
"start": 2995.28,
"duration": 3.0
},
{
"text": "from the initial state to the goal.",
"start": 2998.28,
"duration": 2.98
},
{
"text": "So we do see some trade-offs here\nwhere depth-first search might not",
"start": 3001.26,
"duration": 3.02
},
{
"text": "find the optimal solution.",
"start": 3004.28,
"duration": 1.96
},
{
"text": "So at that point, it seems like\nbreadth-first search is pretty good.",
"start": 3006.24,
"duration": 2.84
},
{
"text": "Is that the best we can do, where it's\ngoing to find us the optimal solution",
"start": 3009.08,
"duration": 3.84
},
{
"text": "and we don't have to worry\nabout situations where",
"start": 3012.92,
"duration": 2.7
},
{
"text": "we might end up finding a longer path to\nthe solution than what actually exists?",
"start": 3015.62,
"duration": 4.65
},
{
"text": "Well, when the goal is far away\nfrom the initial state--",
"start": 3020.27,
"duration": 2.88
},
{
"text": "and we might have to take lots of steps\nin order to get from the initial state",
"start": 3023.15,
"duration": 3.63
},
{
"text": "to the goal--",
"start": 3026.78,
"duration": 0.84
},
{
"text": "what ended up happening is that\nthis algorithm, BFS, ended up",
"start": 3027.62,
"duration": 3.6
},
{
"text": "exploring basically the entire graph,\nhaving to go through the entire maze",
"start": 3031.22,
"duration": 4.26
},
{
"text": "in order to find its way from the\ninitial state to the goal state.",
"start": 3035.48,
"duration": 4.32
},
{
"text": "What we'd ultimately\nlike is for our algorithm",
"start": 3039.8,
"duration": 2.16
},
{
"text": "to be a little bit more intelligent.",
"start": 3041.96,
"duration": 2.36
},
{
"text": "And now what would it\nmean for our algorithm",
"start": 3044.32,
"duration": 1.99
},
{
"text": "to be a little bit more\nintelligent, in this case?",
"start": 3046.31,
"duration": 3.51
},
{
"text": "Well, let's look back to where\nbreadth-first search might",
"start": 3049.82,
"duration": 2.67
},
{
"text": "have been able to make\na different decision",
"start": 3052.49,
"duration": 1.8
},
{
"text": "and consider human intuition\nin this process, as well.",
"start": 3054.29,
"duration": 3.28
},
{
"text": "Like, what might a human do when solving\nthis maze that is different than what",
"start": 3057.57,
"duration": 4.07
},
{
"text": "BFS ultimately chose to do?",
"start": 3061.64,
"duration": 2.85
},
{
"text": "Well, the very first\ndecision point that BFS made",
"start": 3064.49,
"duration": 3.12
},
{
"text": "was right here, when it\nmade five steps and ended up",
"start": 3067.61,
"duration": 3.81
},
{
"text": "in a position where it\nhad a fork in the road.",
"start": 3071.42,
"duration": 1.92
},
{
"text": "It could either go left\nor it could go right.",
"start": 3073.34,
"duration": 1.87
},
{
"text": "In these initial couple of\nsteps, there was no choice.",
"start": 3075.21,
"duration": 2.25
},
{
"text": "There was only one action that could\nbe taken from each of those states.",
"start": 3077.46,
"duration": 3.33
},
{
"text": "And so the search algorithm\ndid the only thing",
"start": 3080.79,
"duration": 2.24
},
{
"text": "that any search\nalgorithm could do, which",
"start": 3083.03,
"duration": 1.98
},
{
"text": "is keep following one\nstate after the next.",
"start": 3085.01,
"duration": 3.39
},
{
"text": "But this decision point is where\nthings get a little bit interesting.",
"start": 3088.4,
"duration": 3.27
},
{
"text": "Depth-first search, that very first\nsearch algorithm we looked at,",
"start": 3091.67,
"duration": 3.18
},
{
"text": "chose to say, let's pick one\npath and exhaust that path,",
"start": 3094.85,
"duration": 3.9
},
{
"text": "see if anything that way has\nthe goal, and if not, then let's",
"start": 3098.75,
"duration": 3.81
},
{
"text": "try the other way.",
"start": 3102.56,
"duration": 1.24
},
{
"text": "Breadth-first search took the\nalternative approach of saying,",
"start": 3103.8,
"duration": 2.78
},
{
"text": "you know what?",
"start": 3106.58,
"duration": 0.63
},
{
"text": "Let's explore things that are shallow,\nclose to us first, look left and right,",
"start": 3107.21,
"duration": 4.3
},
{
"text": "then back left and back\nright, so on and so forth,",
"start": 3111.51,
"duration": 2.45
},
{
"text": "alternating between our options in\nthe hopes of finding something nearby.",
"start": 3113.96,
"duration": 4.68
},
{
"text": "But ultimately, what might a human do\nif confronted with a situation like this",
"start": 3118.64,
"duration": 3.75
},
{
"text": "of go left or go right?",
"start": 3122.39,
"duration": 1.86
},
{
"text": "Well, a human might visually\nsee that, all right,",
"start": 3124.25,
"duration": 2.76
},
{
"text": "I'm trying to get to state B, which\nis way up there, and going right just",
"start": 3127.01,
"duration": 4.35
},
{
"text": "feels like it's closer to the goal.",
"start": 3131.36,
"duration": 2.0
},
{
"text": "Like, it feels like\ngoing right should be",
"start": 3133.36,
"duration": 1.78
},
{
"text": "better than going left\nbecause I'm making progress",
"start": 3135.14,
"duration": 2.73
},
{
"text": "towards getting to that goal.",
"start": 3137.87,
"duration": 1.51
},
{
"text": "Now, of course, there are a couple\nof assumptions that I'm making here.",
"start": 3139.38,
"duration": 2.96
},
{
"text": "I'm making the assumption\nthat we can represent",
"start": 3142.34,
"duration": 2.88
},
{
"text": "this grid as, like, a\ntwo-dimensional grid,",
"start": 3145.22,
"duration": 1.86
},
{
"text": "where I know the\ncoordinates of everything.",
"start": 3147.08,
"duration": 1.8
},
{
"text": "I know that A is in coordinate 0,0,\nand B is in some other coordinate pair.",
"start": 3148.88,
"duration": 5.06
},
{
"text": "And I know what coordinate I'm at now,\nso I can calculate that, yeah, going",
"start": 3153.94,
"duration": 3.13
},
{
"text": "this way, that is closer to the goal.",
"start": 3157.07,
"duration": 2.35
},
{
"text": "And that might be a reasonable\nassumption for some types of search",
"start": 3159.42,
"duration": 2.75
},
{
"text": "problems but maybe not in others.",
"start": 3162.17,
"duration": 1.89
},
{
"text": "But for now, we'll go\nahead and assume that--",
"start": 3164.06,
"duration": 2.61
},
{
"text": "that I know my current coordinate\npair and I know the x,y coordinate",
"start": 3166.67,
"duration": 4.92
},
{
"text": "of the goal that I'm trying to get to.",
"start": 3171.59,
"duration": 2.09
},
{
"text": "And in this situation,\nI'd like an algorithm that",
"start": 3173.68,
"duration": 2.86
},
{
"text": "is a little bit more\nintelligent and somehow knows",
"start": 3176.54,
"duration": 2.55
},
{
"text": "that I should be making\nprogress towards the goal,",
"start": 3179.09,
"duration": 3.09
},
{
"text": "and this is probably the way\nto do that because, in a maze,",
"start": 3182.18,
"duration": 3.54
},
{
"text": "moving in the coordinate\ndirection of the goal",
"start": 3185.72,
"duration": 2.76
},
{
"text": "is usually, though not\nalways, a good thing.",
"start": 3188.48,
"duration": 3.4
},
{
"text": "And so here we draw a distinction\nbetween two different types of search",
"start": 3191.88,
"duration": 2.96
},
{
"text": "algorithms-- uninformed\nsearch and informed search.",
"start": 3194.84,
"duration": 4.2
},
{
"text": "Uninformed search algorithms\nare algorithms like DFS and BFS,",
"start": 3199.04,
"duration": 4.29
},
{
"text": "the two algorithms\nthat we just looked at,",
"start": 3203.33,
"duration": 1.87
},
{
"text": "which are search strategies that don't\nuse any problem specific knowledge",
"start": 3205.2,
"duration": 4.34
},
{
"text": "to be able to solve the problem.",
"start": 3209.54,
"duration": 1.86
},
{
"text": "DFS and BFS didn't really\ncare about the structure",
"start": 3211.4,
"duration": 3.21
},
{
"text": "of the maze or anything about\nthe way that a maze is in order",
"start": 3214.61,
"duration": 3.66
},
{
"text": "to solve the problem.",
"start": 3218.27,
"duration": 1.06
},
{
"text": "They just look at the actions available\nand choose from those actions,",
"start": 3219.33,
"duration": 3.16
},
{
"text": "and it doesn't matter whether\nit's a maze or some other problem.",
"start": 3222.49,
"duration": 2.8
},
{
"text": "The solution, or the way that\nit tries to solve the problem,",
"start": 3225.29,
"duration": 2.73
},
{
"text": "is really fundamentally\ngoing to be the same.",
"start": 3228.02,
"duration": 3.3
},
{
"text": "What we're going to\ntake a look at now is",
"start": 3231.32,
"duration": 1.71
},
{
"text": "an improvement upon uninformed search.",
"start": 3233.03,
"duration": 2.34
},
{
"text": "We're going to take a\nlook at informed search.",
"start": 3235.37,
"duration": 2.46
},
{
"text": "Informed search algorithms are going\nto be search strategies that",
"start": 3237.83,
"duration": 2.61
},
{
"text": "use knowledge specific to the problem\nto be able to better find a solution.",
"start": 3240.44,
"duration": 5.36
},
{
"text": "And in the case of a maze,\nthis problem specific knowledge",
"start": 3245.8,
"duration": 3.47
},
{
"text": "is something like, if\nI'm in a square",
"start": 3249.27,
"duration": 2.0
},
{
"text": "that is geographically\ncloser to the goal, that",
"start": 3251.27,
"duration": 3.15
},
{
"text": "is better than being in a square\nthat is geographically further away.",
"start": 3254.42,
"duration": 5.28
},
{
"text": "And this is something we can only\nknow by thinking about this problem",
"start": 3259.7,
"duration": 3.63
},
{
"text": "and reasoning about what knowledge\nmight be helpful for our AI agent",
"start": 3263.33,
"duration": 4.49
},
{
"text": "to know a little something about.",
"start": 3267.82,
"duration": 2.24
},
{
"text": "There are a number of different\ntypes of informed search.",
"start": 3270.06,
"duration": 2.38
},
{
"text": "Specifically, first, we're going\nto look at a particular type",
"start": 3272.44,
"duration": 2.7
},
{
"text": "of search algorithm called\ngreedy best-first search.",
"start": 3275.14,
"duration": 4.41
},
{
"text": "Greedy Best-First Search,\noften abbreviated GBFS,",
"start": 3279.55,
"duration": 3.15
},
{
"text": "is a search algorithm that, instead\nof expanding the deepest node,",
"start": 3282.7,
"duration": 3.27
},
{
"text": "like DFS, or the\nshallowest node, like BFS,",
"start": 3285.97,
"duration": 3.69
},
{
"text": "this algorithm is always\ngoing to expand the node",
"start": 3289.66,
"duration": 2.73
},
{
"text": "that it thinks is closest to the goal.",
"start": 3292.39,
"duration": 3.6
},
{
"text": "Now, the search algorithm isn't going to\nknow for sure whether it is the closest",
"start": 3295.99,
"duration": 3.66
},
{
"text": "thing to the goal, because if we\nknew what was closest to the goal",
"start": 3299.65,
"duration": 3.0
},
{
"text": "all the time, then we would\nalready have a solution.",
"start": 3302.65,
"duration": 2.45
},
{
"text": "Like, with knowledge of\nwhat is close to the goal,",
"start": 3305.1,
"duration": 2.05
},
{
"text": "we could just follow those steps in\norder to get from the initial position",
"start": 3307.15,
"duration": 3.42
},
{
"text": "to the solution.",
"start": 3310.57,
"duration": 1.3
},
{
"text": "But if we don't know the solution--\nmeaning we don't know exactly",
"start": 3311.87,
"duration": 2.78
},
{
"text": "what's closest to the goal--",
"start": 3314.65,
"duration": 1.8
},
{
"text": "instead, we can use\nan estimate of what's",
"start": 3316.45,
"duration": 2.67
},
{
"text": "closest to the goal, otherwise\nknown as a heuristic--",
"start": 3319.12,
"duration": 3.06
},
{
"text": "just some way of estimating whether\nor not we're close to the goal.",
"start": 3322.18,
"duration": 3.73
},
{
"text": "And we'll do so using a heuristic\nfunction, conventionally called h(n),",
"start": 3325.91,
"duration": 3.89
},
{
"text": "that takes a state as input and\nreturns our estimate of how close we",
"start": 3329.8,
"duration": 4.74
},
{
"text": "are to the goal.",
"start": 3334.54,
"duration": 2.32
},
{
"text": "So what might this\nheuristic function actually",
"start": 3336.86,
"duration": 2.18
},
{
"text": "look like in the case of\na maze-solving algorithm?",
"start": 3339.04,
"duration": 3.06
},
{
"text": "Where we're trying to solve a maze,\nwhat does a heuristic look like?",
"start": 3342.1,
"duration": 3.39
},
{
"text": "Well, the heuristic needs to answer\na question, like between these two",
"start": 3345.49,
"duration": 3.42
},
{
"text": "cells, C and D, which one is better?",
"start": 3348.91,
"duration": 2.86
},
{
"text": "Which one would I rather be in if I'm\ntrying to find my way to the goal?",
"start": 3351.77,
"duration": 3.87
},
{
"text": "Well, any human could probably look\nat this and tell you, you know what?",
"start": 3355.64,
"duration": 3.0
},
{
"text": "D looks like it's better.",
"start": 3358.64,
"duration": 1.64
},
{
"text": "Even if the maze is convoluted and\nyou haven't thought about all the walls,",
"start": 3360.28,
"duration": 3.27
},
{
"text": "D is probably better.",
"start": 3363.55,
"duration": 1.86
},
{
"text": "And why is D better?",
"start": 3365.41,
"duration": 1.3
},
{
"text": "Well, because if you ignore the\nwall-- let's just pretend the walls",
"start": 3366.71,
"duration": 3.02
},
{
"text": "don't exist for a moment and\nrelax the problem, so to speak--",
"start": 3369.73,
"duration": 4.56
},
{
"text": "D, just in terms of coordinate\npairs, is closer to this goal.",
"start": 3374.29,
"duration": 4.38
},
{
"text": "It's fewer steps that I would\nneed to take to get to the goal,",
"start": 3378.67,
"duration": 3.03
},
{
"text": "as compared to C, even\nif you ignore the walls.",
"start": 3381.7,
"duration": 2.46
},
{
"text": "If you just know the x,y coordinate of\nC, and the x,y coordinate of the goal,",
"start": 3384.16,
"duration": 5.0
},
{
"text": "and likewise, you know\nthe x,y coordinate of D,",
"start": 3389.16,
"duration": 2.29
},
{
"text": "you can calculate that D, just\ngeographically, ignoring the walls,",
"start": 3391.45,
"duration": 4.32
},
{
"text": "looks like it's better.",
"start": 3395.77,
"duration": 1.34
},
{
"text": "And so this is the heuristic\nfunction that we're going to use,",
"start": 3397.11,
"duration": 2.59
},
{
"text": "and it's something called the\nManhattan distance, one specific type",
"start": 3399.7,
"duration": 3.18
},
{
"text": "of heuristic, where the heuristic\nis, how many squares vertically",
"start": 3402.88,
"duration": 3.81
},
{
"text": "and horizontally--\nso not",
"start": 3406.69,
"duration": 2.58
},
{
"text": "allowing myself to go diagonally, just\neither up or right or left or down.",
"start": 3409.27,
"duration": 4.05
},
{
"text": "How many steps do I need to take to get\nfrom each of these cells to the goal?",
"start": 3413.32,
"duration": 4.71
},
{
"text": "Well, as it turns out, D is much closer.",
"start": 3418.03,
"duration": 2.88
},
{
"text": "There are fewer steps.",
"start": 3420.91,
"duration": 0.96
},
{
"text": "It only needs to take six steps\nin order to get to that goal.",
"start": 3421.87,
"duration": 3.87
},
{
"text": "Again here ignoring the walls.",
"start": 3425.74,
"duration": 1.93
},
{
"text": "We've relaxed the problem a little bit.",
"start": 3427.67,
"duration": 2.12
},
{
"text": "We're just concerned\nwith, if you do the math,",
"start": 3429.79,
"duration": 2.72
},
{
"text": "subtract the x values\nfrom each other and the y",
"start": 3432.51,
"duration": 1.96
},
{
"text": "values from each other, what is our\nestimate of how far we are away?",
"start": 3434.47,
"duration": 3.54
},
{
"text": "We can estimate that D is\ncloser to the goal than C is.",
"start": 3438.01,
"duration": 4.96
},
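The Manhattan distance described above is a one-line function: ignore the walls and count vertical plus horizontal steps between two coordinate pairs.

```python
# Manhattan distance heuristic: estimated steps between two cells,
# moving only up/down/left/right and pretending the walls don't exist.

def manhattan_distance(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)
```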
{
"text": "And so now we have an approach.",
"start": 3442.97,
"duration": 1.83
},
{
"text": "We have a way of picking which\nnode to remove from the frontier.",
"start": 3444.8,
"duration": 3.09
},
{
"text": "And at each stage in\nour algorithm, we're",
"start": 3447.89,
"duration": 1.98
},
{
"text": "going to remove a node\nfrom the frontier.",
"start": 3449.87,
"duration": 1.71
},
{
"text": "We're going to explore the\nnode, if it has the smallest",
"start": 3451.58,
"duration": 3.24
},
{
"text": "value for this heuristic\nfunction, if it has the smallest",
"start": 3454.82,
"duration": 3.15
},
{
"text": "Manhattan distance to the goal.",
"start": 3457.97,
"duration": 2.77
},
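Greedy best-first search is, again, just a different frontier: instead of a stack or a queue, remove the node whose heuristic value h(n) is smallest. One way to sketch that is with a heap; the class name `GreedyFrontier` and the use of `heapq` are my own choices here, not the lecture's code.

```python
import heapq
import itertools

class GreedyFrontier:
    """Frontier that always removes the node with the smallest h(n)."""
    def __init__(self, h):
        self.h = h  # heuristic: state -> estimated distance to the goal
        self.heap = []
        self.counter = itertools.count()  # breaks ties in insertion order

    def add(self, node):
        heapq.heappush(self.heap, (self.h(node.state), next(self.counter), node))

    def empty(self):
        return len(self.heap) == 0

    def remove(self):
        # Pop the node our heuristic estimates is closest to the goal.
        return heapq.heappop(self.heap)[2]
```

With a Manhattan-distance h, a node estimated 11 steps from the goal comes out of this frontier before a node estimated 13 steps away, which is exactly the decision described below.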
{
"text": "And so what would this\nactually look like?",
"start": 3460.74,
"duration": 1.77
},
{
"text": "Well, let me first label\nthis graph, label this maze,",
"start": 3462.51,
"duration": 2.81
},
{
"text": "with a number representing\nthe value of this heuristic",
"start": 3465.32,
"duration": 2.73
},
{
"text": "function, the value of the Manhattan\ndistance from any of these cells.",
"start": 3468.05,
"duration": 3.81
},
{
"text": "So from this cell, for example,\nwe're one away from the goal.",
"start": 3471.86,
"duration": 3.21
},
{
"text": "From this cell, we're\ntwo away from the goal.",
"start": 3475.07,
"duration": 1.85
},
{
"text": "Three away, four away.",
"start": 3476.92,
"duration": 1.48
},
{
"text": "Here we're five away, because we have\nto go one to the right and then four up.",
"start": 3478.4,
"duration": 3.81
},
{
"text": "From somewhere like here,\nthe Manhattan distance is 2.",
"start": 3482.21,
"duration": 3.64
},
{
"text": "We're only two squares\naway from the goal,",
"start": 3485.85,
"duration": 2.48
},
{
"text": "geographically, even\nthough in practice we're",
"start": 3488.33,
"duration": 2.4
},
{
"text": "going to have to take a longer\npath, but we don't know that yet.",
"start": 3490.73,
"duration": 3.18
},
{
"text": "The heuristic is just\nsome easy way to estimate",
"start": 3493.91,
"duration": 2.85
},
{
"text": "how far we are away from the goal.",
"start": 3496.76,
"duration": 1.62
},
{
"text": "And maybe our heuristic\nis overly optimistic.",
"start": 3498.38,
"duration": 3.0
},
{
"text": "It thinks that, yeah,\nwe're only two steps away,",
"start": 3501.38,
"duration": 2.28
},
{
"text": "when in practice, when you consider\nthe walls, it might be more steps.",
"start": 3503.66,
"duration": 3.79
},
{
"text": "So the important thing here is that\nthe heuristic isn't a guarantee",
"start": 3507.45,
"duration": 3.95
},
{
"text": "of how many steps it's going to take.",
"start": 3511.4,
"duration": 1.89
},
{
"text": "It is estimating.",
"start": 3513.29,
"duration": 1.62
},
{
"text": "It's an attempt at\ntrying to approximate.",
"start": 3514.91,
"duration": 2.01
},
{
"text": "And it does seem generally the case\nthat the squares that look closer",
"start": 3516.92,
"duration": 3.84
},
{
"text": "to the goal have smaller values\nfor the heuristic function",
"start": 3520.76,
"duration": 3.15
},
{
"text": "than squares that are further away.",
"start": 3523.91,
"duration": 3.05
},
{
"text": "So now, using greedy best-first search,\nwhat might this algorithm actually do?",
"start": 3526.96,
"duration": 5.15
},
{
"text": "Well, again, for these first five\nsteps, there's not much of a choice.",
"start": 3532.11,
"duration": 3.16
},
{
"text": "We start at this initial state,\nA. And we say, all right.",
"start": 3535.27,
"duration": 2.34
},
{
"text": "We have to explore these five states.",
"start": 3537.61,
"duration": 2.69
},
{
"text": "But now we have a decision point.",
"start": 3540.3,
"duration": 1.59
},
{
"text": "Now we have a choice between\ngoing left and going right.",
"start": 3541.89,
"duration": 2.7
},
{
"text": "And before, when DFS and BFS would\njust pick arbitrarily because it just",
"start": 3544.59,
"duration": 4.17
},
{
"text": "depends on the order you throw\nthese two nodes into the frontier--",
"start": 3548.76,
"duration": 2.85
},
{
"text": "and we didn't specify what order you put\nthem into the frontier, only the order",
"start": 3551.61,
"duration": 3.63
},
{
"text": "you take them out.",
"start": 3555.24,
"duration": 2.01
},
{
"text": "Here we can look at 13 and\n11 and say that, all right,",
"start": 3557.25,
"duration": 3.33
},
{
"text": "this square is a distance\nof 11 away from the goal,",
"start": 3560.58,
"duration": 4.05
},
{
"text": "according to our heuristic,\naccording to our estimate.",
"start": 3564.63,
"duration": 2.67
},
{
"text": "And this one we estimate to\nbe 13 away from the goal.",
"start": 3567.3,
"duration": 4.26
},
{
"text": "So between those two options,\nbetween these two choices,",
"start": 3571.56,
"duration": 3.09
},
{
"text": "I'd rather have the 11.",
"start": 3574.65,
"duration": 1.5
},
{
"text": "I'd rather be 11 steps away from\nthe goal, so I'll go to the right.",
"start": 3576.15,
"duration": 4.09
},
{
"text": "We're able to make an informed decision\nbecause we know a little something more",
"start": 3580.24,
"duration": 4.1
},
{
"text": "about this problem.",
"start": 3584.34,
"duration": 1.33
},
{
"text": "So then we keep following 10, 9, 8--",
"start": 3585.67,
"duration": 2.15
},
{
"text": "between the two sevens.",
"start": 3587.82,
"duration": 1.65
},
{
"text": "We don't really have much of\na way to know between those.",
"start": 3589.47,
"duration": 2.38
},
{
"text": "So then we do just have to\nmake an arbitrary choice.",
"start": 3591.85,
"duration": 2.03
},
{
"text": "And you know what?",
"start": 3593.88,
"duration": 0.75
},
{
"text": "Maybe we choose wrong.",
"start": 3594.63,
"duration": 0.93
},
{
"text": "But that's OK because now we can still\nsay, all right, let's try this seven.",
"start": 3595.56,
"duration": 4.55
},
{
"text": "We say seven, six.",
"start": 3600.11,
"duration": 1.71
},
{
"text": "We have to make this choice\neven though it increases",
"start": 3601.82,
"duration": 2.17
},
{
"text": "the value of the heuristic function.",
"start": 3603.99,
"duration": 1.65
},
{
"text": "But now we have another decision\npoint between six and eight.",
"start": 3605.64,
"duration": 3.33
},
{
"text": "And between those two--",
"start": 3608.97,
"duration": 1.32
},
{
"text": "and really, we're also considering\nthe 13, but that's much higher.",
"start": 3610.29,
"duration": 3.09
},
{
"text": "Between six, eight,\nand 13, well, the six",
"start": 3613.38,
"duration": 2.82
},
{
"text": "is the smallest value, so\nwe'd rather take the six.",
"start": 3616.2,
"duration": 2.7
},
{
"text": "We're able to make an informed decision\nthat going this way to the right",
"start": 3618.9,
"duration": 3.54
},
{
"text": "is probably better than going that way.",
"start": 3622.44,
"duration": 2.4
},
{
"text": "So we turn this way.",
"start": 3624.84,
"duration": 0.84
},
{
"text": "We go to five.",
"start": 3625.68,
"duration": 1.17
},
{
"text": "And now we find a decision\npoint where we'll actually",
"start": 3626.85,
"duration": 2.34
},
{
"text": "make a decision that we\nmight not want to make,",
"start": 3629.19,
"duration": 2.01
},
{
"text": "but there's unfortunately not\ntoo much of a way around this.",
"start": 3631.2,
"duration": 3.12
},
{
"text": "We see four and six.",
"start": 3634.32,
"duration": 1.35
},
{
"text": "Four looks closer to the goal, right?",
"start": 3635.67,
"duration": 1.95
},
{
"text": "It's going up, and the\ngoal is further up.",
"start": 3637.62,
"duration": 2.55
},
{
"text": "So we end up taking that route, which\nultimately leads us to a dead end.",
"start": 3640.17,
"duration": 3.54
},
{
"text": "But that's OK because we can still\nsay, all right, now let's try the six,",
"start": 3643.71,
"duration": 3.25
},
{
"text": "and now follow this route that will\nultimately lead us to the goal.",
"start": 3646.96,
"duration": 4.42
},
{
"text": "And so this now is how greedy\nbest-first search might",
"start": 3651.38,
"duration": 3.13
},
{
"text": "try to approach this\nproblem, by saying whenever",
"start": 3654.51,
"duration": 2.25
},
{
"text": "we have a decision between multiple\nnodes that we could explore,",
"start": 3656.76,
"duration": 3.3
},
{
"text": "let's explore the node that\nhas the smallest value of h(n),",
"start": 3660.06,
"duration": 4.23
},
{
"text": "this heuristic function that is\nestimating how far I have to go.",
"start": 3664.29,
"duration": 4.92
},
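The decision rule just described, always expand the frontier node with the smallest h(n), can be sketched in Python. This is a minimal sketch, not the lecture's own code; the `neighbors` callback and tuple-based states are assumptions about how the maze is represented.

```python
import heapq

def greedy_best_first_search(start, goal, neighbors, h):
    """Expand the frontier node with the smallest heuristic value h(n).

    `neighbors(state)` should return the states reachable in one step;
    that callback, and tuple-based states, are assumptions about the
    problem representation.
    """
    frontier = [(h(start), start)]        # priority queue ordered by h(n)
    came_from = {start: None}             # also serves as the explored set
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            # Walk parent pointers back to the start to rebuild the path.
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (h(nxt), nxt))
    return None                           # no path to the goal exists
```

Because the ordering uses only h(n), the path returned is not guaranteed to be the shortest one, exactly as the maze example shows.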
{
"text": "And it just so happens\nthat, in this case,",
"start": 3669.21,
"duration": 1.8
},
{
"text": "we end up doing better, in terms of the\nnumber of states we needed to explore,",
"start": 3671.01,
"duration": 4.0
},
{
"text": "than BFS needed to.",
"start": 3675.01,
"duration": 1.62
},
{
"text": "BFS explored all of this\nsection and all of that section.",
"start": 3676.63,
"duration": 3.32
},
{
"text": "But we were able to eliminate\nthat by taking advantage",
"start": 3679.95,
"duration": 2.67
},
{
"text": "of this heuristic, this\nknowledge about how close we",
"start": 3682.62,
"duration": 3.42
},
{
"text": "are to the goal or some\nestimate of that idea.",
"start": 3686.04,
"duration": 4.29
},
{
"text": "So this seems much better.",
"start": 3690.33,
"duration": 1.12
},
{
"text": "So wouldn't we always\nprefer an algorithm",
"start": 3691.45,
"duration": 1.7
},
{
"text": "like this over an algorithm\nlike breadth-first search?",
"start": 3693.15,
"duration": 3.72
},
{
"text": "Well, maybe.",
"start": 3696.87,
"duration": 0.98
},
{
"text": "One thing to take into\nconsideration is that we",
"start": 3697.85,
"duration": 1.96
},
{
"text": "need to come up with a good heuristic.",
"start": 3699.81,
"duration": 2.22
},
{
"text": "How good the heuristic is is going\nto affect how good this algorithm is.",
"start": 3702.03,
"duration": 3.69
},
{
"text": "And coming up with a good heuristic\ncan oftentimes be challenging.",
"start": 3705.72,
"duration": 4.02
},
{
"text": "But the other thing\nto consider is to ask",
"start": 3709.74,
"duration": 1.71
},
{
"text": "the question, just as we did\nwith the prior two algorithms,",
"start": 3711.45,
"duration": 3.18
},
{
"text": "is this algorithm optimal?",
"start": 3714.63,
"duration": 1.95
},
{
"text": "Will it always find the shortest path\nfrom the initial state to the goal?",
"start": 3716.58,
"duration": 5.7
},
{
"text": "And to answer that question, let's take\na look at this example for a moment.",
"start": 3722.28,
"duration": 3.9
},
{
"text": "Take a look at this example.",
"start": 3726.18,
"duration": 1.39
},
{
"text": "Again, we're trying to get\nfrom A to B, and again, I've",
"start": 3727.57,
"duration": 2.6
},
{
"text": "labeled each of the cells\nwith their Manhattan distance",
"start": 3730.17,
"duration": 3.09
},
{
"text": "from the goal, the number of\nsquares up and to the right",
"start": 3733.26,
"duration": 3.06
},
{
"text": "you would need to travel in order\nto get from that square to the goal.",
"start": 3736.32,
"duration": 4.18
},
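The Manhattan distance just described can be computed directly. A minimal sketch; the (x, y) tuple convention for cells is an assumption.

```python
def manhattan_distance(cell, goal):
    # Number of squares you would have to travel vertically plus
    # horizontally, ignoring walls -- so it can underestimate the
    # true cost, but never overestimate it.
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)
```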
{
"text": "And let's think about, would\ngreedy best-first search",
"start": 3740.5,
"duration": 2.9
},
{
"text": "that always picks the smallest number\nend up finding the optimal solution?",
"start": 3743.4,
"duration": 6.0
},
{
"text": "What is the shortest solution,\nand would this algorithm find it?",
"start": 3749.4,
"duration": 4.12
},
{
"text": "And the important thing to realize is\nthat right here is the decision point.",
"start": 3753.52,
"duration": 4.67
},
{
"text": "We're estimated to be\n12 away from the goal.",
"start": 3758.19,
"duration": 2.52
},
{
"text": "And we have two choices.",
"start": 3760.71,
"duration": 1.6
},
{
"text": "We can go to the left, which we\nestimate to be 13 away from the goal,",
"start": 3762.31,
"duration": 3.38
},
{
"text": "or we can go up, where we estimate\nit to be 11 away from the goal.",
"start": 3765.69,
"duration": 4.02
},
{
"text": "And between those two, greedy\nbest-first search is going to say,",
"start": 3769.71,
"duration": 3.93
},
{
"text": "the 11 looks better than the 13.",
"start": 3773.64,
"duration": 3.36
},
{
"text": "And in doing so, greedy\nbest-first search",
"start": 3777.0,
"duration": 2.54
},
{
"text": "will end up finding\nthis path to the goal.",
"start": 3779.54,
"duration": 3.28
},
{
"text": "But it turns out this\npath is not optimal.",
"start": 3782.82,
"duration": 2.16
},
{
"text": "There is a way to get to\nthe goal using fewer steps.",
"start": 3784.98,
"duration": 2.49
},
{
"text": "And it's actually this way, this way\nthat ultimately involved fewer steps,",
"start": 3787.47,
"duration": 4.95
},
{
"text": "even though it meant at this\nmoment choosing the worst",
"start": 3792.42,
"duration": 3.48
},
{
"text": "option between the two-- or what we\nestimated to be the worst option, based",
"start": 3795.9,
"duration": 3.87
},
{
"text": "on the heuristics.",
"start": 3799.77,
"duration": 1.38
},
{
"text": "And so this is what we mean\nby this is a greedy algorithm.",
"start": 3801.15,
"duration": 2.76
},
{
"text": "It's making the best decision, locally.",
"start": 3803.91,
"duration": 2.55
},
{
"text": "At this decision point,\nit looks like it's better",
"start": 3806.46,
"duration": 2.34
},
{
"text": "to go here than it is to go to the 13.",
"start": 3808.8,
"duration": 2.65
},
{
"text": "But in the big picture, it's\nnot necessarily optimal,",
"start": 3811.45,
"duration": 2.59
},
{
"text": "that it might find a solution\nwhen in actuality there",
"start": 3814.04,
"duration": 3.16
},
{
"text": "was a better solution available.",
"start": 3817.2,
"duration": 2.95
},
{
"text": "So we would like some way\nto solve this problem.",
"start": 3820.15,
"duration": 3.05
},
{
"text": "We like the idea of\nthis heuristic, of being",
"start": 3823.2,
"duration": 2.64
},
{
"text": "able to estimate the path, the\ndistance between us and the goal,",
"start": 3825.84,
"duration": 4.24
},
{
"text": "and that helps us to be able\nto make better decisions",
"start": 3830.08,
"duration": 2.21
},
{
"text": "and to eliminate having to search\nthrough entire parts of the state",
"start": 3832.29,
"duration": 3.6
},
{
"text": "space.",
"start": 3835.89,
"duration": 1.18
},
{
"text": "But we would like to modify the\nalgorithm so that we can achieve",
"start": 3837.07,
"duration": 3.09
},
{
"text": "optimality, so that it can be optimal.",
"start": 3840.16,
"duration": 2.49
},
{
"text": "And what is the way to do this?",
"start": 3842.65,
"duration": 1.41
},
{
"text": "What is the intuition here?",
"start": 3844.06,
"duration": 1.73
},
{
"text": "Well, let's take a look at this problem.",
"start": 3845.79,
"duration": 2.52
},
{
"text": "In this initial problem,\ngreedy best-first search",
"start": 3848.31,
"duration": 2.76
},
{
"text": "found this solution\nhere, this long path.",
"start": 3851.07,
"duration": 3.1
},
{
"text": "And the reason why it wasn't great is\nbecause, yes, the heuristic numbers",
"start": 3854.17,
"duration": 3.14
},
{
"text": "went down pretty low, but later on,\nthey started to build back up.",
"start": 3857.31,
"duration": 3.87
},
{
"text": "They built back 8, 9, 10, 11-- all\nthe way up to 12, in this case.",
"start": 3861.18,
"duration": 4.6
},
{
"text": "And so how might we go about\ntrying to improve this algorithm?",
"start": 3865.78,
"duration": 3.51
},
{
"text": "Well, one thing that we\nmight realize is that, if we",
"start": 3869.29,
"duration": 3.11
},
{
"text": "go all the way through this\nalgorithm, through this path,",
"start": 3872.4,
"duration": 2.77
},
{
"text": "and we end up going to the 12, and we've\nhad to take this many steps-- like,",
"start": 3875.17,
"duration": 4.17
},
{
"text": "who knows how many steps that\nis-- just to get to this 12,",
"start": 3879.34,
"duration": 3.3
},
{
"text": "we could have also, as an alternative,\ntaken much fewer steps, just six steps,",
"start": 3882.64,
"duration": 5.52
},
{
"text": "and ended up at this 13 here.",
"start": 3888.16,
"duration": 2.01
},
{
"text": "And yes, 13 is more than 12, so\nit looks like it's not as good,",
"start": 3890.17,
"duration": 3.51
},
{
"text": "but it required far fewer steps.",
"start": 3893.68,
"duration": 1.78
},
{
"text": "Right?",
"start": 3895.46,
"duration": 0.5
},
{
"text": "It only took six steps to get to\nthis 13 versus many more steps",
"start": 3895.96,
"duration": 3.57
},
{
"text": "to get to this 12.",
"start": 3899.53,
"duration": 1.47
},
{
"text": "And while greedy best-first search\nsays, oh, well, 12 is better than 13",
"start": 3901.0,
"duration": 3.18
},
{
"text": "so pick the 12, we might\nmore intelligently say,",
"start": 3904.18,
"duration": 3.67
},
{
"text": "I'd rather be somewhere\nthat heuristically",
"start": 3907.85,
"duration": 2.96
},
{
"text": "looks like it takes slightly longer\nif I can get there much more quickly.",
"start": 3910.81,
"duration": 5.22
},
{
"text": "And we're going to encode\nthat idea, this general idea,",
"start": 3916.03,
"duration": 2.91
},
{
"text": "into a more formal algorithm\nknown as A* search.",
"start": 3918.94,
"duration": 4.07
},
{
"text": "A* search is going\nto solve this problem by,",
"start": 3923.01,
"duration": 2.26
},
{
"text": "instead of just\nconsidering the heuristic,",
"start": 3925.27,
"duration": 2.7
},
{
"text": "also considering how long it took\nus to get to any particular state.",
"start": 3927.97,
"duration": 4.75
},
{
"text": "So the distinction is greedy\nbest-first search, if I am in a state",
"start": 3932.72,
"duration": 2.93
},
{
"text": "right now, the only\nthing I care about is",
"start": 3935.65,
"duration": 2.79
},
{
"text": "what is the estimated distance,\nthe heuristic value, between me",
"start": 3938.44,
"duration": 3.63
},
{
"text": "and the goal.",
"start": 3942.07,
"duration": 0.93
},
{
"text": "Whereas A* search will\ntake into consideration",
"start": 3943.0,
"duration": 2.64
},
{
"text": "two pieces of information.",
"start": 3945.64,
"duration": 1.23
},
{
"text": "It'll take into consideration, how\nfar do I estimate I am from the goal,",
"start": 3946.87,
"duration": 4.26
},
{
"text": "but also how far did I have to\ntravel in order to get here?",
"start": 3951.13,
"duration": 3.9
},
{
"text": "Because that is relevant, too.",
"start": 3955.03,
"duration": 2.46
},
{
"text": "So A* search works by\nexpanding the node with the lowest",
"start": 3957.49,
"duration": 3.27
},
{
"text": "value of g(n) plus h(n).",
"start": 3960.76,
"duration": 3.3
},
{
"text": "h(n) is that same heuristic that we were\ntalking about a moment ago that's going",
"start": 3964.06,
"duration": 3.75
},
{
"text": "to vary based on the problem, but\ng(n) is going to be the cost to reach",
"start": 3967.81,
"duration": 4.83
},
{
"text": "the node--",
"start": 3972.64,
"duration": 0.78
},
{
"text": "how many steps I had to take, in this\ncase, to get to my current position.",
"start": 3973.42,
"duration": 5.96
},
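The g(n) + h(n) scoring rule can be sketched by reordering the frontier on that sum instead of on h(n) alone. A minimal sketch, assuming unit step costs (as in the maze) and the same hypothetical `neighbors` callback and tuple-based states.

```python
import heapq

def a_star_search(start, goal, neighbors, h):
    """Expand the frontier node with the smallest g(n) + h(n),
    where g(n) is the number of steps taken to reach n."""
    frontier = [(h(start), 0, start)]     # entries are (g + h, g, state)
    came_from = {start: None}
    best_g = {start: 0}                   # cheapest known cost to each state
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if g > best_g[state]:
            continue                      # stale entry; a cheaper path was found
        if state == goal:
            # Walk parent pointers back to the start to rebuild the path.
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                came_from[nxt] = state
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None                           # no path to the goal exists
```

With an admissible, consistent heuristic this returns a shortest path, unlike the greedy ordering.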
{
"text": "So what does that search\nalgorithm look like in practice?",
"start": 3979.38,
"duration": 2.69
},
{
"text": "Well, let's take a look.",
"start": 3982.07,
"duration": 1.56
},
{
"text": "Again, we've got the same maze.",
"start": 3983.63,
"duration": 1.5
},
{
"text": "And again, I've labeled them\nwith their Manhattan distance.",
"start": 3985.13,
"duration": 2.88
},
{
"text": "This value is the h(n)\nvalue, the heuristic estimate",
"start": 3988.01,
"duration": 4.05
},
{
"text": "of how far each of these\nsquares is away from the goal.",
"start": 3992.06,
"duration": 4.18
},
{
"text": "But now, as we begin\nto explore states, we",
"start": 3996.24,
"duration": 2.13
},
{
"text": "care not just about this\nheuristic value but also",
"start": 3998.37,
"duration": 3.0
},
{
"text": "about g(n), the number of steps I\nhad to take in order to get there.",
"start": 4001.37,
"duration": 4.28
},
{
"text": "And I care about summing\nthose two numbers together.",
"start": 4005.65,
"duration": 2.41
},
{
"text": "So what does that look like?",
"start": 4008.06,
"duration": 1.17
},
{
"text": "On this very first step,\nI have taken one step.",
"start": 4009.23,
"duration": 3.62
},
{
"text": "And now I am estimated to be\n16 steps away from the goal.",
"start": 4012.85,
"duration": 3.28
},
{
"text": "So the total value here is 17.",
"start": 4016.13,
"duration": 3.06
},
{
"text": "Then I take one more step.",
"start": 4019.19,
"duration": 1.09
},
{
"text": "I've now taken two steps.",
"start": 4020.28,
"duration": 1.73
},
{
"text": "And I estimate myself to\nbe 15 away from the goal--",
"start": 4022.01,
"duration": 2.77
},
{
"text": "again, a total value of 17.",
"start": 4024.78,
"duration": 1.96
},
{
"text": "Now I've taken three steps.",
"start": 4026.74,
"duration": 1.49
},
{
"text": "And I'm estimated to be 14 away\nfrom the goal, so on and so forth.",
"start": 4028.23,
"duration": 3.21
},
{
"text": "Four steps, an estimate of 13.",
"start": 4031.44,
"duration": 2.31
},
{
"text": "Five steps, estimate of 12.",
"start": 4033.75,
"duration": 2.07
},
{
"text": "And now, here's a decision point.",
"start": 4035.82,
"duration": 2.16
},
{
"text": "I could either be six steps away\nfrom the goal with a heuristic of 13",
"start": 4037.98,
"duration": 4.74
},
{
"text": "for a total of 19, or I\ncould be six steps away",
"start": 4042.72,
"duration": 3.72
},
{
"text": "from the goal with a heuristic of 11\nwith an estimate of 17 for the total.",
"start": 4046.44,
"duration": 5.2
},
{
"text": "So between 19 and 17,\nI'd rather take the 17--",
"start": 4051.64,
"duration": 3.59
},
{
"text": "the 6 plus 11.",
"start": 4055.23,
"duration": 1.94
},
{
"text": "So so far, no different\nthan what we saw before.",
"start": 4057.17,
"duration": 2.0
},
{
"text": "We're still taking this option\nbecause it appears to be better.",
"start": 4059.17,
"duration": 2.99
},
{
"text": "And I keep taking this option\nbecause it appears to be better.",
"start": 4062.16,
"duration": 3.0
},
{
"text": "But it's right about here that\nthings get a little bit different.",
"start": 4065.16,
"duration": 4.27
},
{
"text": "Now I could be 15 steps away from the\ngoal with an estimated distance of 6.",
"start": 4069.43,
"duration": 6.2
},
{
"text": "So 15 plus 6, total value of 21.",
"start": 4075.63,
"duration": 3.12
},
{
"text": "Alternatively, I could be six\nsteps away from the goal--",
"start": 4078.75,
"duration": 3.12
},
{
"text": "because this was five steps\naway, so this is six steps away--",
"start": 4081.87,
"duration": 2.82
},
{
"text": "with a total value of 13 as my estimate.",
"start": 4084.69,
"duration": 2.66
},
{
"text": "So 6 plus 13--",
"start": 4087.35,
"duration": 1.3
},
{
"text": "that's 19.",
"start": 4088.65,
"duration": 1.53
},
{
"text": "So here we would evaluate\ng(n) plus h(n) to be 19--",
"start": 4090.18,
"duration": 3.96
},
{
"text": "6 plus 13-- whereas here, we\nwould be 15 plus 6, or 21.",
"start": 4094.14,
"duration": 6.3
},
{
"text": "And so the intuition is,\n19 less than 21, pick here.",
"start": 4100.44,
"duration": 3.34
},
{
"text": "But the idea is ultimately I'd rather be\nhaving taken fewer steps to get to a 13",
"start": 4103.78,
"duration": 5.41
},
{
"text": "than having taken 15\nsteps and be at a six",
"start": 4109.19,
"duration": 3.42
},
{
"text": "because it means I've had to take\nmore steps in order to get there.",
"start": 4112.61,
"duration": 2.8
},
{
"text": "Maybe there's a better path this way.",
"start": 4115.41,
"duration": 3.11
},
{
"text": "So instead we'll explore this route.",
"start": 4118.52,
"duration": 2.53
},
{
"text": "Now if we go one more--\nthis is seven steps plus 14,",
"start": 4121.05,
"duration": 2.79
},
{
"text": "is 21, so between those\ntwo it's sort of a toss up.",
"start": 4123.84,
"duration": 3.0
},
{
"text": "We might end up exploring\nthat one anyways.",
"start": 4126.84,
"duration": 2.13
},
{
"text": "But after that, as these g values start\nto get bigger",
"start": 4128.97,
"duration": 4.3
},
{
"text": "and these heuristic values\nstart to get smaller,",
"start": 4133.27,
"duration": 2.33
},
{
"text": "you'll find that we'll actually\nkeep exploring down this path.",
"start": 4135.6,
"duration": 3.5
},
{
"text": "And you can do the math to see\nthat at every decision point,",
"start": 4139.1,
"duration": 3.19
},
{
"text": "A* search is going to make a choice\nbased on the sum of how many steps",
"start": 4142.29,
"duration": 4.52
},
{
"text": "it took me to get to my\ncurrent position and then",
"start": 4146.81,
"duration": 2.89
},
{
"text": "how far I estimate I am from the goal.",
"start": 4149.7,
"duration": 3.48
},
{
"text": "So while we did have to\nexplore some of these states,",
"start": 4153.18,
"duration": 2.58
},
{
"text": "the ultimate solution we found\nwas, in fact, an optimal solution.",
"start": 4155.76,
"duration": 4.71
},
{
"text": "It did find us the quickest possible\nway to get from the initial state",
"start": 4160.47,
"duration": 4.32
},
{
"text": "to the goal.",
"start": 4164.79,
"duration": 1.01
},
{
"text": "And it turns out that A* is an\noptimal search algorithm under certain",
"start": 4165.8,
"duration": 4.0
},
{
"text": "conditions.",
"start": 4169.8,
"duration": 1.59
},
{
"text": "So the conditions are h of n, my\nheuristic, needs to be admissible.",
"start": 4171.39,
"duration": 4.47
},
{
"text": "What does it mean for a\nheuristic to be admissible?",
"start": 4175.86,
"duration": 2.13
},
{
"text": "Well, a heuristic is admissible if\nit never overestimates the true cost.",
"start": 4177.99,
"duration": 4.71
},
{
"text": "h of n always needs to\neither get it exactly right",
"start": 4182.7,
"duration": 3.56
},
{
"text": "in terms of how far away I am,\nor it needs to underestimate.",
"start": 4186.26,
"duration": 4.26
},
{
"text": "So we saw an example from before where\nthe heuristic value was much smaller",
"start": 4190.52,
"duration": 4.08
},
{
"text": "than the actual cost it would take.",
"start": 4194.6,
"duration": 1.75
},
{
"text": "That's totally fine.",
"start": 4196.35,
"duration": 1.36
},
{
"text": "But the heuristic value\nshould never overestimate.",
"start": 4197.71,
"duration": 2.39
},
{
"text": "It should never think that I'm further\naway from the goal than I actually am.",
"start": 4200.1,
"duration": 4.42
},
{
"text": "And meanwhile, to make a\nstronger statement, h of n",
"start": 4204.52,
"duration": 3.11
},
{
"text": "also needs to be consistent.",
"start": 4207.63,
"duration": 2.25
},
{
"text": "And what does it mean\nfor it to be consistent?",
"start": 4209.88,
"duration": 1.92
},
{
"text": "Mathematically, it means\nthat for every node, which",
"start": 4211.8,
"duration": 2.7
},
{
"text": "we'll call n, and successor, the node\nafter me, that I'll call n prime,",
"start": 4214.5,
"duration": 4.62
},
{
"text": "where it takes a cost of c to make\nthat step, the heuristic value of n",
"start": 4219.12,
"duration": 5.1
},
{
"text": "needs to be less than or\nequal to the heuristic",
"start": 4224.22,
"duration": 2.34
},
{
"text": "value of n prime plus the cost.",
"start": 4226.56,
"duration": 2.5
},
{
"text": "So it's a lot of math, but\nin words, what it ultimately",
"start": 4229.06,
"duration": 2.3
},
{
"text": "means is that if I am here\nat this state right now,",
"start": 4231.36,
"duration": 3.51
},
{
"text": "the heuristic value from me to the goal\nshouldn't be more than the heuristic",
"start": 4234.87,
"duration": 4.38
},
{
"text": "value of my successor, the next place\nI could go to, plus however much",
"start": 4239.25,
"duration": 4.83
},
{
"text": "it would cost me to just make that\nstep, from one step to the next step.",
"start": 4244.08,
"duration": 4.48
},
{
"text": "And so this is just making sure that\nmy heuristic is consistent between all",
"start": 4248.56,
"duration": 3.92
},
{
"text": "of these steps that I might take.",
"start": 4252.48,
"duration": 1.53
},
{
"text": "So as long as this is true, then A*\nsearch is going to find me an optimal",
"start": 4254.01,
"duration": 4.53
},
{
"text": "solution.",
"start": 4258.54,
"duration": 0.93
},
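On a small state space where the true costs are known, both conditions, never overestimating and being consistent across steps, can be checked mechanically. A sketch; the dictionary-of-true-costs and edge-list representations here are hypothetical, not part of the lecture.

```python
def is_admissible(h, true_cost):
    # Admissible: h never overestimates the true cost to the goal.
    return all(h(s) <= cost for s, cost in true_cost.items())

def is_consistent(h, edges):
    # Consistent: for every step n -> n' with cost c, h(n) <= h(n') + c.
    return all(h(n) <= h(n2) + c for n, n2, c in edges)
```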
{
"text": "And this is where much of the challenge\nof solving these search problems can",
"start": 4259.47,
"duration": 3.3
},
{
"text": "sometimes come in, that A* search\nis an algorithm that is known,",
"start": 4262.77,
"duration": 3.21
},
{
"text": "and you could write\nthe code fairly easily.",
"start": 4265.98,
"duration": 1.98
},
{
"text": "But it's choosing the heuristic that\ncan be the interesting challenge.",
"start": 4267.96,
"duration": 3.3
},
{
"text": "The better the heuristic\nis, the better I'll",
"start": 4271.26,
"duration": 2.28
},
{
"text": "be able to solve the problem, and the\nfewer states that I'll have to explore.",
"start": 4273.54,
"duration": 3.33
},
{
"text": "And I need to make sure\nthat the heuristic satisfies",
"start": 4276.87,
"duration": 3.33
},
{
"text": "these particular constraints.",
"start": 4280.2,
"duration": 2.34
},
{
"text": "So all in all, these are some of\nthe examples of search algorithms",
"start": 4282.54,
"duration": 3.36
},
{
"text": "that might work.",
"start": 4285.9,
"duration": 0.75
},
{
"text": "And certainly, there are\nmany more than just this.",
"start": 4286.65,
"duration": 2.43
},
{
"text": "A*, for example, does have a tendency\nto use quite a bit of memory,",
"start": 4289.08,
"duration": 3.51
},
{
"text": "so there are alternative approaches to\nA* that ultimately use less memory than",
"start": 4292.59,
"duration": 4.98
},
{
"text": "this version of A* happens to use.",
"start": 4297.57,
"duration": 2.63
},
{
"text": "And there are other search algorithms\nthat are optimized for other cases",
"start": 4300.2,
"duration": 3.46
},
{
"text": "as well.",
"start": 4303.66,
"duration": 1.9
},
{
"text": "But now, so far, we've only been\nlooking at search algorithms",
"start": 4305.56,
"duration": 2.9
},
{
"text": "where there's one agent.",
"start": 4308.46,
"duration": 2.4
},
{
"text": "I am trying to find a\nsolution to a problem.",
"start": 4310.86,
"duration": 2.61
},
{
"text": "I am trying to navigate\nmy way through a maze.",
"start": 4313.47,
"duration": 2.43
},
{
"text": "I am trying to solve a 15 puzzle.",
"start": 4315.9,
"duration": 1.98
},
{
"text": "I am trying to find driving\ndirections from point A to point B.",
"start": 4317.88,
"duration": 4.31
},
{
"text": "Sometimes in search\nsituations, though, we'll",
"start": 4322.19,
"duration": 2.23
},
{
"text": "enter an adversarial\nsituation where I am",
"start": 4324.42,
"duration": 3.18
},
{
"text": "an agent trying to make\nintelligent decisions,",
"start": 4327.6,
"duration": 2.41
},
{
"text": "and there is someone else who is\nfighting against me, so to speak,",
"start": 4330.01,
"duration": 3.08
},
{
"text": "that has an opposite objective,\nsomeone where I am trying to succeed,",
"start": 4333.09,
"duration": 3.48
},
{
"text": "someone else that wants me to fail.",
"start": 4336.57,
"duration": 2.61
},
{
"text": "And this is most popular in something\nlike a game, a game like tic-tac-toe,",
"start": 4339.18,
"duration": 4.5
},
{
"text": "where we've got this\n3-by-3 grid, and X and O",
"start": 4343.68,
"duration": 2.67
},
{
"text": "take turns either writing an X or\nan O in any one of these squares.",
"start": 4346.35,
"duration": 3.78
},
{
"text": "And the goal is to get three X's\nin a row, if you're the X player,",
"start": 4350.13,
"duration": 3.48
},
{
"text": "or three O's in a row,\nif you're the O player.",
"start": 4353.61,
"duration": 3.0
},
{
"text": "And computers have gotten quite good at\nplaying games, tic-tac-toe very easily,",
"start": 4356.61,
"duration": 3.78
},
{
"text": "but even more complex games.",
"start": 4360.39,
"duration": 1.98
},
{
"text": "And so you might imagine, what does\nan intelligent decision in a game look",
"start": 4362.37,
"duration": 4.2
},
{
"text": "like?",
"start": 4366.57,
"duration": 0.72
},
{
"text": "So maybe X makes an initial move\nin the middle, and O plays up here.",
"start": 4367.29,
"duration": 3.84
},
{
"text": "What does an intelligent\nmove for X now become?",
"start": 4371.13,
"duration": 3.21
},
{
"text": "Where should you move if you were X?",
"start": 4374.34,
"duration": 2.05
},
{
"text": "And it turns out there are\na couple of possibilities.",
"start": 4376.39,
"duration": 2.28
},
{
"text": "But if an AI is playing\nthis game optimally,",
"start": 4378.67,
"duration": 2.45
},
{
"text": "then the AI might play somewhere\nlike the upper right, where",
"start": 4381.12,
"duration": 3.06
},
{
"text": "in this situation, O has\nthe opposite objective of X.",
"start": 4384.18,
"duration": 3.62
},
{
"text": "X is trying to win the game, to\nget three in a row diagonally here,",
"start": 4387.8,
"duration": 3.97
},
{
"text": "and O is trying to stop that\nobjective, the opposite of X's objective.",
"start": 4391.77,
"duration": 3.64
},
{
"text": "And so O is going to place\nhere, to try to block.",
"start": 4395.41,
"duration": 2.45
},
{
"text": "But now, X has a pretty clever move.",
"start": 4397.86,
"duration": 2.4
},
{
"text": "X can make a move, like this where\nnow X has two possible ways that X",
"start": 4400.26,
"duration": 4.92
},
{
"text": "can win the game.",
"start": 4405.18,
"duration": 0.9
},
{
"text": "X could win the game by getting\nthree in a row across here,",
"start": 4406.08,
"duration": 2.97
},
{
"text": "or X could win the game by getting\nthree in a row vertically this way.",
"start": 4409.05,
"duration": 3.33
},
{
"text": "So it doesn't matter where\nO makes their next move.",
"start": 4412.38,
"duration": 2.23
},
{
"text": "O could play here, for example, blocking\nthe three in a row horizontally,",
"start": 4414.61,
"duration": 3.56
},
{
"text": "but then X is going to win the game by\ngetting a three in a row vertically.",
"start": 4418.17,
"duration": 5.03
},
{
"text": "And so there's a fair\namount of reasoning",
"start": 4423.2,
"duration": 1.81
},
{
"text": "that's going on here in order for the\ncomputer to be able to solve a problem.",
"start": 4425.01,
"duration": 3.21
},
{
"text": "And it's similar in spirit to the\nproblems we've looked at so far.",
"start": 4428.22,
"duration": 3.3
},
{
"text": "There are actions, there's some\nsort of state of the board,",
"start": 4431.52,
"duration": 3.06
},
{
"text": "and some transition from\none action to the next,",
"start": 4434.58,
"duration": 2.58
},
{
"text": "but it's different in the sense that\nthis is now not just a classical search",
"start": 4437.16,
"duration": 3.57
},
{
"text": "problem, but an adversarial search\nproblem, that I am the X player,",
"start": 4440.73,
"duration": 4.17
},
{
"text": "trying to find the best\nmoves to make, but I",
"start": 4444.9,
"duration": 2.19
},
{
"text": "know that there is some adversary\nthat is trying to stop me.",
"start": 4447.09,
"duration": 3.55
},
{
"text": "So we need some sort of algorithm\nto deal with these adversarial type",
"start": 4450.64,
"duration": 4.16
},
{
"text": "of search situations.",
"start": 4454.8,
"duration": 1.64
},
{
"text": "And the algorithm we're\ngoing to take a look at",
"start": 4456.44,
"duration": 1.96
},
{
"text": "is an algorithm called\nMinimax, which works",
"start": 4458.4,
"duration": 2.46
},
{
"text": "very well for these deterministic\ngames, where there are two players.",
"start": 4460.86,
"duration": 3.96
},
{
"text": "It can work for other types of games as\nwell, but we'll look right now at games",
"start": 4464.82,
"duration": 3.3
},
{
"text": "where I make a move, that my opponent\nmakes a move, and I am trying to win,",
"start": 4468.12,
"duration": 3.87
},
{
"text": "and my opponent is trying to win, also.",
"start": 4471.99,
"duration": 2.28
},
{
"text": "Or in other words, my opponent\nis trying to get me to lose.",
"start": 4474.27,
"duration": 3.83
},
{
"text": "And so what do we need in order\nto make this algorithm work?",
"start": 4478.1,
"duration": 2.8
},
{
"text": "Well, anytime we try and translate\nthis human concept, of playing a game,",
"start": 4480.9,
"duration": 3.9
},
{
"text": "winning, and losing,\nto a computer, we want",
"start": 4484.8,
"duration": 2.58
},
{
"text": "to translate it into terms that\nthe computer can understand.",
"start": 4487.38,
"duration": 2.85
},
{
"text": "And ultimately, the computer\nreally just understands numbers.",
"start": 4490.23,
"duration": 3.51
},
{
"text": "And so we want some way of\ntranslating a game of X's and O's",
"start": 4493.74,
"duration": 3.15
},
{
"text": "on a grid to something numerical,\nsomething the computer can understand.",
"start": 4496.89,
"duration": 3.6
},
{
"text": "The computer doesn't normally\nunderstand notions of win or lose,",
"start": 4500.49,
"duration": 3.81
},
{
"text": "but it does understand the\nconcept of bigger and smaller.",
"start": 4504.3,
"duration": 4.06
},
{
"text": "And so what we might do is, we\nmight take each of the possible ways",
"start": 4508.36,
"duration": 3.74
},
{
"text": "that a tic-tac-toe game can unfold\nand assign a value, or a utility,",
"start": 4512.1,
"duration": 5.01
},
{
"text": "to each one of those possible ways.",
"start": 4517.11,
"duration": 1.95
},
{
"text": "And in a tic-tac-toe game,\nand in many types of games,",
"start": 4519.06,
"duration": 2.73
},
{
"text": "there are three possible outcomes.",
"start": 4521.79,
"duration": 2.01
},
{
"text": "The outcomes are, O wins,\nX wins, or nobody wins.",
"start": 4523.8,
"duration": 4.41
},
{
"text": "So player one wins, player\ntwo wins, or nobody wins.",
"start": 4528.21,
"duration": 4.23
},
{
"text": "And for now, let's go ahead and\nassign each of these possible outcomes",
"start": 4532.44,
"duration": 4.23
},
{
"text": "a different value.",
"start": 4536.67,
"duration": 1.2
},
{
"text": "We'll say O winning--",
"start": 4537.87,
"duration": 1.23
},
{
"text": "that'll have a value of negative 1.",
"start": 4539.1,
"duration": 2.13
},
{
"text": "Nobody winning-- that'll\nhave a value of 0.",
"start": 4541.23,
"duration": 2.4
},
{
"text": "And X winning-- that\nwill have a value of 1.",
"start": 4543.63,
"duration": 3.07
},
{
"text": "So we've just assigned numbers to\neach of these three possible outcomes.",
"start": 4546.7,
"duration": 4.16
},
{
"text": "And now, we have two players.",
"start": 4550.86,
"duration": 2.62
},
{
"text": "We have the X player and the O player.",
"start": 4553.48,
"duration": 2.78
},
{
"text": "And we're going to go ahead and\ncall the X player the max player.",
"start": 4556.26,
"duration": 4.03
},
{
"text": "And we'll call the O\nplayer the min player.",
"start": 4560.29,
"duration": 2.85
},
{
"text": "And the reason why is because\nin the Minimax algorithm,",
"start": 4563.14,
"duration": 2.78
},
{
"text": "the max player, which in this case is\nX, is aiming to maximize the score.",
"start": 4565.92,
"duration": 5.46
},
{
"text": "These are the possible options for\nthe score, negative 1, 0, and 1.",
"start": 4571.38,
"duration": 3.28
},
{
"text": "X wants to maximize the score,\nmeaning if at all possible,",
"start": 4574.66,
"duration": 3.74
},
{
"text": "X would like this situation\nwhere X wins the game.",
"start": 4578.4,
"duration": 3.58
},
{
"text": "And we give it a score of 1.",
"start": 4581.98,
"duration": 1.61
},
{
"text": "But if this isn't possible, if X\nneeds to choose between these two",
"start": 4583.59,
"duration": 3.42
},
{
"text": "options, negative 1 meaning O\nwinning, or 0 meaning nobody winning,",
"start": 4587.01,
"duration": 4.92
},
{
"text": "X would rather that nobody wins, score\nof 0, than a score of negative 1,",
"start": 4591.93,
"duration": 5.13
},
{
"text": "O winning.",
"start": 4597.06,
"duration": 1.3
},
{
"text": "So this notion of winning\nand losing and tying",
"start": 4598.36,
"duration": 2.72
},
{
"text": "has been reduced mathematically to\njust this idea of, try and maximize",
"start": 4601.08,
"duration": 4.33
},
{
"text": "the score.",
"start": 4605.41,
"duration": 0.66
},
{
"text": "The X player always wants\nthe score to be bigger.",
"start": 4606.07,
"duration": 3.85
},
{
"text": "And on the flip side, the\nmin player, in this case, O,",
"start": 4609.92,
"duration": 2.96
},
{
"text": "is aiming to minimize the score.",
"start": 4612.88,
"duration": 1.74
},
{
"text": "The O player wants the score\nto be as small as possible.",
"start": 4614.62,
"duration": 4.89
},
{
"text": "So now we've taken this game of\nX's and O's and winning and losing",
"start": 4619.51,
"duration": 3.31
},
{
"text": "and turned it into something\nmathematical, something",
"start": 4622.82,
"duration": 2.17
},
{
"text": "where X is trying to maximize the score,\nO is trying to minimize the score.",
"start": 4624.99,
"duration": 4.57
},
{
"text": "Let's now look at all\nof the parts of the game",
"start": 4629.56,
"duration": 2.03
},
{
"text": "that we need in order\nto encode it in an AI",
"start": 4631.59,
"duration": 3.06
},
{
"text": "so that an AI can play\na game like tic-tac-toe.",
"start": 4634.65,
"duration": 4.08
},
{
"text": "So the game is going to\nneed a couple of things.",
"start": 4638.73,
"duration": 2.04
},
{
"text": "We'll need some sort of initial\nstate, which we'll in this case",
"start": 4640.77,
"duration": 2.58
},
{
"text": "call S0, which is how the game begins,\nlike an empty tic-tac-toe board,",
"start": 4643.35,
"duration": 4.17
},
{
"text": "for example.",
"start": 4647.52,
"duration": 1.23
},
{
"text": "We'll also need a\nfunction called player,",
"start": 4648.75,
"duration": 3.66
},
{
"text": "where the player function is going to\ntake as input a state, here represented",
"start": 4652.41,
"duration": 4.62
},
{
"text": "by S. And the output of the\nplayer function is going to be,",
"start": 4657.03,
"duration": 4.02
},
{
"text": "which player's turn is it?",
"start": 4661.05,
"duration": 2.4
},
{
"text": "We need to be able to give a\ntic-tac-toe board to the computer,",
"start": 4663.45,
"duration": 2.91
},
{
"text": "run it through a function, and that\nfunction tells us whose turn it is.",
"start": 4666.36,
"duration": 4.08
},
{
"text": "We'll need some notion of\nactions that we can take.",
"start": 4670.44,
"duration": 2.43
},
{
"text": "We'll see examples of\nthat in just a moment.",
"start": 4672.87,
"duration": 2.11
},
{
"text": "We need some notion of a\ntransition model-- same as before.",
"start": 4674.98,
"duration": 2.93
},
{
"text": "If I have a state, and\nI take an action, I",
"start": 4677.91,
"duration": 2.25
},
{
"text": "need to know what results\nas a consequence of it.",
"start": 4680.16,
"duration": 2.88
},
{
"text": "I need some way of knowing\nwhen the game is over.",
"start": 4683.04,
"duration": 2.77
},
{
"text": "So this is equivalent to\nkind of like a goal test,",
"start": 4685.81,
"duration": 2.09
},
{
"text": "but I need some terminal\ntest, some way to check",
"start": 4687.9,
"duration": 2.43
},
{
"text": "to see if a state is a terminal\nstate, where a terminal state means",
"start": 4690.33,
"duration": 3.9
},
{
"text": "the game is over.",
"start": 4694.23,
"duration": 1.05
},
{
"text": "In the classic game of tic-tac-toe, a\nterminal state means either someone has",
"start": 4695.28,
"duration": 4.77
},
{
"text": "gotten three in a row, or all of the\nsquares of the tic-tac-toe board are",
"start": 4700.05,
"duration": 3.33
},
{
"text": "filled.",
"start": 4703.38,
"duration": 0.66
},
{
"text": "Either of those conditions\nmake it a terminal state.",
"start": 4704.04,
"duration": 2.57
},
{
"text": "In a game of chess, it\nmight be something like,",
"start": 4706.61,
"duration": 1.96
},
{
"text": "when there is checkmate, or if\ncheckmate is no longer possible,",
"start": 4708.57,
"duration": 3.09
},
{
"text": "that becomes a terminal state.",
"start": 4711.66,
"duration": 2.71
},
{
"text": "And then finally we'll need a utility\nfunction, a function that takes a state",
"start": 4714.37,
"duration": 4.04
},
{
"text": "and gives us a numerical value for that\nterminal state, some way of saying,",
"start": 4718.41,
"duration": 3.48
},
{
"text": "if X wins the game,\nthat has a value of 1.",
"start": 4721.89,
"duration": 2.64
},
{
"text": "If O has won the game, that\nhas the value of negative 1.",
"start": 4724.53,
"duration": 2.52
},
{
"text": "If nobody has won the game,\nthat has a value of 0.",
"start": 4727.05,
"duration": 3.3
},
{
"text": "So let's take a look at\neach of these in turn.",
"start": 4730.35,
"duration": 2.44
},
{
"text": "The initial state, we can just represent\nin tic-tac-toe as the empty game board.",
"start": 4732.79,
"duration": 4.28
},
{
"text": "This is where we begin.",
"start": 4737.07,
"duration": 1.39
},
{
"text": "It's the place from which\nwe begin this search.",
"start": 4738.46,
"duration": 2.57
},
{
"text": "And again, I'll be representing\nthese things visually.",
"start": 4741.03,
"duration": 2.51
},
{
"text": "But you can imagine\nthis really just being",
"start": 4743.54,
"duration": 1.87
},
{
"text": "an array, or a two-dimensional array,\nof all of these possible squares.",
"start": 4745.41,
"duration": 4.71
},
{
"text": "Then we need the player function\nthat, again, takes a state",
"start": 4750.12,
"duration": 3.39
},
{
"text": "and tells us whose turn it is.",
"start": 4753.51,
"duration": 1.74
},
{
"text": "Assuming X makes the first move,\nif I have an empty game board,",
"start": 4755.25,
"duration": 3.42
},
{
"text": "then my player function\nis going to return X.",
"start": 4758.67,
"duration": 2.84
},
{
"text": "And if I have a game board where X has\nmade a move, then my player function is",
"start": 4761.51,
"duration": 3.61
},
{
"text": "going to return O. The player function\ntakes a tic-tac-toe game board",
"start": 4765.12,
"duration": 3.72
},
{
"text": "and tells us whose turn it is.",
"start": 4768.84,
"duration": 3.33
},
{
"text": "Next up, we'll consider\nthe actions function.",
"start": 4772.17,
"duration": 2.7
},
{
"text": "The actions function, much like it\ndid in classical search, takes a state",
"start": 4774.87,
"duration": 4.17
},
{
"text": "and gives us the set of\nall of the possible actions",
"start": 4779.04,
"duration": 2.82
},
{
"text": "we can take in that state.",
"start": 4781.86,
"duration": 2.53
},
{
"text": "So let's imagine it's O's turn to move\nin a game board that looks like this.",
"start": 4784.39,
"duration": 4.97
},
{
"text": "What happens when we pass it\ninto the actions function?",
"start": 4789.36,
"duration": 2.76
},
{
"text": "So the actions function takes\nthis state of the game as input,",
"start": 4792.12,
"duration": 3.87
},
{
"text": "and the output is a set of\npossible actions. It's a set of--",
"start": 4795.99,
"duration": 3.87
},
{
"text": "I could move in the upper left, or\nI could move in the bottom middle.",
"start": 4799.86,
"duration": 3.72
},
{
"text": "Those are the two possible\naction choices that I have",
"start": 4803.58,
"duration": 2.91
},
{
"text": "when I begin in this particular state.",
"start": 4806.49,
"duration": 3.7
},
{
"text": "Now, just as before, when\nwe had states and actions,",
"start": 4810.19,
"duration": 2.91
},
{
"text": "we need some sort of\ntransition model to tell us,",
"start": 4813.1,
"duration": 2.37
},
{
"text": "when we take this action in the state,\nwhat is the new state that we get?",
"start": 4815.47,
"duration": 4.03
},
{
"text": "And here, we define that using\nthe result function that takes",
"start": 4819.5,
"duration": 2.93
},
{
"text": "a state as input, as well as an action.",
"start": 4822.43,
"duration": 3.03
},
{
"text": "And when we apply the result\nfunction to this state,",
"start": 4825.46,
"duration": 3.03
},
{
"text": "saying that let's let O move in this\nupper left corner, the new state we get",
"start": 4828.49,
"duration": 4.62
},
{
"text": "is this resulting state, where\nO is in the upper-left corner.",
"start": 4833.11,
"duration": 2.73
},
{
"text": "And now, this seems obvious to someone\nwho knows how to play tic-tac-toe.",
"start": 4835.84,
"duration": 3.04
},
{
"text": "Of course, you play in\nthe upper left corner--",
"start": 4838.88,
"duration": 1.95
},
{
"text": "that's the board you get.",
"start": 4840.83,
"duration": 1.04
},
{
"text": "But all of this information\nneeds to be encoded into the AI.",
"start": 4841.87,
"duration": 3.19
},
{
"text": "The AI doesn't know\nhow to play tic-tac-toe",
"start": 4845.06,
"duration": 2.51
},
{
"text": "until you tell the AI how the\nrules of tic-tac-toe work.",
"start": 4847.57,
"duration": 3.6
},
{
"text": "And this function,\ndefining the function here,",
"start": 4851.17,
"duration": 2.46
},
{
"text": "allows us to tell the AI\nhow this game actually works",
"start": 4853.63,
"duration": 3.42
},
{
"text": "and how actions actually\naffect the outcome of the game.",
"start": 4857.05,
"duration": 4.15
},
{
"text": "So the AI needs to know\nhow the game works.",
"start": 4861.2,
"duration": 2.39
},
{
"text": "The AI also needs to know\nwhen the game is over.",
"start": 4863.59,
"duration": 2.49
},
{
"text": "We do that by defining a function\ncalled terminal that takes as input",
"start": 4866.08,
"duration": 3.24
},
{
"text": "a state S, such that if we take\na game that is not yet over,",
"start": 4869.32,
"duration": 3.88
},
{
"text": "pass it into the terminal\nfunction, the output is false.",
"start": 4873.2,
"duration": 2.93
},
{
"text": "The game is not over.",
"start": 4876.13,
"duration": 1.41
},
{
"text": "But if we take a game that is\nover, because X has gotten three",
"start": 4877.54,
"duration": 3.15
},
{
"text": "in a row along that diagonal, pass\nthat into the terminal function,",
"start": 4880.69,
"duration": 3.67
},
{
"text": "then the output is going to be true,\nbecause the game now is, in fact, over.",
"start": 4884.36,
"duration": 4.56
},
{
"text": "And finally, we've told\nthe AI how the game works",
"start": 4888.92,
"duration": 3.01
},
{
"text": "in terms of what moves can be made and\nwhat happens when you make those moves.",
"start": 4891.93,
"duration": 3.25
},
{
"text": "We've told the AI when the game is over.",
"start": 4895.18,
"duration": 2.11
},
{
"text": "Now we need to tell the AI what the\nvalue of each of those states is.",
"start": 4897.29,
"duration": 3.98
},
{
"text": "And we do that by defining this utility\nfunction, that takes a state, S,",
"start": 4901.27,
"duration": 3.9
},
{
"text": "and tells us the score or\nthe utility of that state.",
"start": 4905.17,
"duration": 3.67
},
{
"text": "So again, we said that if X wins the\ngame, that utility is a value of 1,",
"start": 4908.84,
"duration": 3.92
},
{
"text": "whereas if O wins the game, then\nthe utility of that is negative 1.",
"start": 4912.76,
"duration": 4.59
},
{
"text": "And the AI needs to know, for\neach of these terminal states",
"start": 4917.35,
"duration": 2.85
},
{
"text": "where the game is over, what\nis the utility of that state?",
"start": 4920.2,
"duration": 4.6
},
{
"text": "So if I give the AI a game board like this,\nwhere the game is, in fact, over,",
"start": 4924.8,
"duration": 3.59
},
{
"text": "and I ask the AI to tell me what the\nvalue of that state is, it could do so.",
"start": 4928.39,
"duration": 4.32
},
{
"text": "The value of the state is 1.",
"start": 4932.71,
"duration": 3.12
},
{
"text": "Where things get interesting, though,\nis if the game is not yet over.",
"start": 4935.83,
"duration": 4.41
},
{
"text": "Let's imagine a game board like this.",
"start": 4940.24,
"duration": 1.66
},
{
"text": "We're in the middle of the game.",
"start": 4941.9,
"duration": 1.43
},
{
"text": "It's O's turn to make a move.",
"start": 4943.33,
"duration": 2.52
},
{
"text": "So how do we know it's\nO's turn to make a move?",
"start": 4945.85,
"duration": 2.13
},
{
"text": "We can calculate that,\nusing the player function.",
"start": 4947.98,
"duration": 2.13
},
{
"text": "We can say, player of\nS, pass in the state.",
"start": 4950.11,
"duration": 3.15
},
{
"text": "O is the answer, so we\nknow it's O's turn to move.",
"start": 4953.26,
"duration": 2.98
},
{
"text": "And now, what is the value of this\nboard, and what action should O take?",
"start": 4956.24,
"duration": 4.55
},
{
"text": "Well, that's going to depend.",
"start": 4960.79,
"duration": 1.17
},
{
"text": "We have to do some calculation here.",
"start": 4961.96,
"duration": 1.8
},
{
"text": "And this is where the Minimax\nalgorithm really comes in.",
"start": 4963.76,
"duration": 3.69
},
{
"text": "Recall that X is trying to\nmaximize the score, which means",
"start": 4967.45,
"duration": 3.39
},
{
"text": "that O is trying to minimize the score.",
"start": 4970.84,
"duration": 2.88
},
{
"text": "O would like to minimize the total value\nthat we get at the end of the game.",
"start": 4973.72,
"duration": 5.15
},
{
"text": "And because this game isn't\nover yet, we don't really",
"start": 4978.87,
"duration": 2.44
},
{
"text": "know just yet what the\nvalue of this game board is.",
"start": 4981.31,
"duration": 3.51
},
{
"text": "We have to do some calculation\nin order to figure that out.",
"start": 4984.82,
"duration": 3.47
},
{
"text": "So how do we do that\nkind of calculation?",
"start": 4988.29,
"duration": 2.14
},
{
"text": "Well, in order to do so,\nwe're going to consider,",
"start": 4990.43,
"duration": 2.61
},
{
"text": "just as we might in a\nclassical search situation,",
"start": 4993.04,
"duration": 2.64
},
{
"text": "what actions could happen next, and\nwhat states will that take us to?",
"start": 4995.68,
"duration": 4.48
},
{
"text": "And it turns out that\nin this position, there",
"start": 5000.16,
"duration": 1.91
},
{
"text": "are only two open squares, which means\nthere are only two open places where",
"start": 5002.07,
"duration": 3.96
},
{
"text": "O can make a move.",
"start": 5006.03,
"duration": 2.61
},
{
"text": "O could either make a\nmove in the upper left,",
"start": 5008.64,
"duration": 2.44
},
{
"text": "or O can make a move\nin the bottom middle.",
"start": 5011.08,
"duration": 3.08
},
{
"text": "And Minimax doesn't know right out\nof the box which of those moves",
"start": 5014.16,
"duration": 2.82
},
{
"text": "is going to be better, so\nit's going to consider both.",
"start": 5016.98,
"duration": 3.54
},
{
"text": "But now we run into the same situation.",
"start": 5020.52,
"duration": 1.8
},
{
"text": "Now I have two more game boards,\nneither of which is over.",
"start": 5022.32,
"duration": 2.85
},
{
"text": "What happens next?",
"start": 5025.17,
"duration": 1.49
},
{
"text": "And now it's in this\nsense that Minimax is",
"start": 5026.66,
"duration": 1.75
},
{
"text": "what we'll call a recursive algorithm.",
"start": 5028.41,
"duration": 2.25
},
{
"text": "It's going to now repeat the\nexact same process, although now",
"start": 5030.66,
"duration": 4.14
},
{
"text": "considering it from the\nopposite perspective.",
"start": 5034.8,
"duration": 2.84
},
{
"text": "It's as if I am now going to put\nmyself-- if I am the O player,",
"start": 5037.64,
"duration": 3.64
},
{
"text": "I'm going to put myself in my opponent's\nshoes, my opponent as the X player,",
"start": 5041.28,
"duration": 4.26
},
{
"text": "and consider, what would my opponent\ndo if they were in this position?",
"start": 5045.54,
"duration": 4.47
},
{
"text": "What would my opponent do, the X\nplayer, if they were in that position?",
"start": 5050.01,
"duration": 4.08
},
{
"text": "And what would then happen?",
"start": 5054.09,
"duration": 1.15
},
{
"text": "Well, the other player,\nmy opponent, the X player,",
"start": 5055.24,
"duration": 3.02
},
{
"text": "is trying to maximize the\nscore, whereas I am trying",
"start": 5058.26,
"duration": 2.94
},
{
"text": "to minimize the score as the O player.",
"start": 5061.2,
"duration": 2.04
},
{
"text": "So X is trying to find the maximum\npossible value that they can get.",
"start": 5063.24,
"duration": 4.31
},
{
"text": "And so what's going to happen?",
"start": 5067.55,
"duration": 1.84
},
{
"text": "Well, from this board position,\nX only has one choice.",
"start": 5069.39,
"duration": 3.39
},
{
"text": "X is going to play here, and\nthey're going to get three in a row.",
"start": 5072.78,
"duration": 2.82
},
{
"text": "And we know that that board, X winning--",
"start": 5075.6,
"duration": 2.31
},
{
"text": "that has a value of 1.",
"start": 5077.91,
"duration": 1.62
},
{
"text": "If X wins the game, the value\nof that game board is 1.",
"start": 5079.53,
"duration": 3.69
},
{
"text": "And so from this position, if this\nstate can only ever lead to this state,",
"start": 5083.22,
"duration": 5.46
},
{
"text": "it's the only possible option,\nand this state has a value of 1,",
"start": 5088.68,
"duration": 3.96
},
{
"text": "then the maximum possible value that the\nX player can get from this game board",
"start": 5092.64,
"duration": 4.59
},
{
"text": "is also 1 from here.",
"start": 5097.23,
"duration": 1.66
},
{
"text": "The only place we can get is\nto a game with the value of 1,",
"start": 5098.89,
"duration": 2.78
},
{
"text": "so this game board\nalso has a value of 1.",
"start": 5101.67,
"duration": 3.59
},
{
"text": "Now we consider this one over here.",
"start": 5105.26,
"duration": 2.38
},
{
"text": "What's going to happen now?",
"start": 5107.64,
"duration": 1.22
},
{
"text": "Well, X needs to make a move.",
"start": 5108.86,
"duration": 1.6
},
{
"text": "The only move X can make is in the\nupper left, so X will go there.",
"start": 5110.46,
"duration": 3.05
},
{
"text": "And in this game, no one wins the game.",
"start": 5113.51,
"duration": 1.84
},
{
"text": "Nobody has three in a row.",
"start": 5115.35,
"duration": 1.59
},
{
"text": "So the value of that game board is 0.",
"start": 5116.94,
"duration": 2.66
},
{
"text": "Nobody's won.",
"start": 5119.6,
"duration": 1.29
},
{
"text": "And so again, by the same logic, if\nfrom this board position, the only place",
"start": 5120.89,
"duration": 3.99
},
{
"text": "we can get to is a board\nwhere the value is 0,",
"start": 5124.88,
"duration": 2.88
},
{
"text": "then this state must\nalso have a value of 0.",
"start": 5127.76,
"duration": 3.53
},
{
"text": "And now here comes the choice part,\nthe idea of trying to minimize.",
"start": 5131.29,
"duration": 4.03
},
{
"text": "I, as the O player, now know\nthat if I make this choice,",
"start": 5135.32,
"duration": 3.65
},
{
"text": "moving in the upper left, that is going\nto result in a game with a value of 1,",
"start": 5138.97,
"duration": 4.17
},
{
"text": "assuming everyone plays optimally.",
"start": 5143.14,
"duration": 2.1
},
{
"text": "And if I instead play\nin the lower middle,",
"start": 5145.24,
"duration": 1.8
},
{
"text": "choose this fork in the road, that\nis going to result in a game board",
"start": 5147.04,
"duration": 3.21
},
{
"text": "with a value of 0.",
"start": 5150.25,
"duration": 1.26
},
{
"text": "I have two options.",
"start": 5151.51,
"duration": 1.24
},
{
"text": "I have a 1 and a 0 to choose\nfrom, and I need to pick.",
"start": 5152.75,
"duration": 3.63
},
{
"text": "And as the min player, I\nwould rather choose the option",
"start": 5156.38,
"duration": 3.14
},
{
"text": "with the minimum value.",
"start": 5159.52,
"duration": 1.42
},
{
"text": "So whenever a player\nhas multiple choices,",
"start": 5160.94,
"duration": 2.15
},
{
"text": "the min player will choose the\noption with the smallest value.",
"start": 5163.09,
"duration": 2.94
},
{
"text": "The max player will choose the\noption with the largest value.",
"start": 5166.03,
"duration": 2.67
},
{
"text": "Between the 1 and the\n0, the 0 is smaller,",
"start": 5168.7,
"duration": 2.77
},
{
"text": "meaning I'd rather tie the\ngame than lose the game.",
"start": 5171.47,
"duration": 3.27
},
{
"text": "And so this game board, we'll\nsay, also has a value of 0,",
"start": 5174.74,
"duration": 3.35
},
{
"text": "because if I am playing optimally,\nI will pick this fork in the road.",
"start": 5178.09,
"duration": 4.2
},
{
"text": "I'll place my O here to\nblock X's three in a row.",
"start": 5182.29,
"duration": 3.0
},
{
"text": "X will move in the upper left,\nand the game will be over,",
"start": 5185.29,
"duration": 2.79
},
{
"text": "and no one will have won the game.",
"start": 5188.08,
"duration": 2.1
},
{
"text": "So this is now the logic of Minimax,\nto consider all of the possible options",
"start": 5190.18,
"duration": 4.08
},
{
"text": "that I can take, all of the\nactions that I can take,",
"start": 5194.26,
"duration": 2.88
},
{
"text": "and then to put myself\nin my opponent's shoes.",
"start": 5197.14,
"duration": 2.25
},
{
"text": "I decide what move I'm going to\nmake now by considering what move",
"start": 5199.39,
"duration": 3.54
},
{
"text": "my opponent will make on the next turn.",
"start": 5202.93,
"duration": 1.95
},
{
"text": "And to do that, I consider what move\nI would make on the turn after that,",
"start": 5204.88,
"duration": 3.35
},
{
"text": "so on and so forth, until I get all\nthe way down to the end of the game,",
"start": 5208.23,
"duration": 4.33
},
{
"text": "to one of these so-called\nterminal states.",
"start": 5212.56,
"duration": 2.49
},
{
"text": "In fact, this very\ndecision point, where I",
"start": 5215.05,
"duration": 2.43
},
{
"text": "am trying to decide as the O player\nwhat move to make,",
"start": 5217.48,
"duration": 3.15
},
{
"text": "might have just been a part of the\nlogic that the X player, my opponent,",
"start": 5220.63,
"duration": 4.14
},
{
"text": "was using on the move before me.",
"start": 5224.77,
"duration": 1.59
},
{
"text": "This might be part of\nsome larger tree where",
"start": 5226.36,
"duration": 2.79
},
{
"text": "X is trying to make a\nmove in this situation",
"start": 5229.15,
"duration": 2.43
},
{
"text": "and needs to pick between\nthree different options",
"start": 5231.58,
"duration": 2.25
},
{
"text": "in order to make a decision\nabout what should happen.",
"start": 5233.83,
"duration": 2.61
},
{
"text": "And the further and further away\nwe are from the end of the game,",
"start": 5236.44,
"duration": 2.82
},
{
"text": "the deeper this tree has to go,\nbecause every level in this tree",
"start": 5239.26,
"duration": 4.2
},
{
"text": "is going to correspond to one move,\none move or action that I take,",
"start": 5243.46,
"duration": 4.29
},
{
"text": "one move or action that my opponent\ntakes, in order to decide what happens.",
"start": 5247.75,
"duration": 4.6
},
{
"text": "And in fact, it turns out that if\nI am the X player in this position,",
"start": 5252.35,
"duration": 3.62
},
{
"text": "and I recursively do the logic\nand see I have a choice--",
"start": 5255.97,
"duration": 2.66
},
{
"text": "three choices, in fact, one of which\nleads to a value of 0, if I play here,",
"start": 5258.63,
"duration": 4.54
},
{
"text": "and if everyone plays optimally,\nthe game will be a tie.",
"start": 5263.17,
"duration": 2.73
},
{
"text": "If I play here, then O is going to\nwin, and I'll lose, playing optimally.",
"start": 5265.9,
"duration": 5.1
},
{
"text": "Or here, where I, the\nX player, can win--",
"start": 5271.0,
"duration": 2.61
},
{
"text": "well, between a score of\n0 and negative 1 and 1,",
"start": 5273.61,
"duration": 3.62
},
{
"text": "I'd rather pick the\nboard with a value of 1,",
"start": 5277.23,
"duration": 1.84
},
{
"text": "because that's the\nmaximum value I can get.",
"start": 5279.07,
"duration": 2.2
},
{
"text": "And so this board would also\nhave a maximum value of 1.",
"start": 5281.27,
"duration": 4.28
},
{
"text": "And so this tree can\nget very, very deep,",
"start": 5285.55,
"duration": 2.26
},
{
"text": "especially as the game starts\nto have more and more moves.",
"start": 5287.81,
"duration": 3.59
},
{
"text": "And this logic works not\njust for tic-tac-toe,",
"start": 5291.4,
"duration": 2.07
},
{
"text": "but any of these sorts of games where I\nmake a move, my opponent makes a move,",
"start": 5293.47,
"duration": 3.48
},
{
"text": "and ultimately, we have\nthese adversarial objectives.",
"start": 5296.95,
"duration": 3.51
},
{
"text": "And we can simplify the diagram\ninto a diagram that looks like this.",
"start": 5300.46,
"duration": 3.88
},
{
"text": "This is a more abstract\nversion of the Minimax tree,",
"start": 5304.34,
"duration": 2.9
},
{
"text": "where these are each states, but I'm\nno longer representing them as exactly",
"start": 5307.24,
"duration": 3.21
},
{
"text": "like tic-tac-toe boards.",
"start": 5310.45,
"duration": 1.38
},
{
"text": "This is just representing some generic\ngame that might be tic-tac-toe,",
"start": 5311.83,
"duration": 3.86
},
{
"text": "might be some other game altogether.",
"start": 5315.69,
"duration": 2.44
},
{
"text": "Any of these green arrows\nthat are pointing up--",
"start": 5318.13,
"duration": 2.46
},
{
"text": "that represents a maximizing state.",
"start": 5320.59,
"duration": 2.11
},
{
"text": "I would like the score\nto be as big as possible.",
"start": 5322.7,
"duration": 2.59
},
{
"text": "And any of these red\narrows pointing down--",
"start": 5325.29,
"duration": 2.14
},
{
"text": "those are minimizing states, where\nthe player is the min player,",
"start": 5327.43,
"duration": 3.15
},
{
"text": "and they are trying to make\nthe score as small as possible.",
"start": 5330.58,
"duration": 3.61
},
{
"text": "So if you imagine in this situation, I\nam the maximizing player, this player",
"start": 5334.19,
"duration": 4.01
},
{
"text": "here, and I have three choices--",
"start": 5338.2,
"duration": 2.35
},
{
"text": "one choice gives me a score of 5,\none choice gives me a score of 3,",
"start": 5340.55,
"duration": 3.62
},
{
"text": "and one choice gives me a score of 9.",
"start": 5344.17,
"duration": 2.14
},
{
"text": "Well, then, between those\nthree choices, my best option",
"start": 5346.31,
"duration": 3.74
},
{
"text": "is to choose this 9 over here, the\nscore that maximizes my options out",
"start": 5350.05,
"duration": 4.38
},
{
"text": "of all three options.",
"start": 5354.43,
"duration": 1.53
},
{
"text": "And so I can give this\nstate a value of 9,",
"start": 5355.96,
"duration": 2.91
},
{
"text": "because among my three\noptions, that is the best",
"start": 5358.87,
"duration": 2.34
},
{
"text": "choice that I have available to me.",
"start": 5361.21,
"duration": 3.12
},
{
"text": "So that's my decision now.",
"start": 5364.33,
"duration": 1.47
},
{
"text": "You can imagine it's like one move\naway from the end of the game.",
"start": 5365.8,
"duration": 3.78
},
{
"text": "But then you could also\nask a reasonable question.",
"start": 5369.58,
"duration": 2.09
},
{
"text": "What might my opponent do two moves\naway from the end of the game?",
"start": 5371.67,
"duration": 3.63
},
{
"text": "My opponent is the minimizing player.",
"start": 5375.3,
"duration": 1.74
},
{
"text": "They are trying to make the\nscore as small as possible.",
"start": 5377.04,
"duration": 2.77
},
{
"text": "Imagine what would have happened if\nthey had to pick which choice to make.",
"start": 5379.81,
"duration": 3.88
},
{
"text": "One choice leads us to this state,\nwhere I, the maximizing player,",
"start": 5383.69,
"duration": 3.68
},
{
"text": "am going to opt for 9, the\nbiggest score that I can get.",
"start": 5387.37,
"duration": 3.47
},
{
"text": "And one leads to this state,\nwhere I, the maximizing player,",
"start": 5390.84,
"duration": 4.29
},
{
"text": "would choose 8, which is then\nthe largest score that I can get.",
"start": 5395.13,
"duration": 3.85
},
{
"text": "Now, the minimizing player, forced\nto choose between a 9 or an 8,",
"start": 5398.98,
"duration": 3.8
},
{
"text": "is going to choose the smallest possible\nscore, which in this case is an 8.",
"start": 5402.78,
"duration": 4.32
},
{
"text": "And that is, then, how\nthis process would unfold.",
"start": 5407.1,
"duration": 2.35
},
{
"text": "But the minimizing player,\nin this case, considers",
"start": 5409.45,
"duration": 2.21
},
{
"text": "both of their options, and\nthen all of the options",
"start": 5411.66,
"duration": 2.52
},
{
"text": "that would happen as a result of that.",
"start": 5414.18,
"duration": 3.41
},
{
"text": "So this now is a general picture of\nwhat the Minimax algorithm looks like.",
"start": 5417.59,
"duration": 3.83
},
{
"text": "Let's now try to formalize it\nusing a little bit of pseudocode.",
"start": 5421.42,
"duration": 3.28
},
{
"text": "So what exactly is happening\nin the Minimax algorithm?",
"start": 5424.7,
"duration": 3.02
},
{
"text": "Well, given a state, S, we\nneed to decide what to happen.",
"start": 5427.72,
"duration": 3.78
},
{
"text": "The max player-- if it's\nthe max player's turn, then",
"start": 5431.5,
"duration": 3.48
},
{
"text": "max is going to pick an action,\nA, in actions of S. Recall",
"start": 5434.98,
"duration": 4.56
},
{
"text": "that actions is a function\nthat takes a state",
"start": 5439.54,
"duration": 2.55
},
{
"text": "and gives me back all of the\npossible actions that I can take.",
"start": 5442.09,
"duration": 2.88
},
{
"text": "It tells me all of the\nmoves that are possible.",
"start": 5444.97,
"duration": 3.8
},
{
"text": "The max player is going\nto specifically pick",
"start": 5448.77,
"duration": 1.84
},
{
"text": "an action, A, in the set\nof actions that gives me",
"start": 5450.61,
"duration": 3.36
},
{
"text": "the highest value of min value of result\nof S and A. So what does that mean?",
"start": 5453.97,
"duration": 7.45
},
{
"text": "Well, it means that I want to\nmake the option that gives me",
"start": 5461.42,
"duration": 2.93
},
{
"text": "the highest score of\nall of the actions, A.",
"start": 5464.35,
"duration": 3.48
},
{
"text": "But what score is that going to have?",
"start": 5467.83,
"duration": 1.8
},
{
"text": "To calculate that, I need to know\nwhat my opponent, the min player,",
"start": 5469.63,
"duration": 3.15
},
{
"text": "is going to do if they try to minimize\nthe value of the state that results.",
"start": 5472.78,
"duration": 5.52
},
{
"text": "So we say, what state results\nafter I take this action,",
"start": 5478.3,
"duration": 3.96
},
{
"text": "and what happens when\nthe min player tries",
"start": 5482.26,
"duration": 2.31
},
{
"text": "to minimize the value of that state?",
"start": 5484.57,
"duration": 3.0
},
{
"text": "I consider that for all\nof my possible options.",
"start": 5487.57,
"duration": 2.69
},
{
"text": "And after I've considered that\nfor all of my possible options,",
"start": 5490.26,
"duration": 2.59
},
{
"text": "I pick the action, A, that\nhas the highest value.",
"start": 5492.85,
"duration": 3.92
},
{
"text": "Likewise, the min player is going\nto do the same thing, but backwards.",
"start": 5496.77,
"duration": 3.32
},
{
"text": "They're also going to consider, what\nare all of the possible actions they",
"start": 5500.09,
"duration": 3.09
},
{
"text": "can take if it's their turn?",
"start": 5503.18,
"duration": 1.62
},
{
"text": "And they're going to pick the\naction, A, that has the smallest",
"start": 5504.8,
"duration": 2.94
},
{
"text": "possible value of all the options.",
"start": 5507.74,
"duration": 2.48
},
{
"text": "And the way they know what the smallest\npossible value of all the options is,",
"start": 5510.22,
"duration": 3.21
},
{
"text": "is by considering what the\nmax player is going to do,",
"start": 5513.43,
"duration": 4.06
},
{
"text": "by saying, what's the result of applying\nthis action to the current state,",
"start": 5517.49,
"duration": 4.23
},
{
"text": "and then, what would the\nmax player try to do?",
"start": 5521.72,
"duration": 1.92
},
{
"text": "What value would the max player\ncalculate for that particular state?",
"start": 5523.64,
"duration": 4.36
},
{
"text": "So everyone makes their decision\nbased on trying to estimate",
"start": 5528.0,
"duration": 3.26
},
{
"text": "what the other person would do.",
"start": 5531.26,
"duration": 2.34
},
{
"text": "And now we need to turn\nour attention to these two",
"start": 5533.6,
"duration": 2.33
},
{
"text": "functions, maxValue and minValue.",
"start": 5535.93,
"duration": 2.61
},
{
"text": "How do you actually calculate\nthe value of a state",
"start": 5538.54,
"duration": 3.15
},
{
"text": "if you're trying to maximize\nits value, and how do you",
"start": 5541.69,
"duration": 2.58
},
{
"text": "calculate the value of a state if\nyou're trying to minimize the value?",
"start": 5544.27,
"duration": 3.42
},
{
"text": "If you can do that, then we\nhave an entire implementation",
"start": 5547.69,
"duration": 3.06
},
{
"text": "of this Minimax algorithm.",
"start": 5550.75,
"duration": 2.17
},
{
"text": "So let's try it.",
"start": 5552.92,
"duration": 0.69
},
{
"text": "Let's try and implement\nthis maxValue function",
"start": 5553.61,
"duration": 2.87
},
{
"text": "that takes a state and returns\nas output the value of that state",
"start": 5556.48,
"duration": 4.15
},
{
"text": "if I'm trying to maximize\nthe value of the state.",
"start": 5560.63,
"duration": 3.23
},
{
"text": "Well, the first thing I can check\nfor is to see if the game is over,",
"start": 5563.86,
"duration": 3.0
},
{
"text": "because if the game is over--",
"start": 5566.86,
"duration": 1.24
},
{
"text": "in other words, if the\nstate is a terminal state--",
"start": 5568.1,
"duration": 2.72
},
{
"text": "then this is easy.",
"start": 5570.82,
"duration": 1.17
},
{
"text": "I already have this utility\nfunction that tells me",
"start": 5571.99,
"duration": 2.94
},
{
"text": "what the value of the board is.",
"start": 5574.93,
"duration": 1.35
},
{
"text": "If the game is over, I\njust check, did X win?",
"start": 5576.28,
"duration": 2.38
},
{
"text": "Did O win?",
"start": 5578.66,
"duration": 0.5
},
{
"text": "Is that a tie?",
"start": 5579.16,
"duration": 0.99
},
{
"text": "And the utility function just knows\nwhat the value of the state is.",
"start": 5580.15,
"duration": 4.05
},
{
"text": "What's trickier is if\nthe game isn't over,",
"start": 5584.2,
"duration": 2.23
},
{
"text": "because then I need to do this\nrecursive reasoning about thinking,",
"start": 5586.43,
"duration": 2.81
},
{
"text": "what is my opponent going\nto do on the next move?",
"start": 5589.24,
"duration": 3.39
},
{
"text": "Then I want to calculate\nthe value of this state,",
"start": 5592.63,
"duration": 3.2
},
{
"text": "and I want the value of the\nstate to be as high as possible.",
"start": 5595.83,
"duration": 3.23
},
{
"text": "And I'll keep track of that\nvalue in a variable called v.",
"start": 5599.06,
"duration": 2.84
},
{
"text": "And if I want the value\nto be as high as possible,",
"start": 5601.9,
"duration": 2.67
},
{
"text": "I need to give v an initial value.",
"start": 5604.57,
"duration": 2.76
},
{
"text": "And initially, I'll just go ahead\nand set it to be as low as possible,",
"start": 5607.33,
"duration": 4.17
},
{
"text": "because I don't know what\noptions are available to me yet.",
"start": 5611.5,
"duration": 3.15
},
{
"text": "So initially, I'll set v equal\nto negative infinity, which",
"start": 5614.65,
"duration": 3.96
},
{
"text": "seems a little bit\nstrange, but the idea here",
"start": 5618.61,
"duration": 2.01
},
{
"text": "is, I want the value initially\nto be as low as possible,",
"start": 5620.62,
"duration": 3.06
},
{
"text": "because as I consider\nmy actions, I'm always",
"start": 5623.68,
"duration": 2.55
},
{
"text": "going to try and do better than v.\nAnd if I set v to negative infinity,",
"start": 5626.23,
"duration": 3.99
},
{
"text": "I know I can always do better than that.",
"start": 5630.22,
"duration": 2.63
},
{
"text": "So now I consider my actions.",
"start": 5632.85,
"duration": 2.11
},
{
"text": "And this is going to\nbe some kind of loop,",
"start": 5634.96,
"duration": 1.75
},
{
"text": "where for every action\nin actions of state--",
"start": 5636.71,
"duration": 3.36
},
{
"text": "recall, actions is a\nfunction that takes my state",
"start": 5640.07,
"duration": 2.76
},
{
"text": "and gives me all the possible\nactions that I can use in that state.",
"start": 5642.83,
"duration": 3.82
},
{
"text": "So for each one of those actions,\nI want to compare it to v and say,",
"start": 5646.65,
"duration": 4.92
},
{
"text": "all right, v is going to be equal to\nthe maximum of v and this expression.",
"start": 5651.57,
"duration": 6.72
},
{
"text": "So what is this expression?",
"start": 5658.29,
"duration": 1.8
},
{
"text": "Well, first it is, get the result\nof taking the action and the state,",
"start": 5660.09,
"duration": 4.55
},
{
"text": "and then get the min value of that.",
"start": 5664.64,
"duration": 3.61
},
{
"text": "In other words, let's\nsay, I want to find out",
"start": 5668.25,
"duration": 2.84
},
{
"text": "from that state what is the\nbest that the min player can do,",
"start": 5671.09,
"duration": 3.13
},
{
"text": "because they are going to\ntry and minimize the score.",
"start": 5674.22,
"duration": 2.21
},
{
"text": "So whatever the resulting score\nis of the min value of that state,",
"start": 5676.43,
"duration": 3.78
},
{
"text": "compare it to my current best value,\nand just pick the maximum of those two,",
"start": 5680.21,
"duration": 3.66
},
{
"text": "because I am trying\nto maximize the value.",
"start": 5683.87,
"duration": 2.59
},
{
"text": "In short, what these three\nlines of code are doing",
"start": 5686.46,
"duration": 2.09
},
{
"text": "are going through all of my possible\nactions and asking the question,",
"start": 5688.55,
"duration": 3.91
},
{
"text": "how do I maximize the score, given\nwhat my opponent is going to try to do?",
"start": 5692.46,
"duration": 5.42
},
{
"text": "After this entire loop,\nI can just return v,",
"start": 5697.88,
"duration": 2.76
},
{
"text": "and that is now the value\nof that particular state.",
"start": 5700.64,
"duration": 3.43
},
{
"text": "And for the min player, it's the exact\nopposite of this, the same logic,",
"start": 5704.07,
"duration": 3.84
},
{
"text": "just backwards.",
"start": 5707.91,
"duration": 1.02
},
{
"text": "To calculate the minimum\nvalue of a state,",
"start": 5708.93,
"duration": 1.98
},
{
"text": "first we check if it's a terminal state.",
"start": 5710.91,
"duration": 1.86
},
{
"text": "If it is, we return its utility.",
"start": 5712.77,
"duration": 2.19
},
{
"text": "Otherwise, we're going to now try\nto minimize the value of the state,",
"start": 5714.96,
"duration": 4.32
},
{
"text": "given all of my possible actions.",
"start": 5719.28,
"duration": 2.01
},
{
"text": "So I need an initial value\nfor v, the value of the state.",
"start": 5721.29,
"duration": 3.51
},
{
"text": "And initially, I'll set it to\ninfinity, because I know it can always",
"start": 5724.8,
"duration": 3.63
},
{
"text": "get something less than infinity.",
"start": 5728.43,
"duration": 1.96
},
{
"text": "So by starting with v equals infinity,\nI make sure that the very first action",
"start": 5730.39,
"duration": 3.53
},
{
"text": "I find--",
"start": 5733.92,
"duration": 0.66
},
{
"text": "that will be less than this value of v.",
"start": 5734.58,
"duration": 2.97
},
{
"text": "And then I do the same thing--",
"start": 5737.55,
"duration": 1.26
},
{
"text": "loop over all of my possible\nactions, and for each",
"start": 5738.81,
"duration": 2.82
},
{
"text": "of the results that we could get when\nthe max player makes their decision,",
"start": 5741.63,
"duration": 4.11
},
{
"text": "let's take the minimum of that\nand the current value of v.",
"start": 5745.74,
"duration": 3.42
},
{
"text": "So after all is said and done I get\nthe smallest possible value of v,",
"start": 5749.16,
"duration": 4.05
},
{
"text": "that I then return back to the user.",
"start": 5753.21,
"duration": 3.27
},
{
"text": "So that, in effect, is the\npseudocode for Minimax.",
"start": 5756.48,
"duration": 2.54
},
{
"text": "That is how we take a game and\nfigure out what the best move to make",
"start": 5759.02,
"duration": 2.97
},
{
"text": "is by recursively using these\nmaxValue and minValue functions, where",
"start": 5761.99,
"duration": 4.5
},
{
"text": "maxValue calls minValue,\nminValue calls maxValue, back",
"start": 5766.49,
"duration": 3.57
},
{
"text": "and forth, all the way until we reach\na terminal state, at which point",
"start": 5770.06,
"duration": 3.48
},
{
"text": "our algorithm can simply return the\nutility of that particular state.",
"start": 5773.54,
"duration": 5.21
},
{
"text": "What you might imagine\nis that this is going",
"start": 5778.75,
"duration": 1.84
},
{
"text": "to start to be a long process,\nespecially as games start",
"start": 5780.59,
"duration": 3.18
},
{
"text": "to get more complex, as we start to add\nmore moves and more possible options",
"start": 5783.77,
"duration": 4.29
},
{
"text": "and games that might\nlast quite a bit longer.",
"start": 5788.06,
"duration": 2.65
},
{
"text": "So the next question to ask is, what\nsort of optimizations can we make here?",
"start": 5790.71,
"duration": 3.65
},
{
"text": "How can we do better in\norder to use less space",
"start": 5794.36,
"duration": 3.42
},
{
"text": "or take less time to be able\nto solve this kind of problem?",
"start": 5797.78,
"duration": 4.33
},
{
"text": "And we'll take a look at a\ncouple of possible optimizations.",
"start": 5802.11,
"duration": 2.63
},
{
"text": "But for one, we'll take\na look at this example.",
"start": 5804.74,
"duration": 2.59
},
{
"text": "Again, we're turning to these\nup arrows and down arrows.",
"start": 5807.33,
"duration": 2.52
},
{
"text": "Let's imagine that I now am the\nmax player, this green arrow.",
"start": 5809.85,
"duration": 4.22
},
{
"text": "I am trying to make the\nscore as high as possible.",
"start": 5814.07,
"duration": 3.19
},
{
"text": "And this is an easy game,\nwhere there are just two moves.",
"start": 5817.26,
"duration": 3.11
},
{
"text": "I make a move, one of\nthese three options,",
"start": 5820.37,
"duration": 2.61
},
{
"text": "and then my opponent makes a\nmove, one of these three options,",
"start": 5822.98,
"duration": 2.91
},
{
"text": "based on what move I make.",
"start": 5825.89,
"duration": 1.4
},
{
"text": "And as a result, we get some value.",
"start": 5827.29,
"duration": 3.02
},
{
"text": "Let's look at the order in\nwhich I do these calculations",
"start": 5830.31,
"duration": 3.14
},
{
"text": "and figure out if there are any\noptimizations I might be able to make",
"start": 5833.45,
"duration": 3.36
},
{
"text": "to this calculation process.",
"start": 5836.81,
"duration": 1.95
},
{
"text": "I'm going to have to look at\nthese states one at a time.",
"start": 5838.76,
"duration": 2.44
},
{
"text": "So let's say I start here on\nthe left and say, all right, now",
"start": 5841.2,
"duration": 2.54
},
{
"text": "I'm going to consider, what will the\nmin player, my opponent, try to do here?",
"start": 5843.74,
"duration": 4.56
},
{
"text": "Well, the min player is going to look\nat all three of their possible actions",
"start": 5848.3,
"duration": 3.51
},
{
"text": "and look at their value, because\nthese are terminal states.",
"start": 5851.81,
"duration": 2.46
},
{
"text": "They're the end of the game.",
"start": 5854.27,
"duration": 1.16
},
{
"text": "And so they'll see, all right, this\nnode is a value of 4, value of 8,",
"start": 5855.43,
"duration": 3.4
},
{
"text": "value of 5.",
"start": 5858.83,
"duration": 1.58
},
{
"text": "And the min player is going\nto say, well, all right.",
"start": 5860.41,
"duration": 2.39
},
{
"text": "Between these three\noptions, 4, 8, and 5,",
"start": 5862.8,
"duration": 2.99
},
{
"text": "I'll take the smallest\none I'll take the 4.",
"start": 5865.79,
"duration": 2.28
},
{
"text": "So this state now has a value of 4.",
"start": 5868.07,
"duration": 2.54
},
{
"text": "Then I as the max player say,\nall right, if I take this action,",
"start": 5870.61,
"duration": 3.45
},
{
"text": "it will have a value of 4.",
"start": 5874.06,
"duration": 1.11
},
{
"text": "That's the best that I\ncan do, because min player",
"start": 5875.17,
"duration": 2.16
},
{
"text": "is going to try and minimize my score.",
"start": 5877.33,
"duration": 2.48
},
{
"text": "So now, what if I take this option?",
"start": 5879.81,
"duration": 1.46
},
{
"text": "We'll explore this next.",
"start": 5881.27,
"duration": 1.35
},
{
"text": "And now I explore what the min player\nwould do if I choose this action.",
"start": 5882.62,
"duration": 3.63
},
{
"text": "And the min player is going to say,\nall right, what are the three options?",
"start": 5886.25,
"duration": 3.1
},
{
"text": "The min player has options\nbetween 9, 3, and 7, and so 3",
"start": 5889.35,
"duration": 5.09
},
{
"text": "is the smallest among 9, 3, and 7.",
"start": 5894.44,
"duration": 2.07
},
{
"text": "So we'll go ahead and say\nthis state has a value of 3.",
"start": 5896.51,
"duration": 3.13
},
{
"text": "So now I, as the max player--",
"start": 5899.64,
"duration": 1.34
},
{
"text": "I have now explored two\nof my three options.",
"start": 5900.98,
"duration": 2.37
},
{
"text": "I know that one of my options will\nguarantee me a score of 4, at least,",
"start": 5903.35,
"duration": 4.04
},
{
"text": "and one of my options will\nguarantee me a score of 3.",
"start": 5907.39,
"duration": 3.71
},
{
"text": "And now I consider my third option\nand say, all right, what happens here?",
"start": 5911.1,
"duration": 3.06
},
{
"text": "Same exact logic-- the\nmin player is going",
"start": 5914.16,
"duration": 1.75
},
{
"text": "to look at these three\nstates, 2, 4, and 6,",
"start": 5915.91,
"duration": 2.21
},
{
"text": "say the minimum possible option is\n2, so the min player wants the two.",
"start": 5918.12,
"duration": 4.36
},
{
"text": "Now I, as the max player, have\ncalculated all of the information",
"start": 5922.48,
"duration": 3.3
},
{
"text": "by looking two layers deep, by\nlooking at all of these nodes.",
"start": 5925.78,
"duration": 3.44
},
{
"text": "And I can now say, between the 4,\nthe 3, and the 2, you know what?",
"start": 5929.22,
"duration": 3.55
},
{
"text": "I'd rather take the\n4, because if I choose",
"start": 5932.77,
"duration": 2.85
},
{
"text": "this option, if my\nopponent plays optimally,",
"start": 5935.62,
"duration": 2.58
},
{
"text": "they will try and get me to the\n4, but that's the best I can do.",
"start": 5938.2,
"duration": 3.42
},
{
"text": "I can't guarantee a\nhigher score, because if I",
"start": 5941.62,
"duration": 2.55
},
{
"text": "pick either of these two options, I\nmight get a 3, or I might get a 2.",
"start": 5944.17,
"duration": 3.57
},
{
"text": "And it's true that down\nhere is a 9, and that's",
"start": 5947.74,
"duration": 2.88
},
{
"text": "the highest score of any of the scores.",
"start": 5950.62,
"duration": 1.77
},
{
"text": "So I might be tempted\nto say, you know what?",
"start": 5952.39,
"duration": 1.84
},
{
"text": "Maybe I should take this option,\nbecause I might get the 9.",
"start": 5954.23,
"duration": 3.17
},
{
"text": "But if the min player is\nplaying intelligently,",
"start": 5957.4,
"duration": 2.58
},
{
"text": "if they're making the best\nmoves at each possible option",
"start": 5959.98,
"duration": 2.67
},
{
"text": "they have when they get to make\na choice, I'll be left with a 3,",
"start": 5962.65,
"duration": 3.72
},
{
"text": "whereas I could better,\nplaying optimally,",
"start": 5966.37,
"duration": 2.1
},
{
"text": "have guaranteed that I would get the 4.",
"start": 5968.47,
"duration": 3.25
},
{
"text": "So that doesn't affect\nthe logic that I would",
"start": 5971.72,
"duration": 1.88
},
{
"text": "use as a Minimax player trying to\nmaximize my score from that node there.",
"start": 5973.6,
"duration": 5.31
},
{
"text": "But it turns out, that took\nquite a bit of computation",
"start": 5978.91,
"duration": 2.4
},
{
"text": "for me to figure that out.",
"start": 5981.31,
"duration": 1.08
},
{
"text": "I had to reason through all of these\nnodes in order to draw this conclusion.",
"start": 5982.39,
"duration": 3.3
},
{
"text": "And this is for a pretty simple\ngame, where I have three choices,",
"start": 5985.69,
"duration": 3.09
},
{
"text": "my opponent has three choices,\nand then the game's over.",
"start": 5988.78,
"duration": 3.49
},
{
"text": "So what I'd like to do is come up\nwith some way to optimize this.",
"start": 5992.27,
"duration": 2.8
},
{
"text": "Maybe I don't need to do all of\nthis calculation to still reach",
"start": 5995.07,
"duration": 3.92
},
{
"text": "the conclusion that, you know what?",
"start": 5998.99,
"duration": 1.46
},
{
"text": "This action to the left--",
"start": 6000.45,
"duration": 1.5
},
{
"text": "that's the best that I could do.",
"start": 6001.95,
"duration": 1.86
},
{
"text": "Let's go ahead and try again and\ntry and be a little more intelligent",
"start": 6003.81,
"duration": 3.48
},
{
"text": "about how I go about doing this.",
"start": 6007.29,
"duration": 2.86
},
{
"text": "So first, I start the exact same way.",
"start": 6010.15,
"duration": 2.17
},
{
"text": "I don't know what to\ndo initially, so I just",
"start": 6012.32,
"duration": 1.84
},
{
"text": "have to consider one of the options and\nconsider what the min player might do.",
"start": 6014.16,
"duration": 4.77
},
{
"text": "Min has three options, 4, 8, and 5.",
"start": 6018.93,
"duration": 2.61
},
{
"text": "And between those three options,\nmin says, 4 is the best they can do,",
"start": 6021.54,
"duration": 3.96
},
{
"text": "because they want to try\nto minimize the score.",
"start": 6025.5,
"duration": 2.89
},
{
"text": "Now, I, the max player, will\nconsider my second option,",
"start": 6028.39,
"duration": 3.57
},
{
"text": "making this move here and considering\nwhat my opponent would do in response.",
"start": 6031.96,
"duration": 4.77
},
{
"text": "What will the min player do?",
"start": 6036.73,
"duration": 1.74
},
{
"text": "Well, the min player is going to, from\nthat state, look at their options.",
"start": 6038.47,
"duration": 3.09
},
{
"text": "And I would say, all right.",
"start": 6041.56,
"duration": 1.15
},
{
"text": "9 is an option, 3 is an option.",
"start": 6042.71,
"duration": 3.27
},
{
"text": "And if I am doing the math\nfrom this initial state,",
"start": 6045.98,
"duration": 2.24
},
{
"text": "doing all this calculation,\nwhen I see a 3,",
"start": 6048.22,
"duration": 3.18
},
{
"text": "that should immediately\nbe a red flag for me,",
"start": 6051.4,
"duration": 2.85
},
{
"text": "because when I see a 3\ndown here at this state,",
"start": 6054.25,
"duration": 2.64
},
{
"text": "I know that the value of this\nstate is going to be at most 3.",
"start": 6056.89,
"duration": 4.98
},
{
"text": "It's going to be 3 or\nsomething less than 3,",
"start": 6061.87,
"duration": 2.76
},
{
"text": "even though I haven't yet looked at\nthis last action or even further actions",
"start": 6064.63,
"duration": 3.42
},
{
"text": "if there were more actions\nthat could be taken here.",
"start": 6068.05,
"duration": 2.82
},
{
"text": "How do I know that?",
"start": 6070.87,
"duration": 1.03
},
{
"text": "Well, I know that the min player is\ngoing to try to minimize my score.",
"start": 6071.9,
"duration": 4.1
},
{
"text": "And if they see a 3, the only way\nthis could be something other than a 3",
"start": 6076.0,
"duration": 4.35
},
{
"text": "is if this remaining thing that I\nhaven't yet looked at is less than 3,",
"start": 6080.35,
"duration": 4.17
},
{
"text": "which means there is no way for this\nvalue to be anything more than 3,",
"start": 6084.52,
"duration": 4.29
},
{
"text": "because the min player\ncan already guarantee a 3,",
"start": 6088.81,
"duration": 2.58
},
{
"text": "and they are trying\nto minimize my score.",
"start": 6091.39,
"duration": 3.54
},
{
"text": "So what does that tell me?",
"start": 6094.93,
"duration": 1.45
},
{
"text": "Well, it tells me that\nif I choose this action,",
"start": 6096.38,
"duration": 2.36
},
{
"text": "my score is going to be 3, or maybe\neven less than 3, if I'm unlucky.",
"start": 6098.74,
"duration": 4.5
},
{
"text": "But I already know that this\naction will guarantee me a 4.",
"start": 6103.24,
"duration": 4.26
},
{
"text": "And so given that I know that this\naction guarantees me a score of 4,",
"start": 6107.5,
"duration": 3.73
},
{
"text": "and this action means I\ncan't do better than 3,",
"start": 6111.23,
"duration": 2.9
},
{
"text": "if I'm trying to maximize\nmy options, there",
"start": 6114.13,
"duration": 2.16
},
{
"text": "is no need for me to\nconsider this triangle here.",
"start": 6116.29,
"duration": 3.0
},
{
"text": "There is no value, no\nnumber that could go here,",
"start": 6119.29,
"duration": 2.7
},
{
"text": "that would change my mind\nbetween these two options.",
"start": 6121.99,
"duration": 2.77
},
{
"text": "I'm always going to opt for\nthis path that gets me a 4,",
"start": 6124.76,
"duration": 3.26
},
{
"text": "as opposed to this path, where\nthe best I can do is a 3,",
"start": 6128.02,
"duration": 3.24
},
{
"text": "if my opponent plays optimally.",
"start": 6131.26,
"duration": 2.34
},
{
"text": "And this is going to be true for all of\nthe future states that I look at, too.",
"start": 6133.6,
"duration": 3.25
},
{
"text": "But if I look over here, at what\nmin player might do over here,",
"start": 6136.85,
"duration": 2.62
},
{
"text": "if I see that this state is a 2, I\nknow that this state is at most a 2,",
"start": 6139.47,
"duration": 5.04
},
{
"text": "because the only way this value\ncould be something other than 2",
"start": 6144.51,
"duration": 3.99
},
{
"text": "is if one of these remaining\nstates is less than a 2,",
"start": 6148.5,
"duration": 3.32
},
{
"text": "and so the min player\nwould opt for that instead.",
"start": 6151.82,
"duration": 2.78
},
{
"text": "So even without looking\nat these remaining states,",
"start": 6154.6,
"duration": 2.87
},
{
"text": "I, as the maximizing player, can know\nthat choosing this path to the left",
"start": 6157.47,
"duration": 5.13
},
{
"text": "is going to be better than choosing\neither of those two paths to the right,",
"start": 6162.6,
"duration": 4.68
},
{
"text": "because this one can't be better than\n3, this one can't be better than 2,",
"start": 6167.28,
"duration": 4.53
},
{
"text": "and so 4 in this case is\nthe best that I can do.",
"start": 6171.81,
"duration": 4.74
},
{
"text": "And I can say now that this\nstate has a value of 4.",
"start": 6176.55,
"duration": 3.03
},
{
"text": "So in order to do this\ntype of calculation,",
"start": 6179.58,
"duration": 2.1
},
{
"text": "I was doing a little bit more\nbookkeeping, keeping track of things,",
"start": 6181.68,
"duration": 3.3
},
{
"text": "keeping track all the time of,\nwhat is the best that I can do,",
"start": 6184.98,
"duration": 3.5
},
{
"text": "what is the worst that I can do, and\nfor each of these states, saying,",
"start": 6188.48,
"duration": 2.92
},
{
"text": "all right, well, if I already\nknow that I can get a 4,",
"start": 6191.4,
"duration": 3.9
},
{
"text": "then if the best I can\ndo at this state is a 3,",
"start": 6195.3,
"duration": 2.76
},
{
"text": "no reason for me to consider it.",
"start": 6198.06,
"duration": 1.71
},
{
"text": "I can effectively prune this leaf\nand anything below it from the tree.",
"start": 6199.77,
"duration": 5.25
},
{
"text": "And it's for that reason this\napproach, this optimization to Minimax,",
"start": 6205.02,
"duration": 3.42
},
{
"text": "is called alpha-beta pruning.",
"start": 6208.44,
"duration": 2.07
},
{
"text": "Alpha and beta stand\nfor these two values",
"start": 6210.51,
"duration": 1.89
},
{
"text": "that you'll have to keep track\nof, the best you can do so far",
"start": 6212.4,
"duration": 2.55
},
{
"text": "and the worst you can do so far.",
"start": 6214.95,
"duration": 1.65
},
{
"text": "And pruning is the idea of, if I\nhave a big, long, deep search tree,",
"start": 6216.6,
"duration": 4.57
},
{
"text": "I might be able to search it\nmore efficiently if I don't",
"start": 6221.17,
"duration": 2.48
},
{
"text": "need to search through everything,\nif I can remove some of the nodes",
"start": 6223.65,
"duration": 3.42
},
{
"text": "to try and optimize the way that I\nlook through this entire search space.",
"start": 6227.07,
"duration": 5.1
},
{
"text": "So alpha-beta pruning can\ndefinitely save us a lot of time",
"start": 6232.17,
"duration": 3.33
},
{
"text": "as we go about the search process by\nmaking our searches more efficient.",
"start": 6235.5,
"duration": 3.93
},
{
"text": "But even then, it's still not\ngreat as games get more complex.",
"start": 6239.43,
"duration": 4.32
},
{
"text": "Tic-tac-toe, fortunately,\nis a relatively simple game,",
"start": 6243.75,
"duration": 3.24
},
{
"text": "and we might reasonably\nask a question like,",
"start": 6246.99,
"duration": 2.83
},
{
"text": "how many total possible\ntic-tac-toe games are there?",
"start": 6249.82,
"duration": 3.82
},
{
"text": "You can think about it.",
"start": 6253.64,
"duration": 0.96
},
{
"text": "You can try and estimate, how many\nmoves are there at any given point?",
"start": 6254.6,
"duration": 3.04
},
{
"text": "How many moves long can the game last?",
"start": 6257.64,
"duration": 1.86
},
{
"text": "It turns out there are about 255,000\npossible tic-tac-toe games that",
"start": 6259.5,
"duration": 6.75
},
{
"text": "can be played.",
"start": 6266.25,
"duration": 1.53
},
{
"text": "But compare that to a more\ncomplex game, something",
"start": 6267.78,
"duration": 2.46
},
{
"text": "like a game of chess, for example--",
"start": 6270.24,
"duration": 1.82
},
{
"text": "far more pieces, far more moves,\ngames that last much longer.",
"start": 6272.06,
"duration": 3.89
},
{
"text": "How many total possible\nchess games could there be?",
"start": 6275.95,
"duration": 2.99
},
{
"text": "It turns out that after\njust four moves each,",
"start": 6278.94,
"duration": 2.16
},
{
"text": "four moves by the white player,\nfour moves by the black player,",
"start": 6281.1,
"duration": 3.0
},
{
"text": "that there are 288\nbillion possible chess",
"start": 6284.1,
"duration": 3.18
},
{
"text": "games that can result from that\nsituation, after just four moves each.",
"start": 6287.28,
"duration": 4.02
},
{
"text": "And going even further.",
"start": 6291.3,
"duration": 1.03
},
{
"text": "If you look at entire chess games and\nhow many possible chess games there",
"start": 6292.33,
"duration": 3.05
},
{
"text": "could be as a result there,\nthere are more than 10",
"start": 6295.38,
"duration": 3.18
},
{
"text": "to the 29,000 possible chess\ngames, far more chess games",
"start": 6298.56,
"duration": 4.26
},
{
"text": "than could ever be considered.",
"start": 6302.82,
"duration": 1.59
},
{
"text": "And this is a pretty big problem for the\nMinimax algorithm, because the Minimax",
"start": 6304.41,
"duration": 3.6
},
{
"text": "algorithm starts with an initial state,\nconsiders all the possible actions",
"start": 6308.01,
"duration": 4.14
},
{
"text": "and all the possible actions\nafter that, all the way",
"start": 6312.15,
"duration": 3.42
},
{
"text": "until we get to the end of the game.",
"start": 6315.57,
"duration": 2.94
},
{
"text": "And that's going to be a\nproblem if the computer is",
"start": 6318.51,
"duration": 2.13
},
{
"text": "going to need to look through\nthis many states, which",
"start": 6320.64,
"duration": 2.46
},
{
"text": "is far more than any computer could ever\ndo in any reasonable amount of time.",
"start": 6323.1,
"duration": 5.58
},
{
"text": "So what do we do in order\nto solve this problem?",
"start": 6328.68,
"duration": 2.22
},
{
"text": "Instead of looking through\nall these states, which",
"start": 6330.9,
"duration": 2.08
},
{
"text": "is totally intractable for a computer,\nwe need some better approach.",
"start": 6332.98,
"duration": 3.44
},
{
"text": "And it turns out that better approach\ngenerally takes the form of something",
"start": 6336.42,
"duration": 3.18
},
{
"text": "called depth-limited Minimax.",
"start": 6339.6,
"duration": 2.4
},
{
"text": "Where normally Minimax\nis depth-unlimited--",
"start": 6342.0,
"duration": 2.74
},
{
"text": "we just keep going, layer\nafter layer, move after move,",
"start": 6344.74,
"duration": 2.48
},
{
"text": "until we get to the end of the game--",
"start": 6347.22,
"duration": 1.71
},
{
"text": "depth-limited Minimax is instead\ngoing to say, you know what?",
"start": 6348.93,
"duration": 2.99
},
{
"text": "After a certain number\nof moves-- maybe I'll",
"start": 6351.92,
"duration": 1.84
},
{
"text": "look 10 moves ahead, maybe I'll look\n12 moves ahead, but after that point,",
"start": 6353.76,
"duration": 3.63
},
{
"text": "I'm going to stop and not\nconsider additional moves that",
"start": 6357.39,
"duration": 2.96
},
{
"text": "might come after that,\njust because it would",
"start": 6360.35,
"duration": 1.84
},
{
"text": "be computationally intractable to\nconsider all of those possible options.",
"start": 6362.19,
"duration": 5.78
},
{
"text": "But what do we do after we\nget 10 or 12 moves deep,",
"start": 6367.97,
"duration": 2.76
},
{
"text": "and we arrive at a situation\nwhere the game's not over?",
"start": 6370.73,
"duration": 3.11
},
{
"text": "Minimax still needs a way to assign a\nscore to that game board or game state",
"start": 6373.84,
"duration": 4.27
},
{
"text": "to figure out what its\ncurrent value is, which",
"start": 6378.11,
"duration": 2.28
},
{
"text": "is easy to do if the\ngame is over, but not so",
"start": 6380.39,
"duration": 2.37
},
{
"text": "easy to do if the game is not yet over.",
"start": 6382.76,
"duration": 2.64
},
{
"text": "So in order to do that, we need\nto add one additional feature",
"start": 6385.4,
"duration": 2.55
},
{
"text": "to depth-limited Minimax\ncalled an evaluation function,",
"start": 6387.95,
"duration": 3.45
},
{
"text": "which is just some\nfunction that is going",
"start": 6391.4,
"duration": 1.77
},
{
"text": "to estimate the expected utility\nof a game from a given state.",
"start": 6393.17,
"duration": 4.83
},
{
"text": "So in a game like chess, if you\nimagine that a game value of 1",
"start": 6398.0,
"duration": 3.0
},
{
"text": "means white wins, negative 1 means\nblack wins, 0 means it's a draw,",
"start": 6401.0,
"duration": 4.95
},
{
"text": "then you might imagine that a score of\n0.8 means white is very likely to win,",
"start": 6405.95,
"duration": 5.31
},
{
"text": "though certainly not guaranteed.",
"start": 6411.26,
"duration": 1.74
},
{
"text": "And you would have an evaluation\nfunction that estimates",
"start": 6413.0,
"duration": 3.45
},
{
"text": "how good the game state happens to be.",
"start": 6416.45,
"duration": 3.0
},
{
"text": "And depending on how good\nthat evaluation function is,",
"start": 6419.45,
"duration": 3.27
},
{
"text": "that is ultimately what's going\nto constrain how good the AI is.",
"start": 6422.72,
"duration": 3.33
},
{
"text": "The better the AI is\nat estimating how good",
"start": 6426.05,
"duration": 3.03
},
{
"text": "or how bad any particular game\nstate is, the better the AI",
"start": 6429.08,
"duration": 3.36
},
{
"text": "is going to be able to play that game.",
"start": 6432.44,
"duration": 2.22
},
{
"text": "If the evaluation function\nis worse and not as good",
"start": 6434.66,
"duration": 2.25
},
{
"text": "as estimating what the\nexpected utility is,",
"start": 6436.91,
"duration": 2.76
},
{
"text": "then it's going to be\na whole lot harder.",
"start": 6439.67,
"duration": 2.01
},
{
"text": "And you can imagine trying to come\nup with these evaluation functions.",
"start": 6441.68,
"duration": 3.45
},
{
"text": "In chess, for example, you might\nwrite an evaluation function",
"start": 6445.13,
"duration": 2.76
},
{
"text": "based on how many pieces\nyou have, as compared",
"start": 6447.89,
"duration": 2.34
},
{
"text": "to how many pieces your\nopponent has, because each one",
"start": 6450.23,
"duration": 2.31
},
{
"text": "has a value in your evaluation function.",
"start": 6452.54,
"duration": 2.7
},
{
"text": "It probably needs to\nbe a little bit more",
"start": 6455.24,
"duration": 1.71
},
{
"text": "complicated than that to consider\nother possible situations that",
"start": 6456.95,
"duration": 3.45
},
{
"text": "might arise as well.",
"start": 6460.4,
"duration": 1.69
},
{
"text": "And there are many other\nvariants on Minimax",
"start": 6462.09,
"duration": 2.15
},
{
"text": "that add additional features in\norder to help it perform better",
"start": 6464.24,
"duration": 3.33
},
{
"text": "under these larger and more\ncomputationally intractable",
"start": 6467.57,
"duration": 2.76
},
{
"text": "situations, where we couldn't possibly\nexplore all of the possible moves,",
"start": 6470.33,
"duration": 4.29
},
{
"text": "so we need to figure out\nhow to use evaluation",
"start": 6474.62,
"duration": 2.61
},
{
"text": "functions and other techniques to be\nable to play these games, ultimately,",
"start": 6477.23,
"duration": 4.14
},
{
"text": "better.",
"start": 6481.37,
"duration": 0.99
},
{
"text": "But this now was a look at this kind\nof adversarial search, these search",
"start": 6482.36,
"duration": 3.12
},
{
"text": "problems where we have\nsituations where I am trying",
"start": 6485.48,
"duration": 3.39
},
{
"text": "to play against some sort of opponent.",
"start": 6488.87,
"duration": 2.49
},
{
"text": "And these search problems\nshow up all over the place",
"start": 6491.36,
"duration": 2.52
},
{
"text": "throughout artificial intelligence.",
"start": 6493.88,
"duration": 1.68
},
{
"text": "We've been talking a lot today about\nmore classical search problems,",
"start": 6495.56,
"duration": 3.15
},
{
"text": "like trying to find directions\nfrom one location to another.",
"start": 6498.71,
"duration": 3.37
},
{
"text": "But anytime an AI is faced with\ntrying to make a decision like,",
"start": 6502.08,
"duration": 3.51
},
{
"text": "what do I do now in order to\ndo something that is rational,",
"start": 6505.59,
"duration": 2.87
},
{
"text": "or do something that is intelligent,\nor trying to play a game,",
"start": 6508.46,
"duration": 2.76
},
{
"text": "like figuring out what move to\nmake, these sort of algorithms",
"start": 6511.22,
"duration": 2.61
},
{
"text": "can really come in handy.",
"start": 6513.83,
"duration": 1.87
},
{
"text": "It turns out that for tic-tac-toe,\nthe solution is pretty simple,",
"start": 6515.7,
"duration": 2.72
},
{
"text": "because it's a small game.",
"start": 6518.42,
"duration": 1.18
},
{
"text": "XKCD has famously put\ntogether a webcomic",
"start": 6519.6,
"duration": 2.87
},
{
"text": "where he will tell you exactly what\nmove to make as the optimal move",
"start": 6522.47,
"duration": 3.15
},
{
"text": "to make, no matter what\nyour opponent happens to do.",
"start": 6525.62,
"duration": 2.67
},
{
"text": "This type of thing is\nnot quite as possible",
"start": 6528.29,
"duration": 2.22
},
{
"text": "for a much larger game\nlike checkers or chess,",
"start": 6530.51,
"duration": 2.28
},
{
"text": "for example, where chess\nis totally computationally",
"start": 6532.79,
"duration": 2.61
},
{
"text": "intractable for most computers\nto be able to explore",
"start": 6535.4,
"duration": 2.52
},
{
"text": "all the possible states.",
"start": 6537.92,
"duration": 1.41
},
{
"text": "So we really need our AI to be\nfar more intelligent about how",
"start": 6539.33,
"duration": 4.32
},
{
"text": "they go about trying to\ndeal with these problems",
"start": 6543.65,
"duration": 2.2
},
{
"text": "and how they go about\ntaking this environment",
"start": 6545.85,
"duration": 2.33
},
{
"text": "that they find themselves\nin and ultimately",
"start": 6548.18,
"duration": 1.83
},
{
"text": "searching for one of these solutions.",
"start": 6550.01,
"duration": 2.7
},
{
"text": "So this, then, was a look at\nsearch and artificial intelligence.",
"start": 6552.71,
"duration": 3.0
},
{
"text": "Next time we'll take\na look at knowledge,",
"start": 6555.71,
"duration": 1.92
},
{
"text": "thinking about how it is that our AIs\nare able to know information, reason",
"start": 6557.63,
"duration": 3.6
},
{
"text": "about that information, and draw\nconclusions, all in our look at AI",
"start": 6561.23,
"duration": 3.96
},
{
"text": "and the principles behind it.",
"start": 6565.19,
"duration": 1.56
},
{
"text": "We'll see you next time.",
"start": 6566.75,
"duration": 1.75
}
]