AI_DL_Assignment / 8. Build CNNs in Python using Keras /12. Building a Simple Image Classifier using CIFAR10.srt
| 1 | |
| 00:00:02,190 --> 00:00:07,970 | |
| OK, so you've just finished looking at your MNIST dataset and your first model. | |
| 2 | |
| 00:00:08,070 --> 00:00:13,050 | |
| So basically you've imported the data, you've created your model, you've trained it, and you've analyzed | |
| 3 | |
| 00:00:13,050 --> 00:00:17,310 | |
| the results; you've seen how to load it back and test it on some real examples. | |
| 4 | |
| 00:00:17,640 --> 00:00:19,500 | |
| So now let's do the same for CIFAR-10. | |
| 5 | |
| 00:00:19,530 --> 00:00:23,290 | |
| We're not going to break it down into all the steps together; we're actually going to show you | |
| 6 | |
| 00:00:23,310 --> 00:00:26,220 | |
| how we actually just do it all in one sequence of code. | |
| 7 | |
| 00:00:26,370 --> 00:00:29,180 | |
| Similarly to what we did at the end of the chapter. | |
| 8 | |
| 00:00:29,500 --> 00:00:31,230 | |
| But now a different data set. | |
| 9 | |
| 00:00:31,230 --> 00:00:32,840 | |
| So let's get to it. | |
| 10 | |
| 00:00:32,970 --> 00:00:35,290 | |
| But first let's talk a bit about CIFAR-10. | |
| 11 | |
| 00:00:35,340 --> 00:00:41,430 | |
| So CIFAR-10, basically, is a huge image dataset; it's got about 60,000 images, or maybe more. | |
| 12 | |
| 00:00:41,790 --> 00:00:48,210 | |
| And basically it has 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. | |
| 13 | |
| 00:00:48,210 --> 00:00:53,760 | |
| These are some sample images in it, and we're going to try and classify, actually detect, what | |
| 14 | |
| 00:00:53,760 --> 00:00:55,050 | |
| is seen in the image. | |
| 15 | |
| 00:00:55,050 --> 00:00:59,350 | |
| So if you show it a picture of a car it should know it's a car; a dog, it should know it's a dog. | |
| 16 | |
| 00:00:59,580 --> 00:01:01,170 | |
| And so on for the other categories here. | |
| 17 | |
| 00:01:01,260 --> 00:01:02,430 | |
| So let's get to it. | |
| 18 | |
| 00:01:03,650 --> 00:01:04,020 | |
| OK. | |
| 19 | |
| 00:01:04,050 --> 00:01:11,110 | |
| So opening our 8.1 Building a CNN Image Classification CIFAR-10, that's the file in the notebook | |
| 20 | |
| 00:01:11,140 --> 00:01:12,350 | |
| folder here. | |
| 21 | |
| 00:01:12,990 --> 00:01:15,070 | |
| Basically I just defined the categories again. | |
| 22 | |
| 00:01:15,150 --> 00:01:17,360 | |
| And now we get straight to it. | |
| 23 | |
| 00:01:17,490 --> 00:01:19,110 | |
| Let's just go through this quickly. | |
| 24 | |
| 00:01:19,140 --> 00:01:22,650 | |
| We have all our imports here, probably some that aren't even necessary. | |
| 25 | |
| 00:01:22,650 --> 00:01:25,270 | |
| I tend to do that sometimes when I'm testing stuff. | |
| 26 | |
| 00:01:25,350 --> 00:01:28,430 | |
| My boss gets irritated with me quite a bit for that. | |
| 27 | |
| 00:01:28,450 --> 00:01:29,840 | |
| But fair enough. | |
| 28 | |
| 00:01:29,850 --> 00:01:31,870 | |
| I still do it from time to time. | |
| 29 | |
| 00:01:31,920 --> 00:01:33,360 | |
| We have a batch size. | |
| 30 | |
| 00:01:33,420 --> 00:01:35,650 | |
| We have a number of classes, and we have | |
| 31 | |
| 00:01:35,770 --> 00:01:40,930 | |
| the epochs set in right here, which we use further down as well. | |
| 32 | |
| 00:01:40,980 --> 00:01:44,920 | |
| We just display our shapes, just to get a handle on what we're doing with our data. | |
| 33 | |
| 00:01:44,970 --> 00:01:51,840 | |
| It's always a good idea to quantify how big your dataset is. | |
| 34 | |
| 00:01:51,900 --> 00:01:55,110 | |
| What's the shape? What's the size of your test dataset as well? | |
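The shape check described here can be sketched like this; to keep the sketch runnable without downloading anything, zero-filled arrays stand in for the output of `keras.datasets.cifar10.load_data()` (the shapes shown are CIFAR-10's documented shapes, and the variable names are my assumption):

```python
import numpy as np

# Stand-ins with CIFAR-10's documented shapes; in the notebook these come from
# (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train = np.zeros((50000, 32, 32, 3), dtype=np.uint8)
y_train = np.zeros((50000, 1), dtype=np.uint8)
x_test = np.zeros((10000, 32, 32, 3), dtype=np.uint8)
y_test = np.zeros((10000, 1), dtype=np.uint8)

# Always sanity-check the shapes before building a model.
print("x_train:", x_train.shape)  # (50000, 32, 32, 3)
print("y_train:", y_train.shape)  # (50000, 1)
print("x_test:", x_test.shape)    # (10000, 32, 32, 3)
```

50,000 training images plus 10,000 test images is where the 60,000 total mentioned above comes from.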
| 35 | |
| 00:01:55,530 --> 00:02:01,620 | |
| So then we just do a quick formatting here, because depending on the dimension ordering we may actually have to | |
| 36 | |
| 00:02:01,620 --> 00:02:03,040 | |
| add the channel dimension onto it. | |
| 37 | |
| 00:02:03,050 --> 00:02:07,830 | |
| It just comes up automatically here, and then we just change it to float. | |
| 38 | |
| 00:02:07,830 --> 00:02:14,580 | |
| We divide by 255, and we change the test labels and the training labels to categorical, | |
| 39 | |
| 00:02:14,580 --> 00:02:16,280 | |
| or one-hot encoding. | |
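The preprocessing just described (cast to float, divide by 255, one-hot encode the labels) can be sketched in plain NumPy; in the notebook the last step is done with `keras.utils.to_categorical`, and the tiny stand-in batch here is made up:

```python
import numpy as np

num_classes = 10

# A small stand-in batch: 4 fake 32x32 RGB images and their integer labels.
x = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype("float32")
y = np.array([0, 3, 7, 9])

# Scale pixel values from [0, 255] down to [0, 1].
x /= 255.0

# One-hot encode the labels (the same result keras.utils.to_categorical gives).
y_onehot = np.eye(num_classes)[y]

print(y_onehot[1])  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```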
| 40 | |
| 00:02:16,530 --> 00:02:21,540 | |
| We define our model here, which should basically be the same model as we did before. | |
| 41 | |
| 00:02:21,540 --> 00:02:28,740 | |
| But I think I've added in, yes I did, two more convolution layers here. Also, what's different about | |
| 42 | |
| 00:02:28,740 --> 00:02:34,770 | |
| this model is that, with the 32 filters and the 32 filters here for the first convolutional layers, | |
| 43 | |
| 00:02:34,780 --> 00:02:37,350 | |
| we have the activation defined outside of them. | |
| 44 | |
| 00:02:37,590 --> 00:02:43,620 | |
| I actually have done that on purpose, because I remember when I was creating this whole model I actually | |
| 45 | |
| 00:02:43,620 --> 00:02:47,300 | |
| wanted to show you a variety of ways to actually create these models. | |
| 46 | |
| 00:02:47,310 --> 00:02:48,690 | |
| This isn't meant to confuse you. | |
| 47 | |
| 00:02:48,690 --> 00:02:53,940 | |
| This is meant to actually show you that Keras is very flexible, and you will find a lot of example Keras | |
| 48 | |
| 00:02:53,970 --> 00:03:00,000 | |
| code where sometimes people define the activation outside of the conv layer here, and sometimes it's added | |
| 49 | |
| 00:03:00,010 --> 00:03:04,650 | |
| in here; you could easily have just put activation='relu' here as well. | |
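The two equivalent styles mentioned here, activation as a separate layer versus activation passed to the layer, look like this in Keras (a minimal sketch; the 32-filter count is just an example):

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Activation, Conv2D

# Style 1: the activation as its own layer after the convolution.
m1 = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(32, (3, 3), padding="same"),
    Activation("relu"),
])

# Style 2: the activation passed directly to the Conv2D layer.
m2 = Sequential([
    Input(shape=(32, 32, 3)),
    Conv2D(32, (3, 3), padding="same", activation="relu"),
])

# Both styles produce the same output shape and the same parameter count.
print(m1.output_shape, m2.output_shape)
```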
| 50 | |
| 00:03:04,770 --> 00:03:07,020 | |
| So going back to this, we have two conv layers. | |
| 51 | |
| 00:03:07,050 --> 00:03:13,200 | |
| So a second conv layer here, ReLU, max pooling, dropout, and we have two more convolutional layers here, each with | |
| 52 | |
| 00:03:13,200 --> 00:03:19,330 | |
| 64 filters and ReLU activations, max pooling and dropout here again, and we flatten everything. | |
| 53 | |
| 00:03:19,380 --> 00:03:22,260 | |
| And now we have a much larger dense layer here. | |
| 54 | |
| 00:03:22,500 --> 00:03:27,750 | |
| This one specifically has 512 nodes, and then it goes through ReLU again. | |
| 55 | |
| 00:03:27,810 --> 00:03:33,780 | |
| And then we have it connected here to the number of classes, which is 10, which we defined above, and | |
| 56 | |
| 00:03:33,780 --> 00:03:36,260 | |
| we use a different optimizer. | |
| 57 | |
| 00:03:36,660 --> 00:03:41,370 | |
| RMSprop, and it's defined outside here. We compile the model and print it. | |
| 58 | |
| 00:03:41,400 --> 00:03:44,120 | |
| So let's print this and see how it looks. | |
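From the description, the model appears to be essentially the classic Keras CIFAR-10 CNN. The sketch below is my reconstruction, so the exact filter counts, dropout rates, and learning rate in the actual notebook may differ:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Dropout,
                                     Flatten, MaxPooling2D)
from tensorflow.keras.optimizers import RMSprop

num_classes = 10

model = Sequential([
    Input(shape=(32, 32, 3)),
    # First pair of convolutions, 32 filters each, activation as a separate layer.
    Conv2D(32, (3, 3), padding="same"),
    Activation("relu"),
    Conv2D(32, (3, 3)),
    Activation("relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    # Second pair of convolutions, 64 filters each, activation passed inline.
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    # Flatten into the large 512-node dense layer, then the 10-class output.
    Flatten(),
    Dense(512, activation="relu"),
    Dropout(0.5),
    Dense(num_classes, activation="softmax"),
])

# RMSprop defined outside the compile call, as in the video.
optimizer = RMSprop(learning_rate=1e-4)
model.compile(optimizer=optimizer,
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

With these exact layer sizes the summary reports about 1.25 million parameters, most of them in the 512-node dense layer.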
| 59 | |
| 00:03:44,520 --> 00:03:45,210 | |
| Very nice. | |
| 60 | |
| 00:03:45,210 --> 00:03:48,740 | |
| And as you can see, even though this model is more complicated, | |
| 61 | |
| 00:03:48,750 --> 00:03:51,890 | |
| the number of parameters doesn't increase that much, which is good to know. | |
| 62 | |
| 00:03:52,140 --> 00:03:56,630 | |
| It's always good to check the number of parameters in the model, because the more parameters there | |
| 63 | |
| 00:03:56,640 --> 00:03:58,700 | |
| are usually the longer it takes to train. | |
| 64 | |
| 00:03:59,740 --> 00:04:04,440 | |
| And you can train your model here; we just renamed it so I don't overwrite the previously trained model, | |
| 65 | |
| 00:04:05,260 --> 00:04:06,800 | |
| and basically run this. | |
| 66 | |
| 00:04:06,850 --> 00:04:08,160 | |
| I'm going to run this now. | |
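The training call itself is just `model.fit`. Here is a runnable miniature with tiny random stand-in data and a deliberately trivial model so it finishes in seconds; the notebook of course fits its full CNN on the real CIFAR-10 images:

```python
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense, Flatten

batch_size, epochs, num_classes = 32, 1, 10

# Tiny random stand-in data with CIFAR-10's image shape and one-hot labels.
x_train = np.random.rand(64, 32, 32, 3).astype("float32")
y_train = np.eye(num_classes)[np.random.randint(0, num_classes, 64)]

# A deliberately tiny model so this sketch runs in about a second.
model = Sequential([
    Input(shape=(32, 32, 3)),
    Flatten(),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Keep the returned History object; its per-epoch metrics feed the charts later.
history = model.fit(x_train, y_train,
                    batch_size=batch_size, epochs=epochs,
                    validation_split=0.25, verbose=0)
print(sorted(history.history.keys()))
```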
| 67 | |
| 00:04:10,340 --> 00:04:11,290 | |
| There we go. | |
| 68 | |
| 00:04:11,340 --> 00:04:16,920 | |
| So as you can see, because the images are bigger, even though this dataset as you saw has only 50,000 | |
| 69 | |
| 00:04:16,940 --> 00:04:21,240 | |
| images. I said 60,000 before, but that's because I added the test and training together. | |
| 70 | |
| 00:04:22,010 --> 00:04:30,130 | |
| So we are training on 50,000 images here, and it's honestly not going that slow for CPU training. | |
| 71 | |
| 00:04:30,230 --> 00:04:36,380 | |
| We're going to do maybe about 10 epochs. I believe I set it at only one epoch actually, just for experimental | |
| 72 | |
| 00:04:36,380 --> 00:04:40,100 | |
| purposes, and you're probably already seeing that. | |
| 73 | |
| 00:04:40,490 --> 00:04:46,560 | |
| Basically an untrained classifier, basically guessing, would give you 10 percent accuracy, a one-in-ten chance, | |
| 74 | |
| 00:04:47,120 --> 00:04:53,210 | |
| and we're already close to 20 percent accuracy, a one-in-five chance of getting it right. | |
| 75 | |
| 00:04:53,250 --> 00:04:56,570 | |
| On the training dataset you can see our model is slowly improving, and we have a long way to | |
| 76 | |
| 00:04:56,570 --> 00:04:57,160 | |
| go. | |
| 77 | |
| 00:04:57,170 --> 00:04:58,510 | |
| We're just one epoch in. | |
| 78 | |
| 00:04:58,790 --> 00:05:03,040 | |
| So I'll leave this for you as an exercise. | |
| 79 | |
| 00:05:03,100 --> 00:05:08,450 | |
| I want you to actually train this for different amounts of epochs, maybe even change up some parameters | |
| 80 | |
| 00:05:08,450 --> 00:05:09,940 | |
| here, and start playing with it. | |
| 81 | |
| 00:05:09,950 --> 00:05:15,650 | |
| It's quite fun playing with learning rates as well, and you can change the decay. | |
| 82 | |
| 00:05:15,650 --> 00:05:17,480 | |
| Basically, how much the learning rate decreases. | |
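As a sketch of this decay idea: older Keras optimizers accepted a `decay` argument, while current Keras expresses the same thing as a learning-rate schedule. `ExponentialDecay` below is one such option, and all the numbers are made up:

```python
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.optimizers.schedules import ExponentialDecay

# Start at 1e-3 and multiply the learning rate by 0.9 every 1000 steps.
schedule = ExponentialDecay(initial_learning_rate=1e-3,
                            decay_steps=1000,
                            decay_rate=0.9)

# Pass the schedule where a fixed learning rate would normally go.
optimizer = RMSprop(learning_rate=schedule)

# The schedule can be evaluated at any training step to see the decayed rate.
print(float(schedule(0)))     # 0.001
print(float(schedule(1000)))  # 0.0009
```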
| 83 | |
| 00:05:17,480 --> 00:05:20,510 | |
| I'll explain these concepts later on in other slides. | |
| 84 | |
| 00:05:20,600 --> 00:05:26,030 | |
| But for now, just know that we have a variety of things we can tweak, and deep learning is basically, | |
| 85 | |
| 00:05:26,540 --> 00:05:32,750 | |
| as people say, an art form, and it kind of is, because there are so many variations that are dependent | |
| 86 | |
| 00:05:32,750 --> 00:05:33,550 | |
| on each other. | |
| 87 | |
| 00:05:33,980 --> 00:05:40,520 | |
| You can simply change some layers here, the number of layers, change sizes, change this, and there's no hard | |
| 88 | |
| 00:05:40,520 --> 00:05:43,930 | |
| science that defines what to do to get the best results. | |
| 89 | |
| 00:05:44,060 --> 00:05:49,490 | |
| Mainly because it all depends on your dataset and your dataset is basically naturally occurring images | |
| 90 | |
| 00:05:49,940 --> 00:05:53,190 | |
| which have so much naturally occurring variety in them. | |
| 91 | |
| 00:05:53,570 --> 00:05:58,970 | |
| So once you understand your data, you can start figuring out how you should treat these parameters, | |
| 92 | |
| 00:05:59,470 --> 00:06:04,700 | |
| and I'll discuss these things later on in the slides as we build more and more complicated classifiers. | |
| 93 | |
| 00:06:07,430 --> 00:06:11,730 | |
| So I've left the code to plot your charts here, and you can run some tests as well. | |
| 94 | |
| 00:06:11,730 --> 00:06:14,440 | |
| So for now I'm going to just stop this quickly. | |
| 95 | |
| 00:06:17,950 --> 00:06:24,970 | |
| What I'm going to do is just load, hopefully with all the imports I need, a little model I trained | |
| 96 | |
| 00:06:24,990 --> 00:06:27,330 | |
| before and let's see if it works. | |
| 97 | |
| 00:06:28,590 --> 00:06:31,110 | |
| Now it did not work because X isn't defined. | |
| 98 | |
| 00:06:31,110 --> 00:06:36,780 | |
| So what we can do is just quickly run this block here, because x_test and x_train and all those things | |
| 99 | |
| 00:06:36,780 --> 00:06:37,310 | |
| are here. | |
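Loading a previously trained model and predicting class names looks roughly like this. Since the presenter's trained weights aren't available, the sketch saves and reloads a tiny untrained model, and the file name is hypothetical:

```python
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import load_model

class_names = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]

# Stand-in for the trained model; in the notebook this was the CNN trained on CIFAR-10.
model = Sequential([
    Input(shape=(32, 32, 3)),
    Flatten(),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.save("cifar10_demo.keras")  # hypothetical file name

# Reload the saved model and predict on a batch of (random stand-in) test images.
loaded = load_model("cifar10_demo.keras")
x_test = np.random.rand(10, 32, 32, 3).astype("float32")
probs = loaded.predict(x_test, verbose=0)
predicted = [class_names[i] for i in probs.argmax(axis=1)]
print(predicted)
```

In the notebook you would skip the save step and just point `load_model` at the file produced by the earlier training run.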
| 100 | |
| 00:06:39,000 --> 00:06:42,400 | |
| And we can run this and this brings up this window. | |
| 101 | |
| 00:06:42,570 --> 00:06:44,430 | |
| Here it is. | |
| 102 | |
| 00:06:44,430 --> 00:06:45,640 | |
| So clearly this isn't a frog. | |
| 103 | |
| 00:06:45,690 --> 00:06:49,550 | |
| This is probably a bird. Then on to the next. | |
| 104 | |
| 00:06:49,560 --> 00:06:50,510 | |
| This is not a dog. | |
| 105 | |
| 00:06:50,760 --> 00:06:51,550 | |
| This is a dog. | |
| 106 | |
| 00:06:51,580 --> 00:06:53,010 | |
| This is a horse not a cat. | |
| 107 | |
| 00:06:53,010 --> 00:06:53,510 | |
| This is a frog. | |
| 108 | |
| 00:06:53,520 --> 00:06:55,130 | |
| Yes automobile. | |
| 109 | |
| 00:06:55,140 --> 00:06:55,780 | |
| Yes. | |
| 110 | |
| 00:06:55,980 --> 00:06:57,420 | |
| Yes automobile. | |
| 111 | |
| 00:06:57,570 --> 00:06:59,290 | |
| Not a frog. | |
| 112 | |
| 00:07:00,180 --> 00:07:05,580 | |
| So we can see, based on these 10 samples, this model I trained here before, which I probably trained | |
| 113 | |
| 00:07:05,610 --> 00:07:09,560 | |
| for maybe just 10 epochs, has about 50 percent accuracy. | |
| 114 | |
| 00:07:09,660 --> 00:07:16,140 | |
| So let's see if you can beat that; accuracy on CIFAR has reached very, very high, like 99 percent. | |
| 115 | |
| 00:07:16,140 --> 00:07:17,250 | |
| Let's see if you can get that. | |
| 116 | |
| 00:07:17,250 --> 00:07:18,510 | |
| Try it out on your own. | |
| 117 | |
| 00:07:18,750 --> 00:07:19,570 | |
| Good luck. | |