1
00:00:00,300 --> 00:00:06,120
So with that, we move on to a simple step where we actually build and compile our model in Keras, and this
2
00:00:06,120 --> 00:00:08,950
is the model we're going to build, as you remember.
3
00:00:09,000 --> 00:00:15,510
We showed you this before; earlier we saw how we actually do max pooling, Flatten, Dense layers, and
4
00:00:15,510 --> 00:00:16,080
stuff.
5
00:00:16,080 --> 00:00:22,080
This is actually a quite useful diagram I created that shows you how to basically layer and add each piece
6
00:00:22,400 --> 00:00:27,090
here, including dropout and including a Flatten layer here as well.
7
00:00:27,420 --> 00:00:32,400
So let's go ahead into Keras, run this, and compile our model.
8
00:00:32,800 --> 00:00:35,110
OK, so welcome back to the Python notebook.
9
00:00:35,140 --> 00:00:40,830
So I'm pretty sure this code looks familiar because you would have seen it in the presentation slides
10
00:00:40,870 --> 00:00:41,660
earlier.
11
00:00:42,050 --> 00:00:45,580
So remember I said we're building a simple convolutional neural network.
12
00:00:45,580 --> 00:00:46,420
This is how we do it.
13
00:00:46,450 --> 00:00:48,720
We have 32 filters in a Conv2D layer.
14
00:00:49,050 --> 00:00:51,730
Kernel size is 3 by 3, activation ReLU.
15
00:00:52,090 --> 00:00:55,450
We have the input shape, which we defined above.
16
00:00:55,450 --> 00:01:01,930
Here, actually, it was in this block right here. And then moving on, we have a second convolutional layer; the same, we can
17
00:01:01,930 --> 00:01:03,680
also use a ReLU activation.
18
00:01:03,760 --> 00:01:06,070
We have the max pooling downsampling layer.
19
00:01:06,280 --> 00:01:09,000
We now use dropout, and dropout is simple to implement.
20
00:01:09,040 --> 00:01:14,190
You just model.add a Dropout, and specifically we use a p of 0.25 here.
21
00:01:14,620 --> 00:01:21,730
We then flatten this layer and have a fully connected Dense layer with 128 nodes and ReLU. We add another
22
00:01:21,730 --> 00:01:26,740
layer of dropout, and we have a higher dropout here. Note you can play with these values and see what
23
00:01:26,740 --> 00:01:27,940
gives you the best results.
24
00:01:27,940 --> 00:01:33,110
You tend to start dropouts smaller on top and get larger, but you don't ever really go larger than point
25
00:01:33,110 --> 00:01:33,950
five.
26
00:01:34,540 --> 00:01:40,840
And then this is our last Dense layer, which is connected to 10 nodes, which is the number of classes, and the number
27
00:01:40,840 --> 00:01:43,340
of classes was defined right here.
28
00:01:43,990 --> 00:01:51,880
That's basically the length, the number of columns we should see in this array, and that's it.
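The layer stack described above can be sketched as follows. This is a minimal sketch assuming the standard Keras MNIST architecture: the video names the 32-filter first convolution, 3 by 3 kernels, ReLU, the 0.25 and larger dropout rates, the 128-node Dense layer, and the 10-class output, but the second convolution's 64 filters and the softmax output are assumptions filled in from that standard example.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    # 32 filters, 3x3 kernel, ReLU, input shape defined earlier (28x28 grayscale)
    Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=(28, 28, 1)),
    # second convolution, also ReLU (64 filters assumed here)
    Conv2D(64, (3, 3), activation="relu"),
    # max pooling downsampling layer
    MaxPooling2D(pool_size=(2, 2)),
    # first dropout, p = 0.25
    Dropout(0.25),
    # flatten before the fully connected layers
    Flatten(),
    # fully connected Dense layer with 128 nodes
    Dense(128, activation="relu"),
    # higher dropout lower in the network, but not above 0.5
    Dropout(0.5),
    # final layer: one node per class
    Dense(10, activation="softmax"),
])
```

The dropout rates and the 128-node width are the values from the video; as it says, you can play with them and see what gives the best results.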
29
00:01:51,880 --> 00:01:54,310
So I talked about compiling a model.
30
00:01:54,310 --> 00:02:00,430
Now, when we do model.compile, basically we're taking these layers, creating a model, and then
31
00:02:00,430 --> 00:02:06,400
specifying what type of loss we are using, what type of optimizer we are using, and the metrics we need
32
00:02:06,400 --> 00:02:10,780
to look at, and these metrics are basically the metrics that will be output
33
00:02:10,780 --> 00:02:12,460
When we start to train our model.
34
00:02:12,580 --> 00:02:18,760
And by doing a model print here, we can print a model summary and take
35
00:02:18,760 --> 00:02:19,150
a look.
36
00:02:19,150 --> 00:02:20,460
So let's check it out.
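Compiling and printing the summary can be sketched as below. This uses a tiny stand-in model just to keep the snippet self-contained; the loss and optimizer names are assumptions for illustration, since the video doesn't read out which ones the notebook uses.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Tiny stand-in model; in the notebook this is the CNN built above.
model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])

# compile() turns the layer stack into a trainable model by specifying
# the loss, the optimizer, and the metrics to report during training.
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

# Print each layer, its output shape, and its parameter count.
model.summary()
```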
37
00:02:22,310 --> 00:02:23,830
This is very cool here.
38
00:02:23,870 --> 00:02:25,540
Let's go through this quickly.
39
00:02:25,550 --> 00:02:33,880
So when you print the model, we actually see each layer we have; it's each layer we specified above here.
40
00:02:34,100 --> 00:02:37,580
And what's cool about this is that it gives you the output shape.
41
00:02:37,580 --> 00:02:43,640
So we know the input shape coming into this first layer was 28 by 28
42
00:02:43,730 --> 00:02:47,230
by 1, and we have 32 filters here.
43
00:02:47,520 --> 00:02:48,110
So.
44
00:02:48,180 --> 00:02:55,610
The output shape, if you remember correctly, is going to be 26 by 26 by 32; the 32 tells you the number of filters, and
45
00:02:55,610 --> 00:02:59,280
this is the number of parameters in this layer here.
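Those first-row numbers can be checked by hand. A quick sketch of the arithmetic for a "valid" 3 by 3 convolution with 32 filters over a 28 by 28 by 1 input (the video points at the parameter count in the summary without reading it out; 320 is what the formula gives):

```python
def conv_output_size(n, k):
    # "valid" convolution with stride 1: the output shrinks by kernel_size - 1
    return n - k + 1

def conv_params(filters, k, in_channels):
    # each filter has k*k*in_channels weights plus one bias
    return filters * (k * k * in_channels + 1)

print(conv_output_size(28, 3))  # 26 -> the 26x26 output shape
print(conv_params(32, 3, 1))    # 320 parameters in the first Conv2D layer
```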
46
00:02:59,300 --> 00:03:00,680
So this is pretty cool.
47
00:03:00,680 --> 00:03:03,540
So now we have the second convolutional layer here.
48
00:03:03,890 --> 00:03:08,150
Same thing again, but now a lot more parameters going forward.
49
00:03:08,150 --> 00:03:14,930
We have our max pooling here, which is this process, no parameters; dropout again; then flatten, and flatten
50
00:03:14,930 --> 00:03:19,760
has an output shape of this; as you can see, it's just this expanded into this.
51
00:03:19,990 --> 00:03:25,810
We have a fully connected Dense layer here, and this is where the bulk of the parameters actually exist.
52
00:03:26,240 --> 00:03:30,870
And we have dropout again, and then a final Dense layer connected to the 10 classes here.
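Why the bulk of the parameters sits in that Dense layer is easy to verify by hand. A quick sketch of the arithmetic, assuming the second convolution has 64 filters (as in the standard Keras MNIST example; the video doesn't state it):

```python
# Shapes through the network:
# 28x28x1 -> 3x3 conv -> 26x26x32 -> 3x3 conv -> 24x24x64 -> 2x2 pool -> 12x12x64
flat = 12 * 12 * 64               # Flatten output: 9216 values

dense_params = flat * 128 + 128   # weights + biases of the 128-node Dense layer
out_params = 128 * 10 + 10        # final 10-class Dense layer

print(flat)          # 9216
print(dense_params)  # 1179776 -- the vast majority of the model's parameters
print(out_params)    # 1290
```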
53
00:03:31,460 --> 00:03:36,140
So this gives us the total number of parameters and the total number of trainable parameters.
54
00:03:36,140 --> 00:03:38,640
We have zero non-trainable parameters.
55
00:03:38,660 --> 00:03:42,420
We will come to what non-trainable parameters are later on.
56
00:03:42,530 --> 00:03:44,380
And so we're now ready to train our model.
57
00:03:44,420 --> 00:03:45,410
So let's get to it.