1
00:00:00,160 --> 00:00:00,540
OK.
2
00:00:00,570 --> 00:00:04,550
So welcome to 8.7, where we start training our classifier.
3
00:00:04,650 --> 00:00:09,500
So let's switch into the Python notebook now and actually get started doing some training.
4
00:00:09,900 --> 00:00:10,290
OK.
5
00:00:10,290 --> 00:00:15,150
So welcome back to the Python notebook, where we're going to start training our model.
6
00:00:15,270 --> 00:00:17,240
So let's go through this line by line.
7
00:00:17,280 --> 00:00:17,850
All right.
8
00:00:18,120 --> 00:00:23,530
So remember batch size, which is basically how many images we process in one batch.
9
00:00:23,760 --> 00:00:29,610
Effectively, this depends on how much RAM you have available, because this is a small data set.
10
00:00:29,610 --> 00:00:33,480
You can probably even get away with going up to 128.
11
00:00:33,520 --> 00:00:39,300
However, let's put it at 32 to be safe, in case your system has low specs.
12
00:00:39,330 --> 00:00:44,670
For epochs, usually we can use ten or even twenty-five, but for this demo let's just use one.
13
00:00:44,670 --> 00:00:48,140
That way you can actually see it run and see what happens at the end.
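As a quick aside on what these two numbers control: batch size sets how many images go into each weight update, and epochs sets how many full passes we make over the data. With the 60,000-image training set used later in the video, that works out to:

```python
import math

TRAIN_IMAGES = 60_000  # size of the training set mentioned later in the video
BATCH_SIZE = 32        # images processed per weight update
EPOCHS = 1             # full passes over the data

# Each epoch needs one weight update per batch of images.
steps_per_epoch = math.ceil(TRAIN_IMAGES / BATCH_SIZE)
total_updates = steps_per_epoch * EPOCHS
print(steps_per_epoch, total_updates)  # 1875 updates in one epoch
```

So even a single epoch gives the model 1,875 chances to adjust its weights, which is why one epoch is enough for this demo.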
14
00:00:48,480 --> 00:00:54,150
So now let's talk about model.fit. You have something called history = model.fit.
15
00:00:54,240 --> 00:00:56,760
Now, ignore history for a second.
16
00:00:56,810 --> 00:00:58,550
The key part here is model.fit.
17
00:00:58,560 --> 00:01:01,740
You don't actually need this history stuff here.
18
00:01:01,740 --> 00:01:04,710
This history is mainly needed to plot our graphs afterward.
19
00:01:05,100 --> 00:01:08,360
But for now, you can train the model without doing this.
20
00:01:08,370 --> 00:01:11,040
You can simply run this line and it trains.
21
00:01:11,040 --> 00:01:18,480
So let's look at what inputs we need for model.fit. We need the training data, we need the training
22
00:01:18,480 --> 00:01:21,280
labels in the format we specified above.
23
00:01:21,570 --> 00:01:27,090
We need to specify the batch size, which is 32, the epochs, which would be one, and verbose.
24
00:01:27,270 --> 00:01:30,000
Basically that just tells us how much information we want to see.
25
00:01:30,040 --> 00:01:37,590
While training, I recommend verbose=1 because it's nice to look at and provides more information. And lastly,
26
00:01:37,590 --> 00:01:41,190
the validation data, which would be x_test and y_test.
27
00:01:41,550 --> 00:01:46,410
And this also has to appear in a tuple. As you can see, the brackets basically make sure it's
28
00:01:46,410 --> 00:01:50,270
a tuple, and that's what we fit the model on.
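Assuming a Keras/TensorFlow setup like the notebook's, here is a minimal runnable sketch of this fit call (plus the evaluate call discussed next). The tiny random arrays and the two-layer model are stand-in assumptions, not the video's actual data or architecture; only the fit arguments mirror what is described.

```python
import numpy as np
from tensorflow import keras

# Stand-in data: 28x28 images flattened to 784 features, one-hot labels
# for 10 classes. (Random placeholders for the real train and test sets.)
x_train = np.random.rand(256, 784).astype("float32")
y_train = keras.utils.to_categorical(np.random.randint(0, 10, 256), 10)
x_test = np.random.rand(64, 784).astype("float32")
y_test = keras.utils.to_categorical(np.random.randint(0, 10, 64), 10)

# A small assumed model, just so fit() has something to train.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# The call from the video: data, labels, batch size, epochs,
# verbosity, and the validation data passed as a tuple.
history = model.fit(x_train, y_train,
                    batch_size=32,
                    epochs=1,
                    verbose=1,
                    validation_data=(x_test, y_test))

# evaluate() returns the loss score and the accuracy score at the end.
score = model.evaluate(x_test, y_test, verbose=0)
print("test loss:", score[0], "test accuracy:", score[1])
```

The history object returned by fit() holds the per-epoch loss and accuracy values, which is what gets plotted later.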
29
00:01:50,280 --> 00:01:52,620
And lastly let's take a look at this line.
30
00:01:52,650 --> 00:01:57,440
This is where we get the evaluation metrics at the end of our training. The evaluate call basically just gives us
31
00:01:57,440 --> 00:02:01,810
a score at the end: a loss score and an accuracy score.
32
00:02:02,100 --> 00:02:05,390
So let's now run this and visualize how training is done.
33
00:02:07,900 --> 00:02:08,390
Amazing.
34
00:02:08,470 --> 00:02:12,800
So now, I mean, I find this quite interesting to watch, but you may be bored.
35
00:02:12,880 --> 00:02:19,660
But right now, this is basically going to show how many images we have seen so far in this
36
00:02:19,660 --> 00:02:26,040
data set, the estimated time to completion, around two and a half minutes left, and the loss.
37
00:02:26,050 --> 00:02:30,460
And you can see it continuously going down, slowly. Actually, it's pretty fast,
38
00:02:30,520 --> 00:02:34,280
to be fair. And accuracy, this is a part I like to watch as well.
39
00:02:34,450 --> 00:02:38,130
The accuracy is slowly going up the more and more data we fit to the model.
40
00:02:38,320 --> 00:02:42,010
And this is only one epoch, and we're already at 60 percent accuracy.
41
00:02:42,070 --> 00:02:45,940
So instead of waiting, I'm going to fast-forward the video while we watch this.
42
00:02:46,270 --> 00:02:52,810
You can actually see that after one epoch we reach quite good validation and training accuracy.
43
00:02:52,810 --> 00:02:58,390
However you don't see the validation accuracy and validation loss just yet because right now what's
44
00:02:58,390 --> 00:03:05,110
happening is that we're passing the training data set through the model, backpropagating,
45
00:03:05,440 --> 00:03:06,530
and updating the weights.
46
00:03:06,640 --> 00:03:08,560
And that is how these things are improving.
47
00:03:08,620 --> 00:03:09,590
Loss is going down.
48
00:03:09,640 --> 00:03:17,140
Accuracy is going up. And once one epoch is complete, we pass the test data into it, and then we see
49
00:03:17,200 --> 00:03:23,840
the validation loss, or test loss, and the validation accuracy, or test accuracy.
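The ordering just described can be sketched in plain Python; the function names here are illustrative stand-ins, not Keras internals:

```python
# During an epoch, every training batch triggers a forward pass, backprop,
# and a weight update; the validation pass runs only once the epoch is done.
def fit_one_epoch(train_batches, val_batches, train_step, eval_step):
    for batch in train_batches:
        train_step(batch)  # weights updated here; the live loss/accuracy
                           # numbers in the progress bar come from this loop
    # validation (test) metrics appear only at the end of the epoch:
    return [eval_step(batch) for batch in val_batches]

# Tiny demonstration with recording stand-ins for the two steps:
events = []
fit_one_epoch(["b1", "b2"], ["val"],
              train_step=lambda b: events.append(("train", b)),
              eval_step=lambda b: events.append(("eval", b)))
print(events)  # every training batch is processed before the validation batch
```

This is why the val_loss and val_accuracy columns only show up once the progress bar reaches the end of the epoch.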
50
00:03:23,860 --> 00:03:25,250
So let's wait until it's done.
51
00:03:30,790 --> 00:03:31,670
OK good.
52
00:03:31,670 --> 00:03:32,110
There we go.
53
00:03:32,120 --> 00:03:33,040
We're finished.
54
00:03:33,260 --> 00:03:38,090
So now let's look at the results here and try to figure out how to analyze what just happened.
55
00:03:38,090 --> 00:03:46,590
So as you can see, after we fed all 60,000 images into the model and trained it, updating the weights,
56
00:03:46,820 --> 00:03:52,580
we now have a loss that is decently low, 0.59, while the accuracy on our training data set
57
00:03:52,580 --> 00:03:56,870
is 0.8145, roughly 81 and a half percent.
58
00:03:57,070 --> 00:04:02,850
But interestingly, our validation loss is even lower than our training loss, and the validation
59
00:04:02,900 --> 00:04:05,140
accuracy is 93 percent.
60
00:04:05,150 --> 00:04:09,030
Now that's not normal but it's a good problem to have.
61
00:04:09,050 --> 00:04:15,310
It means that our model has generalized very well and is actually quite accurate on our test data.
62
00:04:15,320 --> 00:04:22,170
Now, this doesn't always happen; in most cases you have higher accuracy on the training set than on
63
00:04:22,340 --> 00:04:24,410
the validation set.
64
00:04:24,530 --> 00:04:26,270
However, let's not complain about this.
65
00:04:26,270 --> 00:04:27,850
This is a good problem to have.
66
00:04:28,280 --> 00:04:33,150
And basically, we have the loss and test accuracy printed in the summary here.
67
00:04:33,470 --> 00:04:37,820
They're the same values as up here, but we just run this again basically to see it.
68
00:04:37,870 --> 00:04:43,460
And if you have many epochs as well, this is the summary at the end of it all. Okay.
69
00:04:43,730 --> 00:04:47,670
So now we're going to move on to plotting our loss and accuracy charts.
70
00:04:47,900 --> 00:04:53,020
Basically, what you should do is run this for maybe 10 epochs so you can actually have nice graphs.
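As a preview of that plotting step, here is a minimal sketch using matplotlib. The accuracy values below are made-up stand-ins for what history.history would contain after 10 epochs, not real results:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file, no display window needed
import matplotlib.pyplot as plt

# Hypothetical values standing in for history.history after 10 epochs.
history = {
    "accuracy":     [0.70, 0.80, 0.84, 0.86, 0.87, 0.88, 0.89, 0.89, 0.90, 0.90],
    "val_accuracy": [0.82, 0.85, 0.87, 0.88, 0.88, 0.89, 0.89, 0.90, 0.90, 0.90],
}

epochs = range(1, len(history["accuracy"]) + 1)
plt.plot(epochs, history["accuracy"], label="training accuracy")
plt.plot(epochs, history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.savefig("accuracy.png")
```

With a real run, you would swap the dictionary for the history.history object returned by model.fit, and plot "loss" and "val_loss" the same way.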
|