1
00:00:00,750 --> 00:00:06,480
Hi and welcome to Chapter 15.3, where we're about to build a flower classifier, and we're going
2
00:00:06,480 --> 00:00:08,580
to use transfer learning to do this.
3
00:00:08,580 --> 00:00:10,280
So let's take a look at how we actually build it.
4
00:00:10,370 --> 00:00:13,230
What is our flower dataset,
5
00:00:13,230 --> 00:00:14,920
I should say.
6
00:00:15,060 --> 00:00:20,150
It comes from Oxford University's Visual Geometry Group and is called Flowers 17.
7
00:00:20,310 --> 00:00:26,270
And that's because there are 17 categories of flowers, and the number of images in each class of the set is
8
00:00:26,270 --> 00:00:27,200
not that large.
9
00:00:28,110 --> 00:00:33,900
So here are some sample images from the Flowers 17 dataset, and this is the web page
10
00:00:33,900 --> 00:00:34,940
from Oxford University.
11
00:00:34,950 --> 00:00:40,170
And this is the link you can go to if you want to download it from there directly, or you can use the link
12
00:00:40,230 --> 00:00:44,580
I have on the left here on the Udemy site panel.
13
00:00:44,640 --> 00:00:49,440
Please use that link to download it, because I've already preprocessed the data into a format
14
00:00:49,470 --> 00:00:54,330
that is easily imported into Keras. If you download it from the Oxford University site, you're going to have
15
00:00:54,330 --> 00:00:55,760
to do the preprocessing yourself.
16
00:00:55,770 --> 00:01:00,630
And if you're a beginner, you're not going to find that fun at all, although it's a good
17
00:01:00,630 --> 00:01:02,740
exercise to do sometimes.
18
00:01:03,600 --> 00:01:08,790
So anyway, our approach to this problem is that we're going to use a pre-trained VGG16 model
19
00:01:09,540 --> 00:01:14,490
with all of its weights frozen, and we're only going to train the top head of
20
00:01:14,490 --> 00:01:17,490
the model with a final output of 17 classes.
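That approach can be sketched as below, assuming the tf.keras API. The pooling choice is my own assumption (a global-average-pooling head is consistent with the roughly 135 thousand trainable parameters mentioned later), and weights=None stands in for the ImageNet weights to avoid the download in this sketch.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Load the VGG16 convolutional base without its classifier head.
# The video uses weights="imagenet"; weights=None builds the same
# architecture without fetching the pre-trained weights.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Freeze every layer of the base so only the new head is trained.
for layer in base.layers:
    layer.trainable = False

# Attach a small classification head ending in 17 flower classes.
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
outputs = Dense(17, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)
```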
21
00:01:17,490 --> 00:01:21,370
So let's go back to our IPython notebook and get this done.
22
00:01:21,710 --> 00:01:22,100
OK.
23
00:01:22,140 --> 00:01:24,750
So welcome back to our virtual machine.
24
00:01:24,780 --> 00:01:28,820
I hope you downloaded the flowers dataset and extracted it to this folder here.
25
00:01:29,040 --> 00:01:34,170
That's this folder called transfer learning and fine-tuning, and I've placed it right here, so we can quickly
26
00:01:34,170 --> 00:01:37,910
inspect it by taking a look at some of those pictures.
27
00:01:38,330 --> 00:01:42,120
Let's put it in thumbnail view, and it looks quite nice.
28
00:01:42,120 --> 00:01:46,280
So as you can see we don't have that many images in this data set.
29
00:01:46,380 --> 00:01:51,380
So let's see what kind of accuracy we can get with transfer learning on the VGG model.
30
00:01:51,390 --> 00:01:53,380
So let's go to it here.
31
00:01:53,790 --> 00:02:02,170
So now let me just close some of these windows I have open, and let's quickly go back to this one here so you
32
00:02:02,170 --> 00:02:03,350
can actually see how I do it.
33
00:02:03,360 --> 00:02:05,080
It's Chapter 15.
34
00:02:05,080 --> 00:02:07,090
And we go to making a flower classifier.
35
00:02:07,210 --> 00:02:08,440
That's this file here.
36
00:02:08,830 --> 00:02:10,260
So now that we're in the file.
37
00:02:10,300 --> 00:02:11,800
Let's take a look at what's going on.
38
00:02:11,800 --> 00:02:15,770
So we import the VGG16 model; that's easily done here.
39
00:02:16,120 --> 00:02:23,470
VGG was designed to work on 224 by 224 pixel image input sizes.
40
00:02:23,500 --> 00:02:26,450
So let's keep the standard size and go forward.
41
00:02:26,530 --> 00:02:32,200
So let's load the model with the ImageNet weights, but without the top layer,
42
00:02:32,410 --> 00:02:34,360
I should say. So we do that.
43
00:02:34,420 --> 00:02:36,960
And let's just print out the layers in this model.
44
00:02:37,060 --> 00:02:37,560
OK.
45
00:02:37,930 --> 00:02:44,740
So as you can see, the model is loaded here, and by default all the layers are trainable.
46
00:02:44,740 --> 00:02:52,370
Trainable is true; that means by default, when you load VGG, all the weights are trainable.
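A minimal way to see those default flags for yourself (a sketch assuming tf.keras; weights=None skips the ImageNet download):

```python
from tensorflow.keras.applications import VGG16

# Build the VGG16 base; every layer starts out with trainable == True.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Print each layer's index, type, and trainable flag.
for i, layer in enumerate(base.layers):
    print(i, layer.__class__.__name__, layer.trainable)
```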
47
00:02:52,630 --> 00:02:55,090
So we now have to set this flag from true to false.
48
00:02:55,090 --> 00:02:56,490
So that's what we do here.
49
00:02:56,860 --> 00:03:03,010
So we load it without the top head, with ImageNet weights, and we set all the layers as non-trainable; we set this flag
50
00:03:03,090 --> 00:03:04,210
to false.
51
00:03:04,270 --> 00:03:08,030
So let's do this quickly and that's done there.
52
00:03:08,520 --> 00:03:13,450
And now let's create the function where we add a fully connected head.
53
00:03:13,510 --> 00:03:17,960
These are the layers we now add back onto the top of our VGG network.
54
00:03:18,190 --> 00:03:24,340
Notice this is different from the layers we added for the MobileNet network, and that's because VGG has a different
55
00:03:24,340 --> 00:03:26,000
design to MobileNet.
56
00:03:26,020 --> 00:03:30,190
So you're going to have to look at the final design of VGG and replace the layers as we do here.
57
00:03:30,340 --> 00:03:35,700
And this here, this dense layer, takes a number of dense units.
58
00:03:36,190 --> 00:03:38,440
By default we are going to use 256.
59
00:03:38,440 --> 00:03:47,550
However, this function allows us to specify it; here we can pass in 128 and it would be 128 units.
60
00:03:47,890 --> 00:03:50,480
So let's leave the default, right?
61
00:03:50,500 --> 00:03:57,220
And then we set dropout, we set these things, and we input the number of classes, which is 17 for the flowers
62
00:03:57,220 --> 00:04:01,450
dataset; 17 classes should make sense, you know.
63
00:04:01,780 --> 00:04:04,730
And we just concatenate the models here,
64
00:04:05,110 --> 00:04:08,800
or rather the parts of the model, to get the full model, and then print it out.
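The head-builder and the joining of the two parts can be sketched like this. The function and argument names are my own, and the dropout rate is an assumption; the 256-unit default and the 17 output classes come from the video.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

def add_top_head(base, num_classes, dense_units=256, dropout_rate=0.5):
    """Bolt a small fully connected head onto a frozen convolutional base."""
    x = GlobalAveragePooling2D()(base.output)
    x = Dense(dense_units, activation="relu")(x)
    x = Dropout(dropout_rate)(x)
    outputs = Dense(num_classes, activation="softmax")(x)
    return Model(inputs=base.input, outputs=outputs)

# weights=None here only to avoid the ImageNet download in this sketch.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False

model = add_top_head(base, num_classes=17)
model.summary()
```

With this head, the summary reports on the order of 14.8 million total parameters, of which only the head's roughly 135 thousand are trainable, matching the counts quoted below.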
65
00:04:08,800 --> 00:04:13,690
So let's take a look at it, and we see there are 14 million parameters.
66
00:04:13,880 --> 00:04:18,150
That's less than VGG19.
67
00:04:18,440 --> 00:04:23,180
And with trainable parameters of only 135 thousand, that's quite good.
68
00:04:23,720 --> 00:04:25,060
So let me just run this.
69
00:04:25,130 --> 00:04:33,150
So that's refreshed, and now we just do the data generators here, for the flowers validation and flowers train folders,
70
00:04:33,250 --> 00:04:35,290
and we set our batch size.
71
00:04:35,320 --> 00:04:38,210
We can actually just keep it at 16.
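The generator setup can be sketched as below. The folder paths are hypothetical; substitute wherever the flowers dataset was extracted (one sub-folder per class inside each directory).

```python
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical paths to the extracted dataset.
train_dir = "flowers17/train"
validation_dir = "flowers17/validation"

# Rescale pixel values from [0, 255] to [0, 1].
train_datagen = ImageDataGenerator(rescale=1.0 / 255)
validation_datagen = ImageDataGenerator(rescale=1.0 / 255)

# Guard so the sketch still runs where the dataset is not present.
if os.path.isdir(train_dir):
    train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(224, 224),
        batch_size=16,          # the batch size kept in the video
        class_mode="categorical")
    validation_generator = validation_datagen.flow_from_directory(
        validation_dir,
        target_size=(224, 224),
        batch_size=16,
        class_mode="categorical")
```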
72
00:04:38,490 --> 00:04:38,910
All right.
73
00:04:38,950 --> 00:04:43,140
And keep going here.
74
00:04:43,260 --> 00:04:49,500
So now we declare our callbacks right here, and we create a callback array, which we pass
75
00:04:49,500 --> 00:04:51,740
in here and let's run this now.
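A typical callback array for this kind of run might look like the following. The checkpoint filename and the patience value are assumptions; the transcript only mentions early stopping with a patience setting monitoring the validation metric.

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Save the weights whenever validation loss improves (hypothetical filename).
checkpoint = ModelCheckpoint(
    "flowers_vgg16.h5", monitor="val_loss", save_best_only=True, verbose=1)

# Stop training once validation loss stops improving for a few epochs.
early_stop = EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True, verbose=1)

callbacks = [checkpoint, early_stop]
# Passed into training as, e.g.:
# model.fit(train_generator, validation_data=validation_generator,
#           epochs=25, callbacks=callbacks)
```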
76
00:04:51,850 --> 00:04:55,430
So I'll leave you to run this; I've run this already.
77
00:04:55,450 --> 00:04:56,800
And it takes quite some time.
78
00:04:57,040 --> 00:05:01,540
But what I want you to observe is: look at the validation accuracy over the 25 epochs.
79
00:05:01,540 --> 00:05:06,230
The highest we got was actually 95 percent, which is quite good.
80
00:05:06,820 --> 00:05:11,500
If you keep going, you can see it even passed 95.3 at one point.
81
00:05:11,560 --> 00:05:12,990
So this is quite good.
82
00:05:13,240 --> 00:05:19,370
So we've got 95 percent accuracy using transfer learning with VGG16.
83
00:05:19,630 --> 00:05:22,710
So let's keep going let's see what else we can do.
84
00:05:22,750 --> 00:05:24,080
OK.
85
00:05:24,430 --> 00:05:26,020
So this section here.
86
00:05:26,020 --> 00:05:27,620
Can we speed this up.
87
00:05:27,730 --> 00:05:31,060
So let's try resizing the images to 64 by 64.
88
00:05:31,200 --> 00:05:34,820
You remember it was designed to work on 224 by 224.
89
00:05:34,910 --> 00:05:37,660
Now let's drop this to 64.
90
00:05:37,930 --> 00:05:44,100
So let's use this command, setting the input size
91
00:05:44,100 --> 00:05:49,660
now to 64 by 64.
92
00:05:49,780 --> 00:05:55,670
All right, and do the standard thing where we load with ImageNet weights, we don't include the top,
93
00:05:55,780 --> 00:06:01,810
we specify the new shape, and we set the layers' trainable flags to false.
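The smaller-input variant can be sketched the same way (again with weights=None here only to avoid the download in this sketch; the video loads the ImageNet weights):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Same recipe as before, but with a 64x64 input instead of 224x224.
base = VGG16(weights=None, include_top=False, input_shape=(64, 64, 3))
for layer in base.layers:
    layer.trainable = False

x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
outputs = Dense(17, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)
```

Shrinking the input this way cuts the convolutional work per image by more than an order of magnitude, which is why training speeds up at the cost of some accuracy.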
94
00:06:02,190 --> 00:06:04,040
So that's good.
95
00:06:04,050 --> 00:06:07,050
And now let's move on to this.
96
00:06:07,460 --> 00:06:13,330
Let's actually start training this model; as we can see, this model has a different input size.
97
00:06:14,180 --> 00:06:16,010
And let's see what we get.
98
00:06:16,010 --> 00:06:18,940
So I've trained this before so you don't have to do it.
99
00:06:18,950 --> 00:06:26,180
So what I want you to see, though, is that previously I actually did not
100
00:06:26,180 --> 00:06:30,130
use the callbacks, as you can see above, but I should have used them.
101
00:06:30,410 --> 00:06:32,490
But what I've done now is use them, as we should.
102
00:06:32,540 --> 00:06:41,660
So we see some callback output from early stopping; we see the monitored value is not increasing, and the patience setting is
103
00:06:41,660 --> 00:06:42,310
good.
104
00:06:42,320 --> 00:06:45,740
So in the end, epoch 12 is what we use.
105
00:06:45,770 --> 00:06:49,530
So let's scroll back to epoch 12, just above.
106
00:06:49,920 --> 00:06:53,210
That's this one 82 percent.
107
00:06:53,230 --> 00:06:58,340
So 82 percent was where we had our lowest validation loss and our best accuracy.
108
00:06:58,340 --> 00:07:06,500
So you can see, by resizing the images to 64 by 64, which is a substantial decrease in size from 224 by
109
00:07:06,500 --> 00:07:09,580
224, we still got it into the eighties.
110
00:07:09,860 --> 00:07:10,860
How much was it again?
111
00:07:11,520 --> 00:07:11,850
Sorry.
112
00:07:11,950 --> 00:07:13,930
82 percent accuracy.
113
00:07:14,060 --> 00:07:20,570
So that's not too bad, to be fair. Actually, sorry, 86 percent accuracy is what we got; that was fifteen point five
114
00:07:20,570 --> 00:07:22,150
six five two.
115
00:07:22,370 --> 00:07:22,730
Right.
116
00:07:22,730 --> 00:07:24,540
So that is actually this one.
117
00:07:25,010 --> 00:07:26,140
So yep.
118
00:07:26,150 --> 00:07:27,620
So this is good.
119
00:07:27,710 --> 00:07:29,960
It's not great, but it's pretty good.
|