# Deep Learning and Reinforcement Learning Week 5

**Course Name: Deep Learning and Reinforcement Learning**


#### These are the answers to the Deep Learning and Reinforcement Learning Week 5 Quiz

### Practice: Transfer Learning

**Q1. The main idea of transfer learning of a neural network is:**

To keep the early layers of a pre-trained network and re-train the later layers for a specific application.

To use the early layers to capture features that are more particular to the specific data you are trying to classify.

To train the early layers such that their weights have a higher impact on the final result.

To re-train the early layers for a specific application and transfer it to a different data set

**Answer: To keep the early layers of a pre-trained network and re-train the later layers for a specific application.**

**Q2. In the context of transfer learning, which is a guiding principle of fine tuning?**

Fine tuning the hyperparameters of the CNNs

Using data that is similar to the pre-trained network

Adjust the weights of the neural network

Increase the number of later layers iteratively

**Answer: Using data that is similar to the pre-trained network**

**Q3. In the context of transfer learning, what do we call the process in which you only train the last or a few layers instead of all layers of a neural network?**

Frozen layers

Frozen weights

Updated learning

Updated layers

**Answer: Frozen layers**


### Practice: Convolutional Neural Network Architectures

**Q1. This concept came as a solution to CNNs in which each layer is turned into branches of convolutions:**

Inception

Workload portion

Hebbian Principle

Network Concatenation

**Answer: Inception**

**Q2. Which CNN Architecture is considered the flash point for modern Deep Learning?**

AlexNet

VGG

Inception

ResNet

LeNet

**Answer: AlexNet**

**Q3. Which CNN Architecture can be described as a “simplified, deeper LeNet” in which the more layers, the better?**

Deep Lenet

AlexNet

VGG

Inception

ResNet

**Answer: VGG**


**Q4. Which CNN Architecture is the precursor of using convolutions to obtain better features and was first used to solve the MNIST data set?**

AlexNet

VGG

Inception

ResNet

LeNet

**Answer: LeNet**

**Q5. The motivation behind this CNN Architecture was to solve the inability of deep neural networks to fit or overfit the training data better when adding layers.**

LeNet

AlexNet

VGG

Inception

ResNet

**Answer: ResNet**

**Q6. This CNN Architecture keeps passing both the initial unchanged information and the transformed information to the next layer.**

LeNet

AlexNet

VGG

Inception

ResNet

**Answer: ResNet**
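The skip connection that ResNet introduces can be sketched in a few lines of NumPy; the function names here are illustrative, not from any particular library:

```python
import numpy as np

def residual_block(x, transform):
    """Toy residual block: pass the input through unchanged
    (the skip connection) and add the transformed signal on top."""
    return x + transform(x)

x = np.array([1.0, 2.0, 3.0])

# If the transformation outputs zeros, the block reduces to the
# identity, which is why adding ResNet layers cannot make fitting worse.
out = residual_block(x, lambda v: np.zeros_like(v))
```

Because the unchanged input always flows through, the layers only need to learn the residual correction on top of it.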


**Q7. Which activation function was notably used in AlexNet and contributed to its success?**

ReLU (Rectified Linear Unit)

Sigmoid

Tanh

Leaky ReLU

**Answer: ReLU (Rectified Linear Unit)**
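The ReLU activation that AlexNet popularized is a one-liner in NumPy:

```python
import numpy as np

def relu(x):
    """ReLU: pass positive values through, clamp negatives to zero."""
    return np.maximum(0.0, x)

acts = relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0]))
# Negative pre-activations become 0; positive ones are unchanged.
```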


### Practice: Regularization

**Q1. Which regularization technique can shrink the coefficients of the less important features to zero?**

L1

Dropout

L2

Batch Normalization

**Answer: L1**
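L1's shrink-to-exactly-zero behavior can be seen in its soft-thresholding (proximal) operator; a minimal NumPy sketch with made-up coefficient values:

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty: shrink every coefficient
    toward zero by lam, and set the small ones exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.9, -0.05, 0.02, -1.2])
shrunk = soft_threshold(w, lam=0.1)
# Coefficients smaller than lam in magnitude become exactly 0;
# the rest are pulled toward zero by lam.
```

This is the mechanism behind L1-based feature selection: unimportant coefficients do not just get small, they vanish.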

**Q2. (True/False) Batch Normalization tackles the internal covariate shift issue by always normalizing the input signals, thus accelerating the training of deep neural nets and increasing the generalization power of the networks.**

True

False

**Answer: True**
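The normalization Batch Normalization performs can be sketched in NumPy (the learnable scale and shift parameters are omitted for brevity):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch dimension to zero mean
    and unit variance; eps guards against division by zero."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # shifted, scaled inputs
normed = batch_norm(batch)
# Each column of `normed` now has mean ~0 and standard deviation ~1.
```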

**Q3. Regularization is used to mitigate which issue in model training?**

Both underfitting and overfitting

High bias and low variance

Overfitting

Underfitting

**Answer: Overfitting**


### Final Quiz

**Q1. (True/False) In Keras, the Dropout layer has an argument called rate, which is a probability that represents how often we want to invoke the layer in the training.**

True

False

**Answer: False**
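The statement is false because `rate` is the probability that each individual unit is dropped (set to zero) during training, not how often the layer is invoked. A NumPy sketch of inverted dropout makes the semantics concrete:

```python
import numpy as np

def dropout(x, rate, rng):
    """Inverted dropout: zero each unit with probability `rate`, and
    scale the survivors by 1/(1-rate) so the expected sum is unchanged."""
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(42)
x = np.ones(10_000)
y = dropout(x, rate=0.5, rng=rng)
dropped_fraction = np.mean(y == 0.0)   # close to the rate, ~0.5
```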

**Q2. What is a benefit of applying transfer learning to neural networks?**

Train early layers for specific applications and generalize that with later pre-trained layers.

Save early layers for generalization before re-training later layers for specific applications.

Easily adjust weights of early layers to reduce training time.

Place heavy focus on training layers that generalize the model.

**Answer: Save early layers for generalization before re-training later layers for specific applications.**

**Q3. By setting `layer.trainable = False` for certain layers in a neural network, we ____**

exclude the layers during training because they should be discarded

freeze the layers such that their weights change synchronously during training.

set the layers’ weights to zero

freeze the layers such that their weights don’t update during training.

**Answer: freeze the layers such that their weights don’t update during training.**
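What freezing means can be illustrated without Keras at all; in this toy sketch (the `ToyLayer` class and `train_step` function are made up for illustration), a training step simply skips layers whose `trainable` flag is off:

```python
# Toy illustration of freezing: a "layer" is just a weight plus a
# trainable flag, and a training step only updates trainable layers.
class ToyLayer:
    def __init__(self, weight, trainable=True):
        self.weight = weight
        self.trainable = trainable

def train_step(layers, lr=0.1, grad=1.0):
    for layer in layers:
        if layer.trainable:          # frozen layers keep their weights
            layer.weight -= lr * grad

early = ToyLayer(0.5, trainable=False)   # frozen, like layer.trainable = False
late = ToyLayer(0.5)                     # still trainable
train_step([early, late])
# early.weight is unchanged; only late.weight moved.
```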


**Q4. Which option correctly orders the steps of implementing transfer learning?**

1. Freeze the early layers of the pre-trained model.

2. Improve the model by fine-tuning.

3. Train the model with a new output layer in place.

4. Select a pre-trained model as the base of our training.

3, 2, 4, 1

4, 2, 3, 1

3, 1, 2, 4

4, 1, 3, 2

**Answer: 4, 1, 3, 2**
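The answer's order (select a base, freeze its early layers, train with a new output layer, then fine-tune) maps onto a typical Keras workflow; the following is an illustrative pseudocode sketch, with the model, dataset, and class count as placeholders:

```
# Step 4 first: select a pre-trained model as the base.
base = load_pretrained_model(weights="imagenet", include_top=False)

# Step 1: freeze the early (base) layers.
base.trainable = False

# Step 3: train the model with a new output layer in place.
model = stack(base, pooling_layer, dense_output(num_classes))
model.train(new_dataset)

# Step 2: improve the model by fine-tuning -- unfreeze some later
# base layers and continue training with a small learning rate.
unfreeze_top_layers(base)
model.train(new_dataset, learning_rate="small")
```

The small learning rate in the fine-tuning step is conventional: large updates would destroy the pre-trained features the frozen phase was meant to preserve.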


**Q5. Given a 100×100 pixel RGB image, there are _____ features.**

300

100

10000

30000

**Answer: 30000**
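The count follows from height × width × channels, since each pixel carries three values (R, G, B):

```python
# Each pixel contributes one value per color channel, so the
# feature count is height * width * channels.
height, width, channels = 100, 100, 3
n_features = height * width * channels   # 30000
```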

**Q6. Before a CNN is ready to classify images, what layer must we add last?**

Dense layer with the number of units corresponding to (number of classes*input size)

Dense layer with the number of units corresponding to the number of classes

Flattening layer with the number of units corresponding to the number of classes

Flattening layer with the number of units corresponding to (number of classes*input size)

**Answer: Dense layer with the number of units corresponding to the number of classes**
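The final Dense layer produces one unit (logit) per class, which a softmax turns into class probabilities; a NumPy sketch with a hypothetical 4-class problem:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: one probability per class,
    summing to 1."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

num_classes = 4                            # final Dense layer: 4 units
logits = np.array([2.0, 1.0, 0.5, -1.0])   # one logit per class
probs = softmax(logits)
# probs has one entry per class and sums to 1; argmax is the prediction.
```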


**Q7. In a CNN, the depth of a layer corresponds to the number of:**

color channels

input layers

filters applied

channel-filter combinations

**Answer: filters applied**
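A shape-arithmetic sketch shows why: the output depth equals the number of filters, independent of the input's channel count (the helper below assumes valid padding and square kernels):

```python
def conv_output_shape(h, w, c_in, kernel, n_filters, stride=1):
    """Spatial size shrinks with the kernel; the output depth is
    the number of filters, regardless of the input channels c_in."""
    out_h = (h - kernel) // stride + 1
    out_w = (w - kernel) // stride + 1
    return (out_h, out_w, n_filters)

# A 3-channel 32x32 image through 64 filters of size 3x3:
shape = conv_output_shape(32, 32, c_in=3, kernel=3, n_filters=64)
# -> (30, 30, 64): depth 64, one channel per filter.
```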



More Coursera Courses: http://progiez.com/coursera