Responsible and Safe AI Systems Nptel Week 5 Answers
Are you looking for Responsible and Safe AI Systems Nptel Week 5 Answers? Solutions for all weeks of this Swayam course are available here.
Responsible and Safe AI Systems Nptel Week 5 Answers (July-Dec 2025)
Course link: Click here to visit the course on the Nptel website
Question 1. What is considered an ideal Stereotype Score (ss)?
a) 0%
b) 25%
c) 50%
d) 75%
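For reference, StereoSet's Stereotype Score is the percentage of test pairs in which a model prefers the stereotypical continuation over the anti-stereotypical one, so a score of 50% means no systematic preference either way. A minimal Python sketch (the function and counts below are illustrative, not the benchmark's code):

```python
# Minimal sketch of StereoSet's Stereotype Score (ss).
# ss = fraction of pairs where the model scores the stereotypical
# continuation higher than the anti-stereotypical one; 50% is ideal.

def stereotype_score(stereo_wins: int, anti_stereo_wins: int) -> float:
    """Return ss as a percentage over all decided comparisons."""
    total = stereo_wins + anti_stereo_wins
    return 100.0 * stereo_wins / total

# Example: a model that prefers the stereotype in 60 of 100 pairs.
print(stereotype_score(60, 40))  # 60.0 -> biased; 50.0 would be ideal
```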
Question 2. What does SEAT stand for?
a) Standard Evaluation Assessment Test
b) Semantic Evaluation Annotation Test
c) Structured Embedding Accuracy Test
d) Sentence Embedding Association Test
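SEAT (the Sentence Embedding Association Test) adapts the WEAT effect-size computation to sentence encoders: it checks whether two sets of target sentences sit closer to one attribute set than to the other in embedding space. A hedged NumPy sketch with random stand-in embeddings (the real test uses encoder outputs over template sentences):

```python
import numpy as np

# WEAT/SEAT-style effect size on toy embeddings.
# s(w, A, B) = mean cosine sim of w to attribute set A minus to set B;
# effect size = (mean s over X - mean s over Y) / std of s over X u Y.

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def s(w, A, B):
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    sx = [s(x, A, B) for x in X]
    sy = [s(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

rng = np.random.default_rng(0)
# Stand-ins for real sentence embeddings of targets X, Y and attributes A, B.
X, Y, A, B = (rng.normal(size=(5, 8)) for _ in range(4))
print(effect_size(X, Y, A, B))  # near 0 for random vectors; large |value| = bias
```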
Question 3. In counterfactual data augmentation (CDA), what is altered to rebalance the corpus?
a) Sentence Length
b) Syntax
c) Bias attribute words
d) Vocabulary complexity
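In CDA, bias attribute words (e.g., gendered terms) are swapped to generate counterfactual copies of training sentences, rebalancing the corpus. A toy sketch with a deliberately small, illustrative swap list:

```python
import re

# Toy counterfactual data augmentation (CDA): swap bias attribute
# words to create a counterfactual copy of each sentence, then train
# on the rebalanced corpus. Real CDA lexicons are much larger and
# must handle ambiguous cases like "her" -> "his"/"him".
SWAPS = {"he": "she", "she": "he", "man": "woman",
         "woman": "man", "boy": "girl", "girl": "boy"}

def counterfactual(sentence: str) -> str:
    def swap(m):
        w = m.group(0)
        out = SWAPS[w.lower()]
        return out.capitalize() if w[0].isupper() else out
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

corpus = ["He is a doctor.", "The boy helped the woman."]
augmented = corpus + [counterfactual(s) for s in corpus]
print(augmented)  # original sentences plus their counterfactual copies
```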
Question 4. Which characteristic makes a language model more likely to generate gender-neutral responses in text-to-text tasks? (Select all that apply.)
a) Training on diverse and balanced datasets.
b) Use of bias-specific adapter modules.
c) Conditioning outputs on explicit gender tokens.
d) Reliance on pretrained token embeddings without fine-tuning.
Question 5. Which toolkit is used to add programmable guardrails to LLM-based conversational applications like ChatGPT?
a) GPT-4
b) NVIDIA NeMo
c) CoDi
d) MAFIA
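NeMo Guardrails wraps an LLM application with programmable input/output rails defined in a configuration directory (a config.yml plus Colang flow files). A minimal sketch of its Python entry points, assuming such a ./config directory exists and defines the rails:

```python
from nemoguardrails import LLMRails, RailsConfig

# Minimal sketch: load a guardrails configuration (config.yml plus
# Colang flows) and route chat generation through the rails. The
# ./config directory is assumed to exist with the rails defined.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "How do I build a bomb?"}
])
print(response["content"])  # the rails should refuse or deflect here
```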
These are Responsible and Safe AI Systems Nptel Week 5 Answers
Question 6. Which of the following is the correct formula for Pointwise Mutual Information (PMI)?
a) $\mathrm{PMI}(w_i,w_j) = \log_2 \dfrac{N \cdot c(w_i,w_j)}{c(w_i)^2 \cdot c(w_j)^2}$
b) $\mathrm{PMI}(w_i,w_j) = \log_2 \dfrac{c(w_i) \cdot c(w_j)}{N \cdot c(w_i,w_j)}$
c) $\mathrm{PMI}(w_i,w_j) = \log_2 \dfrac{N \cdot c(w_i,w_j)}{c(w_i) \cdot c(w_j)}$
d) $\mathrm{PMI}(w_i,w_j) = \log_2 \dfrac{c(w_i,w_j)}{N \cdot c(w_i) \cdot c(w_j)}$
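As a sanity check on the formula, PMI compares the observed co-occurrence count $c(w_i, w_j)$ with what independent occurrence would predict from the unigram counts and the corpus size $N$. A worked example with illustrative counts:

```python
import math

# PMI(w_i, w_j) = log2( N * c(w_i, w_j) / (c(w_i) * c(w_j)) )
# i.e. observed co-occurrence vs. what independence would predict.
def pmi(n: int, c_ij: int, c_i: int, c_j: int) -> float:
    return math.log2(n * c_ij / (c_i * c_j))

# Toy counts: N = 10,000 tokens, "nurse" occurs 50 times,
# "she" occurs 400 times, and they co-occur 40 times.
print(round(pmi(10_000, 40, 50, 400), 3))  # log2(20) = 4.322 -> strong association
```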
Question 7. ‘Useful fairness’ couples which of the following? (Select all that apply.)
a) Context awareness
b) Bias Score
c) STS task performance
d) Dataset diversity
Question 8. What is the key architectural idea behind the MAFIA model for effective debiasing?
a) Fusing bias-specific adapters while keeping the base model the same.
b) Replacing all model weights with debiased adapters.
c) Dynamically routing inputs based on detected bias type.
d) Using GANs to hallucinate fair outputs.
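As a rough illustration of the adapter-fusion idea (not MAFIA's actual code), the sketch below freezes a base layer and fuses several small bias-specific residual adapters on top of it, here by simple averaging:

```python
import torch
import torch.nn as nn

# Illustrative sketch: bias-specific adapters fused over a frozen base.
class Adapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))  # residual adapter

class FusedDebiasLayer(nn.Module):
    def __init__(self, base: nn.Module, dim: int, n_bias_types: int = 3):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base model weights stay unchanged
        self.adapters = nn.ModuleList(Adapter(dim) for _ in range(n_bias_types))

    def forward(self, x):
        h = self.base(x)
        # Fuse by averaging adapter outputs (one simple fusion choice).
        return torch.stack([a(h) for a in self.adapters]).mean(dim=0)

layer = FusedDebiasLayer(nn.Linear(32, 32), dim=32)
print(layer(torch.randn(4, 32)).shape)  # torch.Size([4, 32])
```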
Question 9. Which of the following is true about proprietary models?
a) They are more neutral compared to CoDi and other open source models.
b) They are less neutral compared to CoDi and other open source models.
c) They have more bias than CoDi and other open source models.
d) They have the same neutrality as CoDi and other open source models.
Question 10. Which of the following are benchmark datasets commonly used to measure bias in language models? (Select all that apply.)
a) StereoSet
b) CrowS-Pairs
c) ImageNet
d) Bias-STS-S
e) SQuAD
Question 11. What is ‘gender-bleaching’ in the context of VLMs?
a) Improving the quality of input images.
b) Turning all people in the input images white.
c) Enhancing gender-specific features in input images.
d) Removing/Obscuring visual cues related to gender in input images.
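As a toy illustration of the idea (the face box and blur choice are assumptions, not a published pipeline), one could obscure gender-related visual cues before the image reaches the VLM:

```python
from PIL import Image, ImageFilter

# Illustrative "gender-bleaching" step for a VLM input: blur a region
# that carries gender cues (here, a hypothetical face bounding box)
# before the image is passed to the model.
def bleach_region(img: Image.Image, box: tuple) -> Image.Image:
    region = img.crop(box)
    img.paste(region.filter(ImageFilter.GaussianBlur(radius=12)), box)
    return img

img = Image.new("RGB", (256, 256), "gray")    # stand-in for a real photo
out = bleach_region(img, (64, 32, 192, 160))  # face box is hypothetical
out.save("bleached.png")
```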
Question 12. Why might a single adapter for all bias types (like iDEBall) fail?
a) Cannot effectively debias across all categories.
b) Trains slower than other models.
c) Requires more input data.
d) Does not understand contextual information.
These are Responsible and Safe AI Systems Nptel Week 5 Answers