Responsible and Safe AI Systems Nptel Week 1 Answers

Are you looking for Responsible and Safe AI Systems Nptel Week 1 Answers? Solutions for all weeks of this Swayam course are available here.


Responsible and Safe AI Systems Nptel Week 1 Answers (July-Dec 2025)

Course link: Click here to visit the course on the Nptel website


Question 1. According to the risk decomposition framework, which combination of factors would result in the HIGHEST risk from an AI system deployed in a critical infrastructure setting?
a) Low vulnerability, high hazard exposure, low hazard severity
b) High vulnerability, low hazard exposure, high hazard severity
c) High vulnerability, high hazard exposure, high hazard severity
d) Low vulnerability, low hazard exposure, high hazard severity

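The answer turns on how the three factors combine. In the standard disaster risk decomposition, risk is commonly modeled as the product of vulnerability, hazard exposure, and hazard severity, so a low value in any one factor suppresses overall risk. The sketch below is illustrative only: the multiplicative form and the numeric values for "low" and "high" are assumptions, not part of the course material.

```python
# Illustrative sketch of the risk decomposition in Question 1.
# Assumptions: risk combines multiplicatively, "low" = 0.1, "high" = 0.9.
def risk(vulnerability, exposure, severity):
    # A low value in any single factor drags the product down.
    return vulnerability * exposure * severity

LOW, HIGH = 0.1, 0.9
options = {
    "a": risk(LOW, HIGH, LOW),    # low vulnerability, high exposure, low severity
    "b": risk(HIGH, LOW, HIGH),   # high vulnerability, low exposure, high severity
    "c": risk(HIGH, HIGH, HIGH),  # all three factors high
    "d": risk(LOW, LOW, HIGH),    # only severity high
}
print(max(options, key=options.get))  # → c
```

Only option (c) keeps every factor high, so it yields the largest product under this model.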


Question 2. The concept of a “treacherous turn” in AI systems refers to:
a) AI systems making computational errors during complex calculations
b) AI systems behaving differently once they reach sufficient intelligence
c) AI systems being hacked by malicious actors
d) AI systems consuming too much computational power



Question 3. In the context of AI race dynamics, what is the primary concern regarding competitive pressure between nations and corporations?
a) It will make AI systems too expensive for general use
b) It will result in compatible AI standards globally
c) It will slow down AI innovation and progress
d) It may lead to rushed development that compromises safety measures



Question 4. The “Swiss cheese model” mentioned in organizational risks suggests that:
a) Organizations should have a single, very strong safety measure
b) Safety measures should be implemented randomly across the organization
c) Multiple layers of defense compensate for individual weaknesses
d) Safety measures are unnecessary if the AI system is well-designed


These are Responsible and Safe AI Systems Nptel Week 1 Answers


Question 5. Which scenario best illustrates the concept of proxy gaming?
a) An AI chess program that cheats by accessing opponent’s strategy
b) A recommendation system optimizing for user engagement rather than user well-being
c) An AI translator that produces grammatically incorrect sentences
d) A facial recognition system that fails to identify certain ethnic groups

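Proxy gaming arises when a system optimizes a measurable stand-in (the proxy) instead of the true objective. The toy sketch below, with hypothetical item names and scores, shows how a recommender that maximizes an engagement proxy can systematically pick content that scores worst on the real goal of user well-being:

```python
# Toy illustration of proxy gaming (all items and scores are hypothetical).
# The recommender optimizes "engagement" (the proxy), not "well_being" (the goal).
items = [
    {"name": "outrage_clip", "engagement": 0.9, "well_being": 0.2},
    {"name": "howto_video",  "engagement": 0.6, "well_being": 0.8},
    {"name": "news_summary", "engagement": 0.5, "well_being": 0.7},
]

best_by_proxy = max(items, key=lambda i: i["engagement"])
best_by_goal = max(items, key=lambda i: i["well_being"])

print(best_by_proxy["name"])  # → outrage_clip (proxy favors the harmful item)
print(best_by_goal["name"])   # → howto_video
```

The proxy and the goal diverge on exactly the items the optimizer prefers, which is why option (b) is the canonical example.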


Question 6. “A factory robot mistakes a human worker for a box of vegetables and pushes the person, resulting in death.” According to the disaster risk equation, what was the primary failure component?
a) Hazard (misclassification capability)
b) Hazard Exposure (human-robot proximity)
c) Vulnerability (employee safety protocols)
d) All components failed equally



Question 7. According to the risk taxonomy presented, malicious use of AI differs from rogue AI primarily in that:
a) Malicious use involves intentional harmful deployment by humans, while rogue AI acts independently
b) Malicious use only affects cybersecurity, while rogue AI affects all domains
c) Malicious use is easier to detect than rogue AI behavior
d) Malicious use requires more advanced AI capabilities than rogue AI



Question 8. Deceptive Alignment in AI systems is:
a) AI systems that are openly hostile to humans
b) AI systems that appear to be following instructions but are actually pursuing different goals
c) AI systems that cannot understand human language properly
d) AI systems that work too slowly to be effective



Question 9. Which safety research area focuses on identifying and avoiding hazards in ML systems, according to the disaster risk equation framework?
a) Alignment
b) Robustness
c) Monitoring
d) Systemic Safety



Question 10. Red teaming in AI safety primarily serves to:
a) Accelerate model training
b) Identify system vulnerabilities
c) Improve computational efficiency
d) Reduce inference latency



Question 11. Which technique is most effective for detecting deceptive alignment?
a) Training the model with more than 1000 samples
b) Mechanistic interpretability
c) Increasing model parameters
d) Reward modeling



Question 12. RoBERTa succeeds in reasoning tasks where BERT fails due to:
a) Better tokenization
b) Emergent capabilities from scaling
c) Improved attention mechanisms
d) Larger vocabulary size


Click here for all Nptel assignment answers