An Introduction to Artificial Intelligence | Week 4

Session: JAN-APR 2024

Course name: An Introduction to Artificial Intelligence

Q1. Which of the following is (are) drawback(s) of Hill Climbing?
A. Global Maxima
B. Local Maxima
C. Diagonal Ridges
D. Plateaus

Answer: B, C, D


Q2. Let x be the expected number of restarts (the first try is not counted as a restart) in the Hill Climbing with Random Restarts algorithm, if the probability of success is p = 0.23. Let y be the expected number of steps taken to return a solution, given that it takes 4 steps when it succeeds and 3 steps when it fails. What is 3x + y (rounded to the nearest integer)?

Answer: 24
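
Worked check: the number of attempts until the first success is geometric with parameter p, so the expected number of restarts is x = (1 − p)/p ≈ 3.348, and the expected number of steps is y = 3x + 4 ≈ 14.043 (3 steps for each expected failure, 4 for the final success). A minimal Python sketch of the arithmetic:

    p = 0.23
    x = (1 - p) / p          # expected restarts: 1/p attempts minus the first try
    y = 3 * x + 4            # 3 steps per expected failure, plus 4 for the success
    print(round(3 * x + y))  # -> 24 (exact value is about 24.09)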


Q3. Select the INCORRECT statements:
A. Local beam search (with k nodes in memory) is the same as k random-start searches in parallel.
B. Simulated annealing with temperature T = 0 behaves identically to greedy hill-climbing search.
C. Enforced Hill Climbing performs a depth-first search from a local minimum.
D. In Tabu Search, we never make a currently tabu’ed step.

Answer: A, B, C


Q4. Select the CORRECT statements:
A. Genetic Algorithms have the effect of “jumping” to completely new parts of the search space and making “non-local” moves.
B. As the size of the tabu list increases to infinity, tabu search reduces to a systematic search.
C. Greedy Hill Climbing with Random Restarts is asymptotically complete, whereas Random Walk is not.
D. If the initial temperature in Simulated Annealing is set too small, the search can get stuck at a local optimum.

Answer: A, B, D


Q5. We define First-Choice Hill Climbing (FCHC) as a stochastic hill-climbing algorithm that generates neighbours randomly until one is found that is better than the current state. When this happens, the algorithm moves to this new state and repeats.
Select the CORRECT statements:

A. FCHC is similar to Simulated Annealing for large values of T.
B. FCHC will always return the same solution as Greedy Hill Climbing as we always take a step in the direction of increasing slope.
C. FCHC will perform better than Greedy Hill Climbing when each state has a large number of neighbours.
D. FCHC combined with a Tabu List does not suffer from local maxima/minima.

Answer: C. FCHC will perform better than Greedy Hill Climbing when each state has a large number of neighbours.
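
Since the question defines FCHC procedurally, a minimal sketch may help; value and random_neighbor are problem-specific placeholder callables, not names from the course:

    def first_choice_hill_climb(state, value, random_neighbor, max_tries=1000):
        # First-Choice Hill Climbing: sample neighbours at random and move to
        # the first one strictly better than the current state. Stop when no
        # improving neighbour turns up within max_tries samples.
        while True:
            for _ in range(max_tries):
                candidate = random_neighbor(state)
                if value(candidate) > value(state):
                    state = candidate
                    break
            else:
                return state  # no improvement found: treat as a local maximum

This also shows why option C holds: FCHC can accept an improving neighbour without generating all of them, which pays off when each state has very many neighbours.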


Q6. Consider the Hill Climbing Search algorithm for the N-Queens problem with N = 4. The image (not reproduced here) represents the start state. We want to reach a configuration of the board with 4 queens such that no two queens attack each other. The objective function is the number of pairs of queens that attack each other, which we want to minimise. The successor function moves a single queen along its column by one square, either directly up or directly down.

Let the objective function of the start state be x, the number of neighbours of the start state be y, and the objective function of the start state's best (lowest-objective) neighbour be z. What is the value of 2x + y + 3z?

Answer: 21
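
The start-state image is not reproduced here, so x, y and z cannot be re-derived from this text alone, but the objective function itself is straightforward to compute. A minimal sketch, representing a board as rows[c] = row of the queen in column c:

    from itertools import combinations

    def attacking_pairs(rows):
        # Number of pairs of queens that attack each other: same row, or
        # same diagonal (columns are distinct by construction).
        return sum(
            r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
            for (c1, r1), (c2, r2) in combinations(enumerate(rows), 2)
        )

    print(attacking_pairs([0, 1, 2, 3]))  # all on one diagonal -> 6 pairs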


Q7. Consider the same setup as Question 6; we apply the hill-climbing algorithm to minimise the objective function. The hill-climbing algorithm stops when the objective function becomes 0, i.e., no two queens attack each other. To break ties between two neighbours with the same objective function, pick the neighbour obtained by moving the queen in the lower-numbered column (a < b < c < d); if a tie still exists, pick the neighbour obtained by moving the queen downward. The number of steps required by the hill-climbing algorithm is:

Answer: 2


Q8. Assume that we have the function y = (x − 3)^4, starting at x = 4. Which of the following values of the step size λ will allow gradient descent to converge to the global minimum?
A. 0.005
B. 0.25
C. 0.5
D. 0.75

Answer: A, B (λ = 0.005 and λ = 0.25)
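
To see why only the two smaller step sizes work, substitute e = x − 3: the update x ← x − λ · 4(x − 3)³ becomes e ← e(1 − 4λe²). Starting from e = 1, λ = 0.25 lands exactly on the minimum in one step, λ = 0.5 oscillates between x = 4 and x = 2 forever, and λ = 0.75 diverges; λ = 0.005 converges, just slowly. A simulation sketch:

    def gradient_descent(lmbda, x0=4.0, steps=10_000):
        # Minimise y = (x - 3)**4; the gradient is dy/dx = 4 * (x - 3)**3.
        x = x0
        for _ in range(steps):
            x -= lmbda * 4 * (x - 3) ** 3
            if abs(x) > 1e9:  # clearly diverging; bail out
                return float("inf")
        return x

    for lmbda in (0.005, 0.25, 0.5, 0.75):
        print(lmbda, gradient_descent(lmbda))  # only 0.005 and 0.25 approach x = 3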


Q9. Consider a state space having 3 states: s1, s2 and s3. The value of each state is V(s1) = 0, V(s2) = 4, V(s3) = 2. There can be transitions from s1 to s2, s2 to s1 and s3, and s3 to s2. Starting at s1, what is the probability that we end up back at s1 after 2 steps of simulated annealing? Assume that we follow a temperature schedule of [10, 5, 1]. Next state is chosen uniformly at random whenever there are multiple possibilities.
Round the answer to 3 digits after the decimal point (e.g., if the answer is 0.1346, return 0.135).

Answer: 0.225
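
Worked reasoning, assuming uphill moves are always accepted and a downhill move that loses ΔV in value is accepted with probability e^(−ΔV/T): at T = 10 the only move from s1 is the uphill move to s2, so after step 1 we are at s2 with probability 1. At T = 5, s1 is proposed with probability 1/2 and accepted with probability e^(−4/5), giving:

    import math
    print(round(0.5 * math.exp(-4 / 5), 3))  # -> 0.225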


Q10. Consider the 1-D state space shown by the image (not reproduced here). Starting from which of the following regions will the greedy local-search hill-climbing algorithm fail to reach the global maximum?
A
B
C
D
E
F

Answer: A, B, E, F


An Introduction to Artificial Intelligence | Week 4 (Additional Question Set)

Q1. We want to sort an array of n distinct integers using local search. The start state is a random permutation of the integers. All neighbors of a state are those permutations that can be achieved by swapping one pair of different numbers. For n=7, which of the following are INCORRECT options?
a. The number of reachable states from the start state is 7^7.
b. The number of reachable states from the start state depends on the randomly sampled initial state.
c. Every state has 42 neighbors.
d. For S = {i | a_i < a_{i+1}}, minimizing |S| leads to the desired goal state.

Answer: a, b, c, d
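
A quick check of the counting claims, i.e. why options a and c are incorrect: swaps generate every permutation of 7 distinct values, and each state has C(7,2) unordered swaps as neighbours:

    from math import comb, factorial

    print(factorial(7))  # 5040 reachable states (all permutations), not 7^7
    print(comb(7, 2))    # 21 neighbours per state, not 42

Option d fails for the opposite reason: a sorted (ascending) array maximises |S|, so minimizing it drives the search toward descending order.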


Q2. Which of the following are correct statements?
a. Local search through random sampling is not asymptotically complete because it takes a lot of steps.
b. Random walk with restarts is asymptotically complete.
c. Hill climbing is not asymptotically complete because it can get stuck in plateaus/local optima.
d. Hill climbing with sideways moves is asymptotically complete.

Answer: b, c


Q3. Which of the following statements is correct about the temperature parameter in simulated annealing?
a. If the initial temperature is set too small, the search can get stuck at a local optimum.
b. The tendency to remain at a local optimum increases with an increase in temperature.
c. It is increased over time to provide stability to the search process.
d. It is increased over time to accelerate exploration.

Answer: a. If the initial temperature is set too small, the search can get stuck at a local optimum.


Q4. Identify all differences between Simulated Annealing (SA) and Genetic Algorithms (GA).
a. GA maintains multiple candidate solutions while SA does not.
b. GA provides stronger guarantees about convergence to the global optimum than SA.
c. SA has no parameters to set, whereas GA requires you to set multiple parameters such as crossover rate.
d. GA will always converge to an optimal solution faster than SA on any given problem.

Answer: a. GA maintains multiple candidate solutions while SA does not.


Q5. Which of the following optimization objectives will convert a constraint satisfaction problem to an equivalent optimization problem?
a. Minimize the number of satisfied constraints
b. Maximize the number of satisfied constraints
c. Maximize R where R is 0 when all constraints are satisfied and 1 otherwise
d. Maximize R where R is 1 when all constraints are satisfied and 0 otherwise

Answer: b, d
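
As a sketch of why b and d work while a and c reward violations, with constraints modelled as boolean predicates over an assignment (illustrative names, not from the course):

    def num_satisfied(constraints, assignment):
        # Option b: maximise the number of satisfied constraints; the maximum
        # is attained exactly at assignments satisfying the original CSP.
        return sum(1 for c in constraints if c(assignment))

    def all_satisfied(constraints, assignment):
        # Option d: R = 1 iff every constraint holds, 0 otherwise.
        return int(all(c(assignment) for c in constraints))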


Q6. Which of the following statements are correct about Local Beam Search?
a. It consumes more memory than greedy hill-climbing search regardless of beam size
b. It can never find the optimal solution with a beam size of k if it does not find the optimal solution with a beam size > k for a single run
c. It may find the optimal solution with a beam size of k even if it does not find the optimal solution with a beam size > k for a single run
d. Local Beam Search with a beam size of k is equivalent to performing k random walk searches with random starts in parallel.

Answer: c. It may find the optimal solution with a beam size of k even if it does not find the optimal solution with a beam size > k for a single run


Q7. Consider a state space having 3 states: s1, s2 and s3. The value of each state is V(s1) = 0, V(s2) = 4, V(s3) = 2. There can be transitions from s1 to s2, s2 to s1 and s3, and s3 to s2. Starting at s1, what is the probability that we end up back at s1 after 2 steps of simulated annealing? Assume that we follow a temperature schedule of [10, 5, 1]. The next state is chosen uniformly at random whenever there are multiple possibilities.
Round the answer to 3 digits after the decimal point.

Answer: 0.225


Q8. In the previous question, what is the probability that simulated annealing terminates at the state with the highest value? Assume that the run terminates after 3 steps.
Round the answer to 3 digits after the decimal point.

Answer: 0.966
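
Both this answer and the previous one can be checked by exact enumeration over the chain, assuming uphill proposals are always accepted and a downhill move that loses ΔV is accepted with probability e^(−ΔV/T). A sketch:

    import math

    V = {"s1": 0, "s2": 4, "s3": 2}
    nbrs = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}

    def step(dist, T):
        # One simulated-annealing step applied to a distribution over states.
        new = {}
        for s, p in dist.items():
            for n in nbrs[s]:
                q = p / len(nbrs[s])  # neighbour proposed uniformly at random
                dV = V[n] - V[s]
                acc = 1.0 if dV >= 0 else math.exp(dV / T)
                new[n] = new.get(n, 0.0) + q * acc        # proposal accepted
                new[s] = new.get(s, 0.0) + q * (1 - acc)  # proposal rejected
        return new

    dist = {"s1": 1.0}  # start at s1
    for T in [10, 5, 1]:
        dist = step(dist, T)
    print(round(dist["s2"], 3))  # -> 0.966 (s2 has the highest value)
    # Running the loop over [10, 5] only gives dist["s1"] = 0.225 (Q7 above).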


Q9. Which of the following statements are correct about Tabu Search?
a. It never visits the same state more than once.
b. It never visits a state that is currently in the tabu list.
c. As the size of the tabu list increases to infinity, tabu search reduces to a systematic search.
d. It is guaranteed to escape any local optimum it is stuck in.

Answer: a, c


Q10. Assume that we have the function y = (x − 1)^4. Starting at x = 2, which of the following values of the step size λ will allow gradient descent to converge to the global minimum?
a. 0.05
b. 0.2
c. 0.5
d. 0.75

Answer: a, b
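
The gradient_descent sketch from Q8 of the first set applies here unchanged apart from the gradient, dy/dx = 4(x − 1)³, and the start x0 = 2: with e = x − 1 the update is again e ← e(1 − 4λe²), so λ = 0.05 and λ = 0.2 shrink e toward 0, λ = 0.5 oscillates between x = 2 and x = 0, and λ = 0.75 diverges.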


More NPTEL Solutions: https://progiez.com/nptel-assignment-answers/

