Simulated Annealing is an optimization algorithm that helps us find good solutions to problems with lots of variables. It’s inspired by annealing in metallurgy, where a metal is heated and then cooled slowly so that its atoms settle into a low-energy, more stable structure. In our case, we use the same idea to search through a space of possible solutions until we find one that gives us the best result we can.
Here’s how it works in simple terms: first, we start with an initial solution (which could be anything from a random guess to something more informed). Then, we take small steps away from our current position and evaluate each new candidate solution based on some objective function. If the new solution is better than the old one, we move to it. But if it’s worse, there’s still a chance that we might accept it anyway; this is where the “annealing” part comes in!
At first, our algorithm will be more likely to accept worse solutions, because we want to explore as much of the search space as possible. As time goes on and the temperature drops, we’ll accept fewer and fewer bad choices until eventually we settle on a good (and ideally optimal) answer.
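The heart of all this is the acceptance rule, usually called the Metropolis criterion. As a minimal sketch (the helper name accept is just for illustration, not part of any library), it looks like this in Python:

import math
import random

def accept(curr_eval, candidate_eval, temp):
    # Always accept an improvement; accept a worse move with
    # probability exp(-(candidate_eval - curr_eval) / temp).
    diff = candidate_eval - curr_eval
    return diff < 0 or random.random() < math.exp(-diff / temp)

At a high temperature that probability is close to 1 even for fairly bad moves; as temp shrinks, it collapses toward 0 and the search becomes greedy.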
Here’s an example code snippet that demonstrates how Simulated Annealing might look in Python:
# Import necessary libraries
import numpy as np

# Define objective function
def objective(x):
    # This is our "objective function": it takes a set of inputs and returns
    # a score for them. For simplicity, we'll use the two-dimensional sphere
    # function x^2 + y^2, whose global minimum is 0 at the origin.
    return x[0] ** 2 + x[1] ** 2
# Simulated annealing algorithm
def simulated_annealing(objective, bounds, n_iterations, step_size, temp):
    bounds = np.asarray(bounds)
    # Generate an initial point uniformly at random within the given bounds
    best = bounds[:, 0] + np.random.rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # Evaluate the initial point
    best_eval = objective(best)
    # The current working solution starts at the initial point
    curr, curr_eval = best, best_eval
    # Keep track of the best score at each improvement
    scores = []
    # Run the algorithm for the specified number of iterations
    for i in range(n_iterations):
        # Take a Gaussian step from the current solution
        candidate = curr + np.random.randn(len(bounds)) * np.asarray(step_size)
        # Evaluate the candidate point
        candidate_eval = objective(candidate)
        # Check if the candidate point is a new best solution
        if candidate_eval < best_eval:
            # Update the best solution
            best, best_eval = candidate, candidate_eval
            scores.append((i + 1, best_eval))
            # Report progress
            print('Iteration {}: f({}) = {:.5f}'.format(i + 1, best, best_eval))
        # Metropolis acceptance: always move to a better candidate, and accept
        # a worse one with probability exp(-(candidate_eval - curr_eval) / temp)
        diff = candidate_eval - curr_eval
        if diff < 0 or np.random.uniform() < np.exp(-diff / temp):
            # Update the current solution (and its score)
            curr, curr_eval = candidate, candidate_eval
        # Decrease temperature for the next iteration (geometric cooling)
        temp *= 0.95
    return best, best_eval, scores
# Example usage of simulated annealing
# Define bounds for the search space
bounds = [[-5, 5], [-5, 5]]
# Set number of iterations, step size, and initial temperature
n_iterations = 100
step_size = [0.1, 0.1]
temp = 100
# Run the simulated annealing algorithm
best, best_eval, scores = simulated_annealing(objective, bounds, n_iterations, step_size, temp)
print('Best: f({}) = {:.5f}'.format(best, best_eval))
# Example output (one line per new best solution found; the exact values
# vary from run to run because the start point and steps are random):
# Iteration 1: f([...]) = ...
# ...
# Best: f([...]) = ...
In this example, we’re using Simulated Annealing to find the minimum value of a function that takes two inputs (x and y). We start by generating an initial point within our search space (which is defined by some bounds), then evaluate it with our objective function. From there, we take small steps away from the current position and evaluate each new candidate solution using the same function. If the new solution is better than the old one, we move to it; if it’s worse, we might still accept it anyway, depending on how much worse it is and on whether we’re currently in a “hot” phase of the algorithm.
Over time, as the temperature drops, we’ll accept fewer and fewer bad choices until we settle on a good (and ideally optimal) answer. And that’s pretty much all there is to it! Simulated Annealing can be used for all sorts of optimization problems, from simple one-dimensional functions like x^2 to complex multi-variable systems with hundreds or thousands of inputs. It’s not always the best choice (other algorithms may work better in certain situations), but it’s definitely worth considering if you need a flexible and adaptable optimization tool for your Python projects!
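If you’d rather not maintain the loop yourself, SciPy ships a closely related method, dual annealing, which manages the cooling schedule for you. Here’s a minimal sketch reusing the objective and bounds from the example above (note that dual_annealing is a variant of classical simulated annealing, not the exact hand-rolled algorithm shown earlier):

from scipy.optimize import dual_annealing

# Bounds are given as one (min, max) pair per input dimension
result = dual_annealing(objective, bounds=[(-5, 5), (-5, 5)])
print('Best: f({}) = {:.5f}'.format(result.x, result.fun))

The library version is a good default when you just want an answer; the hand-rolled loop is still useful when you want full control over the step distribution and cooling schedule.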