# Local Search

Bart Massey

## Completeness Is Overrated

• Guaranteed to give a solution when one exists, good

• On large instances, will run too long to wait, bad

• Closed list will use too much memory to be feasible, also bad

## Optimization Problems

• Instead of finding a solution, try to find a "best" solution based on some kind of score

• Can be used to find a solution if a known "perfect" score is reached

• DFS not great for optimization: consecutive solutions explored vary only in last few variables

• Wanted: "anytime" algorithm that gives better results the longer it runs

• Willing to sacrifice completeness to get better actual performance and simplicity

## Local Search

• Idea: Look only at total solutions

• Try to adjust a solution to improve its score

• Keep adjusting until a best score is found

• Kelly-Bootle's "Algorithm For Maximizing Human Happiness" again

```
while human happiness is not maximal
    increase human happiness
```

## State Space For Local Search

• Now the states are total: each is a complete candidate solution

• The neighbors are states reachable by some specified small "local" change

• No special heuristic is required: can just follow scores
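The scheme above can be sketched as a generic greedy local search; `neighbors` and `score` here are placeholders for the problem-specific local-change and scoring functions (a minimal sketch, not a production implementation):

```python
def local_search(state, neighbors, score, max_steps=10000):
    """Greedy local search: repeatedly move to the best-scoring
    neighbor until no neighbor improves on the current state."""
    for _ in range(max_steps):
        nbrs = neighbors(state)
        if not nbrs:
            return state
        best = max(nbrs, key=score)
        if score(best) <= score(state):
            return state  # local optimum: no improving move
        state = best
    return state

# Toy example: maximize -(x - 7)^2 over the integers by +/-1 moves.
peak = local_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
```

Note that following raw scores this way stops at the first local optimum; the rest of these slides are about dealing with that.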

## Example: Euclidean TSP 2-OPT

• http://en.wikipedia.org/wiki/2-opt

• Find some tour of the cities: it may be arbitrarily bad

• Repeatedly pick two crossing edges and swap their endpoints

• When all crossings have been removed, the tour is guaranteed to have improved
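The uncrossing step above can be sketched as follows; under the Euclidean metric, deleting two crossing edges and reversing the tour segment between them reconnects the endpoints and strictly shortens the tour (a minimal sketch, function names illustrative):

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over a list of 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeat improving 2-opt moves until none remain.  Reversing the
    segment tour[i+1..j] replaces edges (i, i+1) and (j, j+1) with
    (i, j) and (i+1, j+1); in the Euclidean case this uncrosses any
    pair of crossing edges and shortens the tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, pts) < tour_length(tour, pts) - 1e-9:
                    tour, improved = new, True
    return tour
```

For example, on the four corners of a unit square, the tour `[0, 1, 2, 3]` over points `[(0, 0), (0, 1), (1, 0), (1, 1)]` crosses itself in the middle; one 2-opt move repairs it to the length-4 perimeter tour.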

## Local Optima

• 2-OPT will not necessarily find a best tour

• It gets "stuck" when it runs out of improving swaps

• This is a specific instance of a general problem of local search

## Restarts

• When reaching a local optimum, can just start over

• When out of time, take the best state seen so far

• Plateaus can be a problem: how often to restart?
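A restart wrapper might look like the sketch below, shown on a hypothetical one-dimensional landscape with several local maxima of different heights (the landscape and the number of tries are illustrative; the inner search is plain greedy hill climbing):

```python
import random

# Toy landscape: local maxima at indices 2, 7, and 12, heights 2, 3, 5.
LANDSCAPE = [0, 1, 2, 1, 0, 1, 2, 3, 2, 1, 0, 1, 5, 1, 0]

def score(i):
    return LANDSCAPE[i]

def climb(i):
    """Greedy hill climbing: move to the better neighbor until stuck."""
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(LANDSCAPE)]
        best = max(nbrs, key=score)
        if score(best) <= score(i):
            return i  # local optimum
        i = best

def restart_climb(tries=50, rng=random.Random(0)):
    """Climb from many random starts; keep the best optimum seen."""
    starts = [rng.randrange(len(LANDSCAPE)) for _ in range(tries)]
    return max((climb(s) for s in starts), key=score)
```

A single climb usually stops on one of the lower peaks; enough random restarts will land one start in the basin of the highest peak.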

## Example: Sudoku

• Start by filling in the blank squares with the numbers 1..9 such that there are 9 copies of each digit in the resulting puzzle

• Solution score is number of constraint violations: smaller is better

• Swap two (free) digits that would produce fewer or equal constraint violations

• Probably take the best swap at each step

• Continue until restart or solution
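The scoring and best-swap step might be sketched as below, with the all-different constraints given as lists of cell indices. To stay short this uses a tiny 2x2 Latin-square-style grid rather than full Sudoku, and it takes only strictly improving swaps; the slide above also allows equal-violation swaps (all names are illustrative):

```python
from itertools import combinations

def violations(grid, constraints):
    """Count violated pairwise-distinct constraints.  Each constraint
    is a list of cell indices that must hold distinct digits."""
    return sum(1 for cells in constraints
               for a, b in combinations(cells, 2)
               if grid[a] == grid[b])

def best_swap(grid, free, constraints):
    """Try every swap of two free cells; return the grid with the
    fewest violations (keeping the current grid on a tie)."""
    best, best_v = grid, violations(grid, constraints)
    for a, b in combinations(free, 2):
        g = list(grid)
        g[a], g[b] = g[b], g[a]
        v = violations(g, constraints)
        if v < best_v:
            best, best_v = g, v
    return best

# 2x2 grid, cells 0..3; each row and column must be distinct.
CONSTRAINTS = [[0, 1], [2, 3], [0, 2], [1, 3]]
```

Starting from the grid `[1, 1, 2, 2]` (two violated rows), a single best swap already reaches zero violations here; real Sudoku needs many such steps plus restarts.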

## "Noise" Moves

• Another way to avoid getting stuck in a local minimum is to try to climb out of it, by allowing occasional disimproving "noise" moves

• Example: with probability 0.5 swap two random Sudoku digits

• Example: when no improving moves are available, with probability 0.5 make a random "least-disimproving" swap

• Always combined with restarts
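One hypothetical shape for this, sketched as hill climbing over a toy landscape where noise moves let the walk cross the valleys between peaks (the landscape, probability, and step budget are all illustrative):

```python
import random

def score(x):
    # Toy landscape on 0..29: peaks of rising height at 9, 19, 29.
    return (x % 10) * (x // 10 + 1)

def neighbors(x):
    return [j for j in (x - 1, x + 1) if 0 <= j < 30]

def noisy_climb(start, steps=3000, p_noise=0.5, rng=random.Random(1)):
    """Hill climbing with noise: with probability p_noise take a
    random neighbor, even a disimproving one; otherwise take the
    best neighbor.  Track the best state seen (anytime behavior)."""
    state = best = start
    for _ in range(steps):
        nbrs = neighbors(state)
        if rng.random() < p_noise:
            state = rng.choice(nbrs)
        else:
            state = max(nbrs, key=score)
        if score(state) > score(best):
            best = state
    return best
```

With `p_noise=0` this is plain greedy climbing and stops at the first peak; with noise the walk can escape it. The "least-disimproving" variant on the slide replaces the uniform random choice with the least-bad neighbor, applied only when stuck.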

## "Simulated Annealing"

• Decrease noise probability as a function of number of moves taken according to some "annealing schedule"

• Inspired by the cooling ("annealing") of metals or something, except it turns out metals don't actually work that way

• Counterintuitive: Take noise moves before getting stuck

• Doesn't work well in practice in my experience
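A conventional sketch, with a geometric annealing schedule; the constants are illustrative, and as the slide says, this may not beat plain noise moves plus restarts in practice:

```python
import math
import random

def anneal(state, neighbors, score, t0=5.0, cooling=0.999,
           steps=5000, rng=random.Random(0)):
    """Simulated annealing: always accept improving moves, and accept
    a disimproving move with probability exp(delta / t).  The
    temperature t decays geometrically, so noise moves become rarer
    as the search proceeds.  Track the best state seen."""
    best, t = state, t0
    for _ in range(steps):
        cand = rng.choice(neighbors(state))
        delta = score(cand) - score(state)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            state = cand
        if score(state) > score(best):
            best = state
        t *= cooling
    return best
```

Early on (high `t`) this takes noise moves before it is stuck, which is the counterintuitive part; late (low `t`) it is effectively greedy.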

## Local Search Quality

• Best Euclidean TSP solutions known for most problems are from 2-OPT + 3-OPT local search

• Some of the largest SAT instances ever solved were solved by local search, scoring states by number of violated constraints

• Lack of provable optimality can be annoying or a real problem depending on application