NO

Network Optimization

Prerequisites:


Contents and Program:

  • Introduction
    • Nothing to report

Index - All Notes:


##### All Notes

1. ***[[NO - Chapter 1 (Modified). A quick introduction to Mathematical Programming I|A quick introduction to Mathematical Programming I]]*** ([[NO - Summary of Chapter 1|Summary]]) - [[NO - Mathematical Program]] - [[NO - Feasible and Optimal Solution]] - [[NO - Local and Global Optima]] - [[NO - Relaxation & Restriction of an MP]] - [[NO - Convex Program]]
2. ***[[NO - Chapter 2 (Modified). A quick introduction to Mathematical Programming II|A quick introduction to Mathematical Programming II]]*** ([[NO - Summary of Chapter 2|Summary]]) - [[NO - Linear Program]] - [[NO - Standard Form of a Linear Program]] - [[NO - Feasible Solutions for LP (Linear Programs)]] - [[NO - Optimal Solution for LP (Linear Programs)]] - [[NO - Geometrical Description of LP (Linear Programs)]] - [[NO - Classification of LP (Linear Programs)]] - [[NO - Extreme Points and Vertices]] - [[NO - Basis and Basic Variables]] - [[NO - BFS (Basic Feasible Solutions)]] - [[NO - Degenerate BFS (Basic Feasible Solutions)]] - [[NO - Find the Optimal Solution in a LP (Linear Program)]]
3. ***[[NO - Chapter 3 (Modified). How to solve a Linear Program|How to solve a Linear Program]]*** ([[NO - Summary of Chapter 3|Summary]]) - [[NO - Reduced Costs and Descent Direction]] - [[NO - Pivot Operation]] - [[NO - Simplex Method and Auxiliary Problem]] - [[NO - Approaching a LP (Linear Program)]]
4. ***[[NO - Chapter 4. A quick introduction to Complexity|A quick introduction to Complexity]]*** $\mathcal{P}$ problems (**polynomial time**): solvable in $O(1)$, $O(n)$, $O(n^2)$, $\ldots$ $\mathcal{NP}$ problems (**non-deterministic polynomial time**): no polynomial-time algorithm is known for the hard ones, the best known run in $O(2^n)$, $O(n!)$, $O(n^n)$, $\ldots$ Pure **LP** are $\mathcal{P}$ problems, so they can be solved in **polynomial time** (with the *ellipsoid method*), though in practice the [[NO - Simplex Method and Auxiliary Problem|simplex method]] (worst-case exponential) is more efficient. - [[NO - P (Polynomial) and NP (Non-deterministic Polynomial) Problems]]
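To get a feel for why exponential-time problems are intractable, here is a tiny sketch (illustrative numbers only, not from the course) comparing how polynomial and exponential operation counts grow with the input size $n$:

```python
# Compare polynomial vs. exponential growth of operation counts.
from math import factorial

for n in (10, 20, 30):
    poly = n ** 2          # O(n^2): stays small
    expo = 2 ** n          # O(2^n): explodes quickly
    fact = factorial(n)    # O(n!): explodes even faster
    print(f"n={n}: n^2={poly}  2^n={expo}  n!={fact}")
```

Already at $n = 30$ the exponential counts dwarf the polynomial one by many orders of magnitude.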
5. ***[[NO - Chapter 5. A quick introduction to Integer Programming|A quick introduction to Integer Programming]]*** **IP** are a [[NO - Relaxation & Restriction of an MP|restriction]] of **LP** with the additional constraint that all variables are **integers** ($x \in \mathbb{N}$), so we cannot directly use the previously seen methods (the vertices of the feasible region are no longer guaranteed to be integer). An IP can be **Pure** ($x \in \mathbb{N}$), **BIP** (Binary IP, $x \in \{0 ,\ 1 \}$), or **MIP** (Mixed IP, some $x \in \mathbb{N}$, other $x \in \mathbb{R}$). A few IP have all integer vertices (especially graph-related problems); to easily see if the **feasible region** of an IP is an **integer polyhedron** (all vertices of the polyhedron are integer), we can check if the $A$ matrix is a **TUM** (Totally Unimodular Matrix). - [[NO - IP (Integer Programs)]] - [[NO - TUM (Total Unimodular Matrix)]]
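As a minimal sketch of the TUM definition (every square submatrix has determinant $-1$, $0$ or $+1$), here is a brute-force checker; it is exponential in the matrix size, so it is only meant for tiny hand-made examples like the one below (a node-arc incidence matrix of a directed graph, which is always TUM):

```python
# Brute-force check of total unimodularity: every square submatrix
# must have determinant -1, 0 or +1.  Only for tiny matrices.
from itertools import combinations

def det(M):
    """Determinant of a small integer matrix by Laplace expansion."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_tum(A):
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of a directed graph on 3 nodes and 3 arcs: TUM.
A = [[ 1,  1,  0],
     [-1,  0,  1],
     [ 0, -1, -1]]
print(is_tum(A))                 # True
print(is_tum([[1, 1], [1, -1]])) # False: det = -2
```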
6. ***[[NO - Chapter 6. How to solve a MIP (Mixed Integer Program)|How to solve a MIP (Mixed Integer Program)]]*** Here is a collection of **building blocks** or ideas used to solve a **MIP**; remember that these ideas do not lead to a stand-alone algorithm. Rather, they are mixed and adopted in **branch&cut**, **column generation** and more complex algorithmic schemes. The **Convex Hull** is the smallest **integer polyhedron** (all **integer vertices**) containing all the **feasible integer solutions**. We can **Cut** the **feasible region** by adding a single constraint at a time, reducing the related **LP** solution space but always leaving the entire convex hull intact. We can create smaller sub-problems (**children**) for a given **parent problem**; these problems are easier to solve, and once we solve all of them the best **optimal solution** among the **children** is the optimal solution of the **parent**. For integer problems, for example, we can fix one variable and create a sub-problem for each value that variable can assume. The inverse of branching is **Bounding**, where we solve a relaxed version of the problem, obtaining a bound on the optimal value (an **UB**, Upper Bound, for a maximization problem); for integer problems, for example, we can solve the **LP** relaxation where the "**integrality**" constraint is dropped. - [[NO - Convex Hull]] - [[NO - Cutting Plane]] - [[NO - Branching]] - [[NO - Bounding]]
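A minimal sketch of how branching and bounding combine, on a 0/1 knapsack (a standard toy problem, not from the course slides; the data below are made up). Branching fixes one variable at a time ($x_i = 1$ or $x_i = 0$); bounding uses the fractional (LP) relaxation as an upper bound to prune sub-problems that cannot beat the best solution found so far:

```python
# Branch & bound for the 0/1 knapsack (maximization), with the
# LP (fractional) relaxation as the bounding step.

def relaxation_bound(values, weights, cap, i, value):
    """Upper bound: fill remaining capacity fractionally (LP relaxation)."""
    bound = value
    for v, w in sorted(zip(values[i:], weights[i:]),
                       key=lambda p: p[0] / p[1], reverse=True):
        if w <= cap:
            cap -= w
            bound += v
        else:
            bound += v * cap / w   # fractional piece: allowed only in the relaxation
            break
    return bound

def branch_and_bound(values, weights, cap):
    best = 0
    def solve(i, cap, value):
        nonlocal best
        best = max(best, value)
        if i == len(values):
            return
        # Bounding: prune if even the relaxed optimum cannot beat `best`.
        if relaxation_bound(values, weights, cap, i, value) <= best:
            return
        # Branching: child with x_i = 1 (if feasible), child with x_i = 0.
        if weights[i] <= cap:
            solve(i + 1, cap - weights[i], value + values[i])
        solve(i + 1, cap, value)
    solve(0, cap, 0)
    return best

print(branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # 220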
7. ***[[NO - Chapter 7. A quick introduction to Graphs|A quick introduction to Graphs]]*** ##### [[NO - Graph Components Cheat-Sheet|Graph Components Cheat-Sheet]] #TODO - [[NO - Graph and its components]] (networks, arcs/edges, nodes, network information, multiarcs/loop, path, cycle) - [[NO - Connected Graph]] - [[NO - Tree Graph and Forest Graph]] - [[NO - Bipartite Graph]] - [[NO - Cutting a Graph and s-t Cut]] - [[NO - Incidence Matrix]] - [[NO - Adjacency Matrix]] - [[NO - Incidence List]]
8. ***[[NO - Chapter 8. Max Flow & Min Cut|Max Flow & Min Cut]]*** Max Flow and Min Cut are a dual pair of problems: the minimum cut capacity of the network is equal to its maximum flow. The Ford-Fulkerson Algorithm is one way to find the max flow (and so also the minimum cut); for a visual explanation watch this [video](https://www.youtube.com/watch?v=Tl90tNtKvxs). > **NOTE**: > By **Maximum Flow** we mean the maximum possible flow from the **source node** to the **sink node**. > By **Minimum Cut** we mean the minimum capacity across all possible **cuts** of the network. - [[NO - Max Flow & Min Cut Problems]] - [[NO - Ford-Fulkerson Algorithm]]
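A compact sketch of Ford-Fulkerson in its Edmonds-Karp variant (BFS for the shortest augmenting path); the small capacity graph at the bottom is a made-up example, not one from the slides:

```python
# Ford-Fulkerson (Edmonds-Karp variant): repeatedly find an augmenting
# path in the residual graph with BFS and push the bottleneck flow.
from collections import deque

def max_flow(cap, source, sink):
    # Residual capacities, with reverse arcs initialised to 0.
    res = {u: dict(adj) for u, adj in cap.items()}
    for u, adj in cap.items():
        for v in adj:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path source -> sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow            # no augmenting path left: flow is maximum
        # Bottleneck capacity along the path found.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        # Augment: decrease forward residuals, increase reverse ones.
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

cap = {'s': {'a': 3, 'b': 2},
       'a': {'b': 1, 't': 2},
       'b': {'t': 3}}
print(max_flow(cap, 's', 't'))  # 5
```

Here the max flow is 5, matching the minimum cut $\{s, a\}$ vs $\{b, t\}$ of capacity $2 + 1 + 2 = 5$.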
9. ***[[NO - Chapter 9 (Modified). Minimum Spanning Tree|Minimum Spanning Tree]]*** ([[NO - Summary of Chapter 9|Summary]])
  1. Matching (Summary)
  1. TSP (Travelling Salesman Problem) Eulerian cycle: a cycle that passes exactly once over every edge of the graph. Hamiltonian cycle: a cycle that passes exactly once over every node of the graph. Triangle inequality: a property a graph can have, stating that the direct arc between two nodes never costs more than going through a third node (there are no shortcuts). Complete graph: a graph where every node is connected to all other nodes; a weighted graph can always be considered complete if we put an infinite cost on the missing arcs.
  1. Classification of algorithms Exact Algorithms: search for the exact solution, for example the mathematical formulation and dynamic programming. Approximate Algorithms: use properties of the problem to guarantee an approximation ratio. Heuristic Algorithms: no guarantee on the quality of the result; usually the more time we give the algorithm, the better the result. Greedy Heuristic Algorithms: build the solution one decision at a time, without changing the previous decisions, dividing the problem into smaller subproblems that iteratively build a solution (it might not be the optimal one). Metaheuristic Algorithms: exploration vs. exploitation algorithms; some examples are local search based algorithms and population based algorithms (genetic algorithms).
  1. Construction heuristics for TSP
  1. A quick introduction to Local Search Given an initial feasible solution, we explore its neighbours in the feasible solution space, searching for a better solution. We explore the neighbourhood of a feasible solution by altering the problem variables and checking whether the new result is better than the previous one.
  1. Local Search for TSP For a TSP we can first find a feasible solution (for example with Christofides's algorithm), then search for 2 arcs (or more generally $k$ arcs) in the solution such that if they are exchanged (arcs $(i, j)$, $(k, l)$ become arcs $(i, k)$, $(j, l)$) the solution improves.
  1. Mathematical Formulation of the TSP (Summary)
  1. A quick introduction to Duality
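
A minimal sketch of the two TSP ideas above: a greedy construction heuristic (nearest neighbour, simpler than Christofides but with the same role of producing an initial tour) followed by 2-opt local search, which keeps exchanging a pair of arcs $(i, j)$, $(k, l)$ for $(i, k)$, $(j, l)$ while the tour improves. The distance matrix is a made-up symmetric example:

```python
# Construction heuristic (nearest neighbour) + 2-opt local search for TSP.

def tour_length(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbour(d, start=0):
    """Greedy construction: always visit the closest unvisited node."""
    tour, left = [start], set(range(len(d))) - {start}
    while left:
        nxt = min(left, key=lambda j: d[tour[-1]][j])
        tour.append(nxt)
        left.remove(nxt)
    return tour

def two_opt(tour, d):
    """Local search: reverse a segment while it strictly shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for k in range(i + 2, len(tour)):
                # Reversing tour[i+1 : k+1] swaps two arcs of the cycle.
                new = tour[:i + 1] + tour[i + 1:k + 1][::-1] + tour[k + 1:]
                if tour_length(new, d) < tour_length(tour, d):
                    tour, improved = new, True
    return tour

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
tour = two_opt(nearest_neighbour(d), d)
print(tour, tour_length(tour, d))
```

Being a local search, 2-opt only guarantees a local optimum: the result can depend on the starting tour.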

NOTE: The professor in the slides defined the result of a Relaxation as a LB (Lower Bound). This is true as long as the original problem is a minimization problem: the feasible solution space of a relaxation is bigger than that of the original problem, so there are more solutions to choose from, and in a minimization problem we can therefore find a solution whose value is LOWER than (or equal to) the optimal value of the original problem.
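A tiny worked example of this (a made-up one-variable problem): take the IP $z^* = \min \{ x : x \ge 2.5,\ x \in \mathbb{N} \} = 3$. Its relaxation drops the integrality constraint, giving $z_{LP} = \min \{ x : x \ge 2.5,\ x \in \mathbb{R} \} = 2.5$, and indeed $z_{LP} = 2.5 \le 3 = z^*$: the relaxation value is a valid LB. For a maximization problem the same argument gives an UB (Upper Bound) instead.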

  1. A quick introduction to Lagrangian Relaxation

  1. Lower Bounds for TSP
  1. A quick introduction to Branch&Bound
  1. Branch & Bounds for TSP
  1. A quick introduction to Cutting Plane
  1. Cutting plane for TSP
  1. A quick introduction to Branch&Cut
  1. Branch&Cut for TSP
All Original Slides

Python Scripts:

  • Train Scheduling Problem

Transclude of NO_Project_Exercise_3_v4_GUROBI.ipynb
Transclude of no_project_exercise_3_v4_gurobi.py

  • Train Scheduling Problem (Fake Data, no need to access Google to solve it)

Transclude of NO_Project_Exercise_3_v5_GUROBI.ipynb
Transclude of no_project_exercise_3_v5_gurobi.py

All My Notes

For the best experience in reading these and all other notes, and also if you wish to EDIT them, do as follows:

  1. Install Obsidian, or another markdown editor.
  2. Go to the GitHub link of this or any other note.
  3. Download the whole repo, or, if you know git, just the ‘content/’ folder.
  4. Extract just the ‘content/’ folder from the repo zip file.
  5. Open Obsidian >> Manage Vaults >> Open Folder as Vault >> and select the ‘content/’ folder you just extracted.

PLEASE NOTE:

  • These notes were not revised by the professors, so take all of them with a grain of salt.
  • However, since they are made in markdown, if you download them you can EDIT them; please do so.
  • If you edit and “upgrade” them, please pass the new version on to the other students and the professors.

Here are all the links to my notes: