A: set of candidates (whom the public will vote for)
r: A \to \N: surjective rank mapping of each candidate to their respective rank from 1 to k
there can be duplicate rankings (so 1:..., 2:..., 2:..., 4:...)
r(x) is the place of candidate x
preference relation: x is better than y if x's rank is lower than y's
formally: x \varrho y \iff r(x) < r(y)
transitive and asymmetric
transitive: if x \varrho y and y \varrho z, then x \varrho z
asymmetric: x \varrho y and y \varrho x cannot hold simultaneously
alternative: x \varrho^* y \iff r(x) \leq r(y)
x \varrho y \iff \neg(y \varrho^* x)
I = \{1,...,n\}: set of voters, where each voter i has a rank mapping r_i (i.e. their own preferences)
P_A: set of all possible relations on A to be found through a rank mapping
formally: P_A = \{\varrho \subset A \times A \; | \; \varrho \text{ is generated by a rank mapping } r\}
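A minimal executable sketch of these definitions (all function names are mine, not from the notes): build the strict preference relation \varrho from a rank mapping r and check that it is asymmetric and transitive, with duplicate ranks allowed.

```python
def relation_from_ranks(r):
    """Return the strict preference relation {(x, y) : r(x) < r(y)}."""
    return {(x, y) for x in r for y in r if r[x] < r[y]}

def is_asymmetric(rho):
    return all((y, x) not in rho for (x, y) in rho)

def is_transitive(rho):
    return all((x, z) in rho for (x, y) in rho for (y2, z) in rho if y == y2)

# duplicate ranks are allowed: b and c share rank 2 (so 1, 2, 2, 4)
r = {"a": 1, "b": 2, "c": 2, "d": 4}
rho = relation_from_ranks(r)
assert is_asymmetric(rho) and is_transitive(rho)
assert ("a", "b") in rho and ("b", "c") not in rho  # equal ranks: incomparable
```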
Collective Choice Functions
collective choice function: K : P_A \times ... \times P_A \to P_A (result is a ranking of the same candidates as the input)
should adhere to democratic basic rules
K must be total on P_A^n (individual freedom of decision making, any \varrho_i allowed)
result of K must always lie in P_A (result in same set as choices; holds in two-party systems)
Pareto condition (supremacy of the collective): if all voters prefer x over y, the collective result cannot prefer y over x
independence of irrelevant alternatives: ballots with identical rankings w.r.t. x and y must yield the same collective ranking w.r.t. x and y
exclusion of a dictator: no single voter can always prevail
majority decision (Condorcet method): x \varrho y \iff N(x,y) > N(y,x), with N(x,y) the number of voters with x \varrho_i y
can lead to the voting paradox (x \varrho y, y \varrho z, z \varrho x violates transitivity), so it violates rule 2
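The voting paradox can be reproduced in a few lines (the three-voter profile is the classic cycle example, chosen by me for illustration):

```python
def N(ballots, x, y):
    """Number of voters ranking x strictly before y."""
    return sum(1 for b in ballots if b.index(x) < b.index(y))

def majority_prefers(ballots, x, y):
    return N(ballots, x, y) > N(ballots, y, x)

ballots = [("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")]

# x beats y, y beats z, but z beats x: the collective relation is cyclic,
# hence not transitive and not an element of P_A (violates rule 2)
assert majority_prefers(ballots, "x", "y")
assert majority_prefers(ballots, "y", "z")
assert majority_prefers(ballots, "z", "x")
```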
rank addition: sum up rank mappings of individual preferences
formally: x \varrho y \iff \sum_{i=1}^n r_i(x) < \sum_{i=1}^n r_i(y)
violates rule 4
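Rank addition can be sketched as follows (the voter profile is invented for illustration): sum each candidate's ranks over all voters; the candidate with the smaller sum is collectively preferred.

```python
def rank_sum(rank_maps, x):
    """Sum of candidate x's ranks over all voters' rank mappings."""
    return sum(r[x] for r in rank_maps)

voters = [
    {"x": 1, "y": 2, "z": 3},
    {"x": 1, "y": 3, "z": 2},
    {"x": 3, "y": 1, "z": 2},
]

assert rank_sum(voters, "x") == 5   # 1 + 1 + 3
assert rank_sum(voters, "y") == 6   # 2 + 3 + 1
assert rank_sum(voters, "z") == 7   # 3 + 2 + 2
# collectively x rho y rho z, since 5 < 6 < 7
```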
(!) Arrow's impossibility theorem: in the case of more than two candidates and more than one voter, there cannot exist a collective choice function that satisfies all five democratic basic rules
Scheduling
A_1, ..., A_n: tasks / jobs
M_1, ..., M_m: machines
t_i^{(j)}: execution time of task A_i on machine M_j
Process Scheduling (DAG)
s_i: start time of task i
t_i: execution time of task i
c_i = s_i + t_i: completion time of task i
A_i \to A_j: precedence condition (task A_j can only be started after A_i is finished)
schedule: function that assigns a start time s_i to every task i
admissible schedule: all precedence conditions are fulfilled
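The admissibility check can be written directly from the definitions above (names are mine): for every precedence edge A_i \to A_j we need s_j \geq c_i = s_i + t_i.

```python
def is_admissible(s, t, edges):
    """s, t: start / execution time per task; edges: precedence pairs (i, j)."""
    return all(s[j] >= s[i] + t[i] for (i, j) in edges)

t = {"A1": 2, "A2": 3, "A3": 1}
edges = [("A1", "A2"), ("A2", "A3")]   # A1 -> A2 -> A3

assert is_admissible({"A1": 0, "A2": 2, "A3": 5}, t, edges)
assert not is_admissible({"A1": 0, "A2": 1, "A3": 5}, t, edges)  # A2 starts before A1 finishes
```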
Job-Shop Problems
job-shop model: a job is broken down into subjobs, each with its own execution time and each requiring a machine, where each machine can only process one subjob at a time and there is no recirculation (i.e. every job requires every machine at most once)
open shop model: arbitrary permutability of a job's subjobs between the respective machines
flow shop model: job-shop, but all subjobs pass through the machines in the same order
Types of Algorithms
Online Algorithm: processes input in a serialized way, i.e. the entire input does not need to be available from the start (e.g. insertion sort)
Offline Algorithm: must have the whole input data available from the beginning (e.g. selection sort)
Greedy Algorithm: always makes the locally optimal choice at each stage (e.g. Dijkstra’s algorithm)
Approximation Algorithm: computes a solution in polynomial time to (typically NP-hard) optimization problems and guarantees that the solution is within a certain factor of the optimal solution
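A sketch combining the greedy and approximation ideas with the scheduling section above (the example and names are mine, not from the notes): Graham's list scheduling assigns each job to the currently least-loaded of m identical machines, which is a (2 - 1/m)-approximation for the makespan.

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling: returns the makespan on m identical machines."""
    loads = [0] * m          # min-heap of machine loads
    heapq.heapify(loads)
    for t in jobs:           # greedy: one locally optimal choice per job
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + t)
    return max(loads)

# greedy yields 7 here, while the optimum is 6 (3+3 vs. 2+2+2):
# locally optimal choices need not be globally optimal
assert list_schedule([3, 3, 2, 2, 2], 2) == 7
```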
Population Dynamics
time \to ordinary differential equations
time + space \to partial differential equations
Model of Malthus (1 Species)
\gamma: constant birth rate per time unit and per individual
\delta: constant death rate per time unit and per individual
resulting ODE: \dot p(t) = (\gamma - \delta) \cdot p(t), with exponential solution p(t) = p_0 \cdot e^{(\gamma - \delta)t}
linear system of ODEs (two coupled quantities): \begin{pmatrix}\dot x(t) \\ \dot y(t)\end{pmatrix} = \begin{pmatrix}-m & a \\ b & -n\end{pmatrix} \begin{pmatrix}x(t) \\ y(t)\end{pmatrix} + \begin{pmatrix} c \\ d \end{pmatrix}
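A minimal explicit Euler sketch for the linear system above (all parameter values are invented for illustration; a production solver would use the methods from the NumProg script):

```python
def euler_step(x, y, h, m, a, b, n, c, d):
    """One explicit Euler step for the linear two-variable system."""
    dx = -m * x + a * y + c
    dy = b * x - n * y + d
    return x + h * dx, y + h * dy

def simulate(x0, y0, h, steps, **params):
    x, y = x0, y0
    for _ in range(steps):
        x, y = euler_step(x, y, h, **params)
    return x, y

# with a = b = c = d = 0 the system decouples and both quantities decay
x, y = simulate(100.0, 100.0, h=0.01, steps=1000,
                m=1.0, a=0.0, b=0.0, n=1.0, c=0.0, d=0.0)
assert x < 100.0 and y < 100.0
```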
Equilibria
attractiveness of equilibrium: negative real parts of the eigenvalues of the Jacobian of \begin{pmatrix}f(p,q) \cdot p \\ g(p,q) \cdot q\end{pmatrix}
competition: both species have the same characteristics and compete for the same habitat (e.g. lions vs leopards)
equilibrium ensured if \frac{a_2}{a_5} > \frac{a_1}{a_4} > \frac{a_3}{a_6}
predator-prey: one species is the natural predator of another (e.g. foxes vs. rabbits)
f_p(p,q), g_p(p,q), g_q(p,q) < 0, \;\; f_q(p,q) > 0, \;\; p, q > 0 (i.e. predator P w/ f benefits from a high population of prey Q w/ q)
linear model: f(p,q) = a_1 + a_2 \cdot p + a_3 \cdot q, \;\; g(p,q) = a_4 + a_5 \cdot p + a_6 \cdot q
a_2, a_5, a_6 < 0, \;\; a_3 > 0 (for predator P and prey Q; can vary depending on who's who)
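A sketch of the attractiveness check for the linear model (parameter values invented, chosen so that (p, q) = (1, 1) is an interior equilibrium with f = g = 0): for a 2x2 Jacobian J, all eigenvalues have negative real part iff trace(J) < 0 and det(J) > 0.

```python
def jacobian_at_equilibrium(a1, a2, a3, a4, a5, a6, p, q):
    """Jacobian of (f(p,q)*p, g(p,q)*q) at an interior equilibrium f = g = 0:
    d/dp (f*p) = f + f_p*p = a2*p there, and analogously for the other entries."""
    return [[a2 * p, a3 * p],
            [a5 * q, a6 * q]]

def is_attractive(J):
    trace = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return trace < 0 and det > 0

# predator p, prey q: a2, a5, a6 < 0 and a3 > 0 as required above;
# with these values f = -p + q and g = 2 - p - q vanish at (1, 1)
J = jacobian_at_equilibrium(0, -1, 1, 2, -1, -1, p=1.0, q=1.0)
assert is_attractive(J)
```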
symbiosis: both species benefit from each other in a community (e.g. parasite vs. host)
Ordinary Differential Equations (ODEs)
see NumProg script (machine numbers, rounding and rounding errors, condition, stability, stiffness, explicit / implicit euler, heun, runge-kutta, local / global discretization error, order of convergence, ill-conditioned and well-conditioned problems)
initial value problem: value at the beginning of the considered time interval is given
Neumann boundary conditions: values of the first derivative \dot y(t) are given
Control Engineering, Fuzzy Logic
X: crisp set (i.e. a mathematical set in the typical sense)
\tilde A: fuzzy set over X, characterized by a membership function
\mu(\cdot, X, \tilde A) : X \to [0,1]: membership function, stating for each element x the degree to which it belongs to the corresponding fuzzy set
\text{supp}(\tilde A) = \{x \in X : \mu(x, X, \tilde A) > 0\}: support, all elements with positive membership degree
\tilde A_\alpha = \{(x, \mu(x, X, \tilde A)) : x \in X \land \mu(x, X, \tilde A) \geq \alpha\}: \alpha-level set / \alpha-cut, all elements with degree above a certain threshold
\tilde A \uparrow \alpha: cut fuzzy set, "cut" / limit all degrees higher than \alpha down to \alpha
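These fuzzy-set operations can be sketched in a few lines (the triangular membership function and all numbers are invented for illustration):

```python
X = list(range(11))  # crisp set {0, ..., 10}

def mu(x):
    """Triangular membership function peaking at x = 5."""
    return max(0.0, 1.0 - abs(x - 5) / 3.0)

def support():
    """All elements with strictly positive membership degree."""
    return [x for x in X if mu(x) > 0]

def alpha_cut(alpha):
    """All elements with membership degree >= alpha."""
    return [x for x in X if mu(x) >= alpha]

def cut(x, alpha):
    """Cut fuzzy set: limit all degrees above alpha down to alpha."""
    return min(mu(x), alpha)

assert support() == [3, 4, 5, 6, 7]
assert alpha_cut(0.5) == [4, 5, 6]
assert cut(5, 0.8) == 0.8
```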
Linguistic Variables
linguistic variable: name v and possible values (linguistic terms), where each linguistic variable has an associated crisp set X and each linguistic term is a fuzzy set defined over the crisp set X (i.e. each term has a respective membership function)
example:
linguistic variable: "TEMPERATURE"
assigned crisp set: real axis (temperature values)
\varrho(T) = \frac{\sigma(T)}{E(T)}: coefficient of variation (relative)
p(A \mid B) = \frac{p(A \cap B)}{p(B)}: conditional probability
p(T \leq t + \Delta t \mid T > t) = \frac{F_T(t + \Delta t) - F_T(t)}{1 - F_T(t)}: conditional probability
h_T(t) = \frac{f_T(t)}{1 - F_T(t)}: hazard rate; risk of the interval ending in time (e.g. a customer arrives in the next \Delta t seconds)
negative exponential distribution: predicts the time interval between two event occurrences
Poisson distribution: predicts the number of event occurrences in an interval (memoryless, i.e. independent of history)
\vartheta: event (hazard) rate, constant
h_T(t) = \vartheta = E\left(\frac{\text{\# of events in } [t_1, t_2]}{t_2 - t_1}\right)
queuing system (QS): system in which jobs can't get lost (i.e. if the service unit is occupied, the job waits in a canal)
elementary queuing system (eIQS)
one canal as waiting pool with capacity k_{WP}
one service unit with capacity k_U, or k_U simple service units
inter-arrival time: timespan between two consecutive arrival events
D: deterministic / constant
M: Markovian; negative exponentially distributed w/ distribution function 1 - e^{-\lambda t} and expected value \frac{1}{\lambda}
G: general, arbitrarily distributed
Kendall notation: \text{arrival process} \mid \text{service process} \mid n = k_{SU} \mid k_{WP} \mid N \mid D
M|M|1: arrival and service process negative exponentially distributed, one service unit
E(F) = \frac{r}{1-r}: mean number of jobs in the system
E(D) = \lambda: throughput
E(B) = \frac{1}{\mu}: mean service time
E(Y) = \frac{E(B)}{1-r}: mean response time
E(W) = \frac{r}{1-r} \cdot E(B): mean waiting time
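The M|M|1 formulas can be evaluated and cross-checked in a few lines (rates invented; the consistency checks are Little's law E(F) = \lambda \cdot E(Y) and Y = W + B):

```python
def mm1(lam, mu):
    """Evaluate the standard M|M|1 quantities; requires utilization r = lam/mu < 1."""
    r = lam / mu                 # utilization
    EB = 1.0 / mu                # mean service time
    EF = r / (1.0 - r)           # mean number of jobs in the system
    EY = EB / (1.0 - r)          # mean response time
    EW = r / (1.0 - r) * EB      # mean waiting time
    return {"r": r, "E(B)": EB, "E(F)": EF, "E(Y)": EY, "E(W)": EW, "E(D)": lam}

q = mm1(lam=2.0, mu=5.0)
assert abs(q["E(F)"] - q["E(D)"] * q["E(Y)"]) < 1e-12    # Little's law
assert abs(q["E(Y)"] - (q["E(W)"] + q["E(B)"])) < 1e-12  # response = waiting + service
```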
M|G|1M∣G∣1: arrival process negative exponentially distributed, service process arbitrarily distributed, one service unit
task stream: arrival process + type of task (e.g. (M, \text{type } A))
task load: set of task streams
\nu_i = \frac{E(D_i)}{E(D_S)}: number of visits of vertex i (ratio between throughput at i and throughput of the network)
traffic bottleneck: vertex / vertices with maximum utilization E(R_i) (max. 1)
stochastic process: random variable X depending on time t
continuous process: X(t) \in \R
discrete process: X(t) \in \N, \Z
process in continuous time: t \in \R
process in discrete time: t countable
state space: set of possible values of X
stationary process: distribution stays the same over time, E(X(t)) = E(X) (as such, no absorbing states)
independent process: not depending on former results, p(X(t_2) \leq x_2 \mid X(t_1) = x_1) = p(X(t_2) \leq x_2)
Markov process: new state depends only on the current state and not on previous history
homogeneous Markov process (HMP): transition probabilities are constant in time, p(X(t_{n+1}) \leq x_{n+1} \mid X(t_n) = x_n) (here discrete)
irreducible: all states are reachable from each other
all states are either transient or positively recurrent or null recurrent
if one state is periodic, all states are with same period length
closed subset of states: impossible to escape into the complementary set
absorbing state: closed subset consisting of only one state
recurrence probability: f_i = p(X(t_{n+k}) = i \text{ for some } k \geq 1 \mid X(t_n) = i), probability that sometime in the future we revisit state i, given we're currently in i
recurrence time k or K
recurrent state: f_i = 1 (i.e. we'll surely come back)
periodic: possible recurrence times satisfy k \in \{j \cdot k_0 \mid j \in \N\} for a fixed period k_0 (i.e. returns happen only at multiples of k_0)
positively recurrent: f_i = 1, E(K) < \infty (you'll be back in finite expected time)
null recurrent: f_i = 1, E(K) = \infty (you would have to wait an infinite amount of time)
transient: f_i < 1 (recurrence not certain)
state probabilities: p_i(t_k) = \sum_{j=1}^n p_j(t_{k-1}) \cdot p_{ji} (sum over j of the probability of state j in the last time step times the transition probability from j to i), with \sum_{i=1}^n p_i(t_k) = 1
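The state-probability iteration can be sketched for a small homogeneous Markov chain (transition matrix invented). For an irreducible, aperiodic chain the state probabilities converge to the stationary distribution, here \pi = (2/7, 5/7) from \pi P = \pi.

```python
def step(p, P):
    """One iteration p_i(t_k) = sum_j p_j(t_{k-1}) * p_ji."""
    n = len(p)
    return [sum(p[j] * P[j][i] for j in range(n)) for i in range(n)]

P = [[0.5, 0.5],
     [0.2, 0.8]]
p = [1.0, 0.0]                    # start in state 0

for _ in range(100):
    p = step(p, P)

assert abs(sum(p) - 1.0) < 1e-9   # probabilities stay normalized
# stationary distribution of this chain: pi = (2/7, 5/7)
assert abs(p[0] - 2 / 7) < 1e-6 and abs(p[1] - 5 / 7) < 1e-6
```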