$\require{mathtools} % %%% GENERIC MATH %%% % % Environments \newcommand{\al}[1]{\begin{align}#1\end{align}} % need this for \tag{} to work \renewcommand{\r}{\mathrm} % % Greek \newcommand{\eps}{\epsilon} \newcommand{\veps}{\varepsilon} \newcommand{\Om}{\Omega} \newcommand{\om}{\omega} \newcommand{\Th}{\Theta} \let\fi\phi % because it looks like an f \let\phi\varphi % because it looks like a p % % Miscellaneous shortcuts % .. over and under \newcommand{\ss}[1]{_{\substack{#1}}} \newcommand{\ob}{\overbrace} \newcommand{\ub}{\underbrace} \newcommand{\ol}{\overline} \newcommand{\tld}{\widetilde} \newcommand{\HAT}{\widehat} \newcommand{\f}{\frac} \newcommand{\s}[2]{#1 /\mathopen{}#2} \newcommand{\rt}[1]{ {\sqrt{#1}}} % .. relations \newcommand{\sr}{\stackrel} \newcommand{\sse}{\subseteq} \newcommand{\ce}{\coloneqq} \newcommand{\ec}{\eqqcolon} \newcommand{\ap}{\approx} \newcommand{\ls}{\lesssim} \newcommand{\gs}{\gtrsim} % .. miscer \newcommand{\q}{\quad} \newcommand{\qq}{\qquad} \newcommand{\heart}{\heartsuit} % % Delimiters % (I needed to create my own because the MathJax version of \DeclarePairedDelimiter doesn't have \mathopen{} and that messes up the spacing) % .. one-part \newcommand{\p}[1]{\mathopen{}\left( #1 \right)} \newcommand{\b}[1]{\mathopen{}\left[ #1 \right]} \newcommand{\set}[1]{\mathopen{}\left\{ #1 \right\}} \newcommand{\abs}[1]{\mathopen{}\left\lvert #1 \right\rvert} \newcommand{\floor}[1]{\mathopen{}\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\mathopen{}\left\lceil #1 \right\rceil} \newcommand{\inner}[1]{\mathopen{}\left\langle #1 \right\rangle} % .... (use phantom to force at least the standard height of double bars) \newcommand{\norm}[1]{\mathopen{}\left\lVert #1 \vphantom{f} \right\rVert} \newcommand{\frob}[1]{\norm{#1}_\mathrm{F}} %% .. 
two-part \newcommand{\incond}[2]{#1 \mathop{}\middle|\mathop{} #2} \newcommand{\cond}[2]{ {\left.\incond{#1}{#2}\right.}} \newcommand{\pco}[2]{\p{\incond{#1}{#2}}} \newcommand{\bco}[2]{\b{\incond{#1}{#2}}} \newcommand{\setco}[2]{\set{\incond{#1}{#2}}} \newcommand{\at}[2]{ {\left.#1\right|_{#2}}} \newcommand{\pat}[2]{\p{\at{#1}{#2}}} \newcommand{\bat}[2]{\b{\at{#1}{#2}}} % ..... (use phantom to force at least the standard height of double bar) \newcommand{\oldpara}[2]{#1\vphantom{f} \mathop{}\middle\|\mathop{} #2} %\newcommand{\para}[2]{#1\vphantom{f} \mathop{}\middle\|\mathop{} #2} \newcommand{\para}[2]{\mathchoice{\begin{matrix}#1\\\hdashline#2\end{matrix}}{\begin{smallmatrix}#1\\\hdashline#2\end{smallmatrix}}{\begin{smallmatrix}#1\\\hdashline#2\end{smallmatrix}}{\begin{smallmatrix}#1\\\hdashline#2\end{smallmatrix}}} \newcommand{\ppa}[2]{\p{\para{#1}{#2}}} \newcommand{\bpa}[2]{\b{\para{#1}{#2}}} %\newcommand{\bpaco}[4]{\bpa{\incond{#1}{#2}}{\incond{#3}{#4}}} \newcommand{\bpaco}[4]{\bpa{\cond{#1}{#2}}{\cond{#3}{#4}}} % % Levels of closeness \newcommand{\scirc}[1]{\sr{\circ}{#1}} \newcommand{\sdot}[1]{\sr{.}{#1}} \newcommand{\slog}[1]{\sr{\log}{#1}} \newcommand{\createClosenessLevels}[7]{ \newcommand{#2}{\mathrel{(#1)}} \newcommand{#3}{\mathrel{#1}} \newcommand{#4}{\mathrel{#1\!\!#1}} \newcommand{#5}{\mathrel{#1\!\!#1\!\!#1}} \newcommand{#6}{\mathrel{(\sdot{#1})}} \newcommand{#7}{\mathrel{(\slog{#1})}} } \let\lt\undefined \let\gt\undefined % .. vanilla versions (is it within a constant?) 
\newcommand{\ez}{\scirc=} \newcommand{\eq}{\simeq} \newcommand{\eqq}{\mathrel{\eq\!\!\eq}} \newcommand{\eqqq}{\mathrel{\eq\!\!\eq\!\!\eq}} \newcommand{\lez}{\scirc\le} \newcommand{\lq}{\preceq} \newcommand{\lqq}{\mathrel{\lq\!\!\lq}} \newcommand{\lqqq}{\mathrel{\lq\!\!\lq\!\!\lq}} \newcommand{\gez}{\scirc\ge} \newcommand{\gq}{\succeq} \newcommand{\gqq}{\mathrel{\gq\!\!\gq}} \newcommand{\gqqq}{\mathrel{\gq\!\!\gq\!\!\gq}} \newcommand{\lz}{\scirc<} \newcommand{\lt}{\prec} \newcommand{\ltt}{\mathrel{\lt\!\!\lt}} \newcommand{\lttt}{\mathrel{\lt\!\!\lt\!\!\lt}} \newcommand{\gz}{\scirc>} \newcommand{\gt}{\succ} \newcommand{\gtt}{\mathrel{\gt\!\!\gt}} \newcommand{\gttt}{\mathrel{\gt\!\!\gt\!\!\gt}} % .. dotted versions (is it equal in the limit?) \newcommand{\ed}{\sdot=} \newcommand{\eqd}{\sdot\eq} \newcommand{\eqqd}{\sdot\eqq} \newcommand{\eqqqd}{\sdot\eqqq} \newcommand{\led}{\sdot\le} \newcommand{\lqd}{\sdot\lq} \newcommand{\lqqd}{\sdot\lqq} \newcommand{\lqqqd}{\sdot\lqqq} \newcommand{\ged}{\sdot\ge} \newcommand{\gqd}{\sdot\gq} \newcommand{\gqqd}{\sdot\gqq} \newcommand{\gqqqd}{\sdot\gqqq} \newcommand{\ld}{\sdot<} \newcommand{\ltd}{\sdot\lt} \newcommand{\lttd}{\sdot\ltt} \newcommand{\ltttd}{\sdot\lttt} \newcommand{\gd}{\sdot>} \newcommand{\gtd}{\sdot\gt} \newcommand{\gttd}{\sdot\gtt} \newcommand{\gtttd}{\sdot\gttt} % .. log versions (is it equal up to log?) 
\newcommand{\elog}{\slog=} \newcommand{\eqlog}{\slog\eq} \newcommand{\eqqlog}{\slog\eqq} \newcommand{\eqqqlog}{\slog\eqqq} \newcommand{\lelog}{\slog\le} \newcommand{\lqlog}{\slog\lq} \newcommand{\lqqlog}{\slog\lqq} \newcommand{\lqqqlog}{\slog\lqqq} \newcommand{\gelog}{\slog\ge} \newcommand{\gqlog}{\slog\gq} \newcommand{\gqqlog}{\slog\gqq} \newcommand{\gqqqlog}{\slog\gqqq} \newcommand{\llog}{\slog<} \newcommand{\ltlog}{\slog\lt} \newcommand{\lttlog}{\slog\ltt} \newcommand{\ltttlog}{\slog\lttt} \newcommand{\glog}{\slog>} \newcommand{\gtlog}{\slog\gt} \newcommand{\gttlog}{\slog\gtt} \newcommand{\gtttlog}{\slog\gttt} % % Miscellaneous \newcommand{\LHS}{\mathrm{LHS}} \newcommand{\RHS}{\mathrm{RHS}} % .. operators \DeclareMathOperator{\poly}{poly} \DeclareMathOperator{\polylog}{polylog} \DeclareMathOperator{\quasipoly}{quasipoly} \DeclareMathOperator{\negl}{negl} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} % .. functions \DeclareMathOperator{\id}{id} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\err}{err} \DeclareMathOperator{\ReLU}{ReLU} % .. analysis \let\d\undefined \newcommand{\d}{\operatorname{d}\mathopen{}} \newcommand{\df}[2]{ {\f{\d #1}{\d #2}}} \newcommand{\ds}[2]{ {\s{\d #1}{\d #2}}} \newcommand{\part}{\partial} \newcommand{\partf}[2]{\f{\part #1}{\part #2}} \newcommand{\parts}[2]{\s{\part #1}{\part #2}} \newcommand{\grad}[1]{\mathop{\nabla\!_{#1}}} % .. 
sets of numbers \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} % %%% SPECIALIZED MATH %%% % % Logic \renewcommand{\and}{\wedge} \newcommand{\AND}{\bigwedge} \newcommand{\or}{\vee} \newcommand{\OR}{\bigvee} \newcommand{\xor}{\oplus} \newcommand{\XOR}{\bigoplus} \newcommand{\union}{\cup} \newcommand{\inter}{\cap} \newcommand{\UNION}{\bigcup} \newcommand{\INTER}{\bigcap} \newcommand{\comp}{\overline} \newcommand{\true}{\r{true}} \newcommand{\false}{\r{false}} \newcommand{\tf}{\set{\true,\false}} \DeclareMathOperator{\One}{\mathbb{1}} \DeclareMathOperator{\1}{\mathbb{1}} % % Linear algebra \renewcommand{\span}{\mathrm{span}} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\proj}{proj} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\Img}{Im} \newcommand{\transp}{\mathsf{T}} \renewcommand{\t}{^\transp} % ... named tensors \newcommand{\namedtensorstrut}{\vphantom{fg}} % milder than \mathstrut \newcommand{\name}[1]{\mathsf{\namedtensorstrut #1}} \newcommand{\nbin}[2]{\mathbin{\underset{\substack{#1}}{\namedtensorstrut #2}}} \newcommand{\ndot}[1]{\nbin{#1}{\odot}} \newcommand{\ncat}[1]{\nbin{#1}{\oplus}} \newcommand{\nsum}[1]{\sum\limits_{\substack{#1}}} \newcommand{\nfun}[2]{\mathop{\underset{\substack{#1}}{\namedtensorstrut\mathrm{#2}}}} \newcommand{\ndef}[2]{\newcommand{#1}{\name{#2}}} \newcommand{\nt}[1]{^{\transp(#1)}} % % Probability \newcommand{\Normal}{\mathcal{N}} \let\Pr\undefined \DeclareMathOperator*{\Pr}{Pr} \DeclareMathOperator*{\G}{\mathbb{G}} \DeclareMathOperator*{\Odds}{Od} \DeclareMathOperator*{\E}{E} \DeclareMathOperator*{\Var}{Var} \DeclareMathOperator*{\Cov}{Cov} \DeclareMathOperator*{\corr}{corr} \DeclareMathOperator*{\median}{median} \newcommand{\dTV}{d_{\mathrm{TV}}} \newcommand{\dHel}{d_{\mathrm{Hel}}} \newcommand{\dJS}{d_{\mathrm{JS}}} % ... 
information theory \let\H\undefined \DeclareMathOperator*{\H}{H} \DeclareMathOperator*{\I}{I} \DeclareMathOperator*{\D}{D} % %%% SPECIALIZED COMPUTER SCIENCE %%% % % Complexity classes % .. classical \newcommand{\Poly}{\mathsf{P}} \newcommand{\NP}{\mathsf{NP}} \newcommand{\PH}{\mathsf{PH}} \newcommand{\PSPACE}{\mathsf{PSPACE}} \renewcommand{\L}{\mathsf{L}} % .. probabilistic \newcommand{\formost}{\mathsf{Я}} \newcommand{\RP}{\mathsf{RP}} \newcommand{\BPP}{\mathsf{BPP}} \newcommand{\MA}{\mathsf{MA}} \newcommand{\AM}{\mathsf{AM}} \newcommand{\IP}{\mathsf{IP}} \newcommand{\RL}{\mathsf{RL}} % .. circuits \newcommand{\NC}{\mathsf{NC}} \newcommand{\AC}{\mathsf{AC}} \newcommand{\ACC}{\mathsf{ACC}} \newcommand{\TC}{\mathsf{TC}} \newcommand{\Ppoly}{\mathsf{P}/\poly} \newcommand{\Lpoly}{\mathsf{L}/\poly} % .. resources \newcommand{\TIME}{\mathsf{TIME}} \newcommand{\SPACE}{\mathsf{SPACE}} \newcommand{\TISP}{\mathsf{TISP}} \newcommand{\SIZE}{\mathsf{SIZE}} % .. keywords \newcommand{\co}{\mathsf{co}} \newcommand{\Prom}{\mathsf{Promise}} % % Boolean analysis \newcommand{\zo}{\set{0,1}} \newcommand{\pmo}{\set{\pm 1}} \newcommand{\zpmo}{\set{0,\pm 1}} \newcommand{\harpoon}{\!\upharpoonright\!} \newcommand{\rr}[2]{#1\harpoon_{#2}} \newcommand{\Fou}[1]{\widehat{#1}} \DeclareMathOperator{\Ind}{\mathrm{Ind}} \DeclareMathOperator{\Inf}{\mathrm{Inf}} \newcommand{\Der}[1]{\operatorname{D}_{#1}\mathopen{}} \newcommand{\Exp}[1]{\operatorname{E}_{#1}\mathopen{}} \DeclareMathOperator{\Stab}{\mathrm{Stab}} \DeclareMathOperator{\T}{T} \DeclareMathOperator{\sens}{\mathrm{s}} \DeclareMathOperator{\bsens}{\mathrm{bs}} \DeclareMathOperator{\fbsens}{\mathrm{fbs}} \DeclareMathOperator{\Cert}{\mathrm{C}} \DeclareMathOperator{\DT}{\mathrm{DT}} \DeclareMathOperator{\CDT}{\mathrm{CDT}} % canonical \DeclareMathOperator{\ECDT}{\mathrm{ECDT}} \DeclareMathOperator{\CDTv}{\mathrm{CDT_{vars}}} \DeclareMathOperator{\ECDTv}{\mathrm{ECDT_{vars}}} \DeclareMathOperator{\CDTt}{\mathrm{CDT_{terms}}} 
\DeclareMathOperator{\ECDTt}{\mathrm{ECDT_{terms}}} \DeclareMathOperator{\CDTw}{\mathrm{CDT_{weighted}}} \DeclareMathOperator{\ECDTw}{\mathrm{ECDT_{weighted}}} \DeclareMathOperator{\AvgDT}{\mathrm{AvgDT}} \DeclareMathOperator{\PDT}{\mathrm{PDT}} % partial decision tree \DeclareMathOperator{\DTsize}{\mathrm{DT_{size}}} \DeclareMathOperator{\W}{\mathbf{W}} % .. functions (small caps sadly doesn't work) \DeclareMathOperator{\Par}{\mathrm{Par}} \DeclareMathOperator{\Maj}{\mathrm{Maj}} \DeclareMathOperator{\HW}{\mathrm{HW}} \DeclareMathOperator{\Thr}{\mathrm{Thr}} \DeclareMathOperator{\Tribes}{\mathrm{Tribes}} \DeclareMathOperator{\RotTribes}{\mathrm{RotTribes}} \DeclareMathOperator{\CycleRun}{\mathrm{CycleRun}} \DeclareMathOperator{\SAT}{\mathrm{SAT}} \DeclareMathOperator{\UniqueSAT}{\mathrm{UniqueSAT}} % % Dynamic optimality \newcommand{\OPT}{\mathsf{OPT}} \newcommand{\Alt}{\mathsf{Alt}} \newcommand{\Funnel}{\mathsf{Funnel}} % % Alignment \DeclareMathOperator{\Amp}{\mathrm{Amp}} % %%% TYPESETTING %%% % % In text \renewcommand{\th}{^{\mathrm{th}}} \newcommand{\degree}{^\circ} % % Fonts % .. 
bold \newcommand{\BA}{\boldsymbol{A}} \newcommand{\BB}{\boldsymbol{B}} \newcommand{\BC}{\boldsymbol{C}} \newcommand{\BD}{\boldsymbol{D}} \newcommand{\BE}{\boldsymbol{E}} \newcommand{\BF}{\boldsymbol{F}} \newcommand{\BG}{\boldsymbol{G}} \newcommand{\BH}{\boldsymbol{H}} \newcommand{\BI}{\boldsymbol{I}} \newcommand{\BJ}{\boldsymbol{J}} \newcommand{\BK}{\boldsymbol{K}} \newcommand{\BL}{\boldsymbol{L}} \newcommand{\BM}{\boldsymbol{M}} \newcommand{\BN}{\boldsymbol{N}} \newcommand{\BO}{\boldsymbol{O}} \newcommand{\BP}{\boldsymbol{P}} \newcommand{\BQ}{\boldsymbol{Q}} \newcommand{\BR}{\boldsymbol{R}} \newcommand{\BS}{\boldsymbol{S}} \newcommand{\BT}{\boldsymbol{T}} \newcommand{\BU}{\boldsymbol{U}} \newcommand{\BV}{\boldsymbol{V}} \newcommand{\BW}{\boldsymbol{W}} \newcommand{\BX}{\boldsymbol{X}} \newcommand{\BY}{\boldsymbol{Y}} \newcommand{\BZ}{\boldsymbol{Z}} \newcommand{\Ba}{\boldsymbol{a}} \newcommand{\Bb}{\boldsymbol{b}} \newcommand{\Bc}{\boldsymbol{c}} \newcommand{\Bd}{\boldsymbol{d}} \newcommand{\Be}{\boldsymbol{e}} \newcommand{\Bf}{\boldsymbol{f}} \newcommand{\Bg}{\boldsymbol{g}} \newcommand{\Bh}{\boldsymbol{h}} \newcommand{\Bi}{\boldsymbol{i}} \newcommand{\Bj}{\boldsymbol{j}} \newcommand{\Bk}{\boldsymbol{k}} \newcommand{\Bp}{\boldsymbol{p}} \newcommand{\Bq}{\boldsymbol{q}} \newcommand{\Br}{\boldsymbol{r}} \newcommand{\Bs}{\boldsymbol{s}} \newcommand{\Bt}{\boldsymbol{t}} \newcommand{\Bu}{\boldsymbol{u}} \newcommand{\Bv}{\boldsymbol{v}} \newcommand{\Bw}{\boldsymbol{w}} \newcommand{\Bx}{\boldsymbol{x}} \newcommand{\By}{\boldsymbol{y}} \newcommand{\Bz}{\boldsymbol{z}} \newcommand{\Balpha}{\boldsymbol{\alpha}} \newcommand{\Bbeta}{\boldsymbol{\beta}} \newcommand{\Bgamma}{\boldsymbol{\gamma}} \newcommand{\Bdelta}{\boldsymbol{\delta}} \newcommand{\Beps}{\boldsymbol{\eps}} \newcommand{\Bveps}{\boldsymbol{\veps}} \newcommand{\Bzeta}{\boldsymbol{\zeta}} \newcommand{\Beta}{\boldsymbol{\eta}} \newcommand{\Btheta}{\boldsymbol{\theta}} \newcommand{\Biota}{\boldsymbol{\iota}} 
\newcommand{\Bkappa}{\boldsymbol{\kappa}} \newcommand{\Blambda}{\boldsymbol{\lambda}} \newcommand{\Bmu}{\boldsymbol{\mu}} \newcommand{\Bnu}{\boldsymbol{\nu}} \newcommand{\Bxi}{\boldsymbol{\xi}} \newcommand{\Bomicron}{\boldsymbol{\omicron}} \newcommand{\Bpi}{\boldsymbol{\pi}} \newcommand{\Brho}{\boldsymbol{\rho}} \newcommand{\Bsigma}{\boldsymbol{\sigma}} \newcommand{\Btau}{\boldsymbol{\tau}} \newcommand{\Bupsilon}{\boldsymbol{\upsilon}} \newcommand{\Bphi}{\boldsymbol{\phi}} \newcommand{\Bfi}{\boldsymbol{\fi}} \newcommand{\Bchi}{\boldsymbol{\chi}} \newcommand{\Bpsi}{\boldsymbol{\psi}} \newcommand{\Bomega}{\boldsymbol{\omega}} % .. calligraphic \newcommand{\CA}{\mathcal{A}} \newcommand{\CB}{\mathcal{B}} \newcommand{\CC}{\mathcal{C}} \newcommand{\CD}{\mathcal{D}} \newcommand{\CE}{\mathcal{E}} \newcommand{\CF}{\mathcal{F}} \newcommand{\CG}{\mathcal{G}} \newcommand{\CH}{\mathcal{H}} \newcommand{\CI}{\mathcal{I}} \newcommand{\CJ}{\mathcal{J}} \newcommand{\CK}{\mathcal{K}} \newcommand{\CL}{\mathcal{L}} \newcommand{\CM}{\mathcal{M}} \newcommand{\CN}{\mathcal{N}} \newcommand{\CO}{\mathcal{O}} \newcommand{\CP}{\mathcal{P}} \newcommand{\CQ}{\mathcal{Q}} \newcommand{\CR}{\mathcal{R}} \newcommand{\CS}{\mathcal{S}} \newcommand{\CT}{\mathcal{T}} \newcommand{\CU}{\mathcal{U}} \newcommand{\CV}{\mathcal{V}} \newcommand{\CW}{\mathcal{W}} \newcommand{\CX}{\mathcal{X}} \newcommand{\CY}{\mathcal{Y}} \newcommand{\CZ}{\mathcal{Z}} % .. 
typewriter \newcommand{\TA}{\mathtt{A}} \newcommand{\TB}{\mathtt{B}} \newcommand{\TC}{\mathtt{C}} \newcommand{\TD}{\mathtt{D}} \newcommand{\TE}{\mathtt{E}} \newcommand{\TF}{\mathtt{F}} \newcommand{\TG}{\mathtt{G}} \newcommand{\TH}{\mathtt{H}} \newcommand{\TI}{\mathtt{I}} \newcommand{\TJ}{\mathtt{J}} \newcommand{\TK}{\mathtt{K}} \newcommand{\TL}{\mathtt{L}} \newcommand{\TM}{\mathtt{M}} \newcommand{\TN}{\mathtt{N}} \newcommand{\TO}{\mathtt{O}} \newcommand{\TP}{\mathtt{P}} \newcommand{\TQ}{\mathtt{Q}} \newcommand{\TR}{\mathtt{R}} \newcommand{\TS}{\mathtt{S}} \newcommand{\TT}{\mathtt{T}} \newcommand{\TU}{\mathtt{U}} \newcommand{\TV}{\mathtt{V}} \newcommand{\TW}{\mathtt{W}} \newcommand{\TX}{\mathtt{X}} \newcommand{\TY}{\mathtt{Y}} \newcommand{\TZ}{\mathtt{Z}}$

If you’ve ever wondered:

  • Why is theoretical computer science primarily concerned with things that are polynomial in $n$, linear in $\log n$, and quasipolynomial (!) in $\exp(n)$?
  • What does the prefix “quasi” mean in “quasipolynomial”? What’s the difference between “near”-linear and “almost”-linear? How about “sub”exponential and “strongly sub”quadratic?
  • Can you summarize basically every asymptotic comparison into a consistent set of notations?

Then you’ve come to the right place!

Levels of magnitude

All the comparisons we’ll see involve thinking about a quantity $f$ at different levels of exponentiation: if $f$ itself is at level $0$, then

  • $\exp(f)$ is at level 1,
  • $\exp(\exp(f))$ is at level $2$,
  • $\log f$ is at level $-1$,
  • $\log \log f$ is at level $-2$,
  • etc.
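Concretely (a small Python sketch; natural logarithms assumed throughout), moving between levels is just iterating $\exp$ or $\log$:

```python
import math

def shift_level(x, k):
    """Move x by k levels of magnitude: k > 0 applies exp repeatedly,
    k < 0 applies log repeatedly (natural base assumed)."""
    for _ in range(abs(k)):
        x = math.exp(x) if k > 0 else math.log(x)
    return x

n = 3.0
assert math.isclose(shift_level(n, 2), math.exp(math.exp(n)))   # level 2
assert math.isclose(shift_level(n, -1), math.log(n))            # level -1
assert math.isclose(shift_level(shift_level(n, -2), 2), n)      # levels cancel out
```
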

Closeness levels

Let’s use these levels of magnitude to define different ways in which two quantities $f$ and $g$ can be considered equivalent.

Close

The simplest comparison is to just look at the difference of $f$ and $g$: let’s say that $f$ is “close” to $g$ if $|f-g| \le C$ for some constant $C$.

Since this is operating at level $0$, we’ll denote it with, uhh, zero $\simeq$ signs? Well that would be confusing so instead let’s just put the $\simeq$ in parentheses:

\[ \newcommand{\createCommands}[8]{ \newcommand{#2}{\mathrel{(#1)}} \newcommand{#3}{\mathrel{#1}} \newcommand{#4}{\mathrel{#1\!\!#1}} \newcommand{#5}{\mathrel{#1\!\!#1\!\!#1}} \newcommand{#6}{\mathrel{\ub{#1 \dotsb #1}_\mathit{k}}} \newcommand{#7}{\mathrel{(\sdot{#1})}} \newcommand{#8}{\mathrel{(\slog{#1})}} } \createCommands{\simeq}{\eqZero}{\eqOne}{\eqTwo}{\eqThree}{\eqK}{\eqZeroDot}{\eqZeroLog} \createCommands{\preceq}{\leZero}{\leOne}{\leTwo}{\leThree}{\leK}{\leZeroDot}{\leZeroLog} \createCommands{\succeq}{\geZero}{\geOne}{\geTwo}{\geThree}{\geK}{\geZeroDot}{\geZeroLog} \createCommands{\prec}{\ltZero}{\ltOne}{\ltTwo}{\ltThree}{\ltK}{\ltZeroDot}{\ltZeroLog} \createCommands{\succ}{\gtZero}{\gtOne}{\gtTwo}{\gtThree}{\gtK}{\gtZeroDot}{\gtZeroLog} f \eqZero g \iff \exists C:|f-g| \le C. \]

Linear

The next level is linearity! Indeed, $f$ is linear in $g$ if their logarithms are close:

\[ \al{ f = \Theta(g)\ &\iff \frac{f}{g} = \Theta(1)\\ &\iff \abs{\log f - \log g} \le O(1). } \]

Since the comparison is operating one level down (at level $-1$), let’s denote it with one $\simeq$ sign:

\[ f \eqOne g \iff \exists C: |\log f - \log g| \le C. \]
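As a sanity check (a Python sketch; the "for all $n$" in the definition is only spot-checked at a few growing values of $n$), $f(n) = 3n$ is linear in $g(n) = n$ but not close to it:

```python
import math

def level_gap(f, g, level, n):
    """|f(n) - g(n)| after applying -level logarithms (level <= 0)."""
    x, y = f(n), g(n)
    for _ in range(-level):
        x, y = math.log(x), math.log(y)
    return abs(x - y)

# f(n) = 3n is linear in g(n) = n: one log down, the gap is always log 3 ...
for n in [10, 1000, 10**6]:
    assert abs(level_gap(lambda m: 3 * m, lambda m: m, -1, n) - math.log(3)) < 1e-9
# ... but f is not "close" to g: at level 0 the gap 2n is unbounded
assert level_gap(lambda m: 3 * m, lambda m: m, 0, 10**6) == 2 * 10**6
```
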

Polynomial

By going down one more logarithm, we get polynomial (also known as “power”) equivalence, which we denote with two $\simeq$ signs:

\[ \al{ f = \poly(g)\ &\iff \log f = \Theta(\log g)\\ &\iff \log f \eqOne \log g\\ &\iff f \eqTwo g. } \]

Quasipolynomial

The last level that’s commonly used is quasipolynomial equivalence, which is when the logarithm of $f$ is poly-logarithmic in $g$:

\[ \al{ f = \quasipoly(g)\ &\iff \log f = \polylog(g)\\ &\iff \log f \eqTwo \log g\\ &\iff f \eqThree g. } \]
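For instance (a numeric sketch, working in log space since $n^{\log n}$ overflows floating point for large $n$), $n^{\log n}$ is quasipolynomial in $n$: three logarithms down, the two sides differ by exactly the constant $\log 2$.

```python
import math

# f = n**log(n), handled in log space: log f = (log n)**2
for n in [1e3, 1e6, 1e12]:
    log_f = math.log(n) ** 2
    lhs = math.log(math.log(log_f))        # log^(3) of f
    rhs = math.log(math.log(math.log(n)))  # log^(3) of n
    # log^(3) f = log(2 loglog n) = log 2 + log^(3) n: a constant gap
    assert abs((lhs - rhs) - math.log(2)) < 1e-9
```
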

Extra-close?

Could you go in the other direction, comparing $\exp(f)$ and $\exp(g)$? Well you could, but

\[ \abs{\exp(f)-\exp(g)} \le C \]

requires $f$ and $g$ to be very close indeed, and how close would actually depend on what base (e.g. $2$, $e$, $10$) you choose for the function $\exp(\cdot)$. So I have no idea what it could be useful for.

Defining “quasi”

We went from polynomial to quasipolynomial by comparing the logarithms instead of the quantities directly. This is the meaning of the prefix “quasi”: compare the logs!

This means in particular that:

  • “linear” is “quasi-close”,
  • “polynomial” is “quasi-linear”1 or “quasi-quasi-close”,
  • “quasipolynomial” is “quasi-quasi-quasi-close”.

You can keep sticking “quasi-“ in front to go down the magnitude levels indefinitely, and each of these represents a looser and looser comparison: in general, we can define

\[ f \eqK g \iff \log^{(k)} f \eqZero \log^{(k)} g, \]

where $\log^{(k)}$ denotes the logarithm function iterated $k$ times.

But no matter how many quasi’s you add, $\exp(n)$ will never be quasi-…-quasi-close to $n$ as $n$ grows. Indeed,

\[ \exp(n) \eqK n \iff \log^{(k)} \exp(n) \eqZero \log^{(k)} n, \]

but $\log^{(k)} \exp(n) = \log^{(k-1)}n$ is always much bigger than $\log^{(k)}n$ for large enough $n$.
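This divergence can be spot-checked numerically (a sketch; the limit is only sampled at a few values of $n$):

```python
import math

def ilog(x, k):
    """The logarithm iterated k times."""
    for _ in range(k):
        x = math.log(x)
    return x

# log^(k) of exp(n) equals log^(k-1) of n, which outgrows log^(k) of n:
for k in [1, 2, 3]:
    gaps = [ilog(n, k - 1) - ilog(n, k) for n in [1e4, 1e8, 1e16]]
    assert gaps[0] < gaps[1] < gaps[2]   # the gap keeps growing at every level k
```
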

Applications

Logarithmic, polynomial, exponential time

When we talk about an algorithm being logarithmic-time, polynomial-time or exponential-time, it means its running time $t$ has the following relations:

\[ t \eqOne \log n \qquad t \eqTwo n \qquad t \eqThree \exp(n). \]

In particular, “logarithmic” really means “linear in the logarithm”, and “exponential” really means “quasipolynomial in the exponentiation”.

One way to explain this is that they’re defined relative to each other: $t$ is polynomial iff $\log t$ is logarithmic, and $t$ is exponential iff $\log t$ is polynomial. But if we unpack the definition, it also reveals our strange collective obsession with being close to $\log \log n$:

\[ \al{ t \eqOne \log n\ &\iff \log t \eqZero \log \log n\\ t \eqTwo n\ &\iff \log \log t \eqZero \log \log n\\ t \eqThree \exp(n)\ &\iff \log \log \log t \eqZero \log \log n. } \]

Quadratic, cubic

When $f$ is “quadratic” or “cubic” in $g$, it means it’s linear in $g^2$ or $g^3$:

\[ f \eqOne g^2 \qquad f \eqOne g^3. \]

This makes sense, since this is the lowest level you can go before $f$ is just generically polynomial in $g$: as an example,

\[ f \eqTwo g^2 \iff \log \log f \eqZero \log \log g^2 = \log \log g + \log 2, \]

and an additive $\log 2$ won't affect whether $\log \log f$ and $\log \log g$ are close to each other.
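Numerically (a sketch at a few values of $n$): two logs down, the gap between $n^2$ and $n$ is the constant $\log 2$, while one log down it is $\log n$, which diverges:

```python
import math

# Two logs down, n**2 and n differ by exactly log 2 (so n**2 is generically
# polynomial in n), while one log down the gap is log(n), which diverges
# (so n**2 is not linear in n).
for n in [1e2, 1e4, 1e8]:
    assert abs(math.log(math.log(n**2)) - math.log(math.log(n)) - math.log(2)) < 1e-9
    assert abs((math.log(n**2) - math.log(n)) - math.log(n)) < 1e-6
```
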

Lower and upper bounds

So far, we’ve been concerned with approximate equalities. How about approximate lower and upper bounds? To get those, just drop the absolute value. For example,

\[ \al{ f \le \poly(g)\ &\iff \log f \le O(\log g)\\ &\iff \log \log f - \log \log g \le O(1), } \]

and we will denote this as $f \leTwo g$. More generally, we define

\[ f\leK g \iff \exists C: \log^{(k)} f - \log^{(k)} g \le C \]

and symmetrically, $f\geK g \iff g\leK f$. For example, $f \geOne g \iff f \ge \Omega(g)$.2
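For instance (a numeric sketch), $f(n) = n/3$ satisfies $f \geOne g$ for $g(n) = n$, with the constant being $\log 3$:

```python
import math

# f(n) = n/3 is at least a constant fraction of g(n) = n, i.e. f geOne g:
# the one-sided gap log(g) - log(f) equals log 3 at every n.
for n in [10.0, 1e3, 1e9]:
    assert abs((math.log(n) - math.log(n / 3)) - math.log(3)) < 1e-9
```
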

Comparing small numbers

Sometimes, we want to compare how fast quantities decrease rather than grow: for example, for two quantities $\delta, \eps < 1$, it’s natural to express ideas like “$\delta$ is at worst polynomially small in $\eps$”.3 If we try to write this as

\[ \delta \geTwo \eps \iff \log \log \eps - \log \log \delta \le O(1), \]

then we run into a problem because both $\log \delta$ and $\log \eps$ are negative. One inelegant fix is to compare their inverses:

\[ 1/\delta \leTwo 1/\eps \iff \log\log (1/\delta) - \log \log(1/\eps) \le O(1). \]

But we can salvage the natural notation by adding a few rules whenever we take a logarithm:

  • if both values are positive, proceed normally;
  • if both values are negative, flip the signs and the direction of the inequality, then proceed normally;
  • otherwise (including if either of them is zero), stop and resolve the (in)equality immediately by direct comparison.

Applied to our case, this gives

\[ \al{ \delta \geTwo \eps\ &\iff \log \delta \geOne \log \eps\\ &\iff -\log \delta \leOne - \log \eps\tag{flip!}\\ &\iff \log (-\log \delta) - \log (-\log \eps) \le O(1), } \]

as desired.

This also allows us to express the (admittedly weird) idea that $-2n$ is linear in $-n$ (i.e. $-2n \eqOne -n$), but not linear in $0$ or $n$.
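These rules are easy to mechanize. Here is a sketch in Python (the function name `log_le` is made up for illustration) that decides $x \le y$ by comparing logarithms:

```python
import math

def log_le(x, y):
    """Decide x <= y via logarithms, following the sign rules:
    both positive: compare the logs; both negative: flip the signs and
    the direction; mixed signs (or a zero): compare directly."""
    if x > 0 and y > 0:
        return math.log(x) <= math.log(y)
    if x < 0 and y < 0:
        return math.log(-y) <= math.log(-x)   # flipped comparison
    return x <= y

assert log_le(2.0, 8.0) and not log_le(8.0, 2.0)      # both positive
assert log_le(-8.0, -2.0) and not log_le(-2.0, -8.0)  # both negative: flip
assert log_le(-1.0, 0.0) and log_le(0.0, 3.0)         # zero: direct comparison
# and the flipped rule makes -2n "linear" in -n: |log(2n) - log(n)| = log 2
assert abs((math.log(2e6) - math.log(1e6)) - math.log(2)) < 1e-9
```
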

Modifiers

This basic approach (of comparing quantities at different levels) is enriched by many variants, which explain a lot of the mysterious prefixes.

“Sub” and “super”

What do the prefixes “sub” in subexponential and “super” in superlinear mean? Intuitively, for any class of functions $\CF$,

  • “sub-$\CF$” is all functions $f$ that are eventually smaller than any $f' \in \CF$;
  • “super-$\CF$” is all functions $f$ that are eventually larger than any $f' \in \CF$.

To see what that gives for the closeness levels we’ve defined, let’s look at these two examples more closely:

  • “exponential” means $f \eqThree \exp(g) \iff \abs{\log \log \log f -\log \log g} \le O(1)$, so “subexponential” must mean $\log \log \log f < \log \log g - \omega(1)$;
  • “linear” means $f \eqOne g \iff \abs{\log f - \log g} \le O(1)$, so “superlinear” must mean $\log f > \log g + \omega(1)$.

In general, the rule seems to be that instead of bounding the difference by a constant, “sub” and “super” force the difference to be superconstant. So let’s define

\[ f \ltK g \iff \lim\p{ \log^{(k)} g - \log^{(k)} f } = +\infty, \]

and symmetrically $f\gtK g\iff g\ltK f.$ For example, $f$ is subquadratic in $g$ if $f \ltOne g^2$ and superpolynomial in $g$ if $f \gtTwo g$.

When it’s ambiguous, we can specify the variables according to which the limit is taken. For example, assuming $f(n) >0$, we can write

\[ \al{ f(n) \ltOne_{n \to \infty} 1\ &\iff \lim_{n \to \infty} \p{\log 1 - \log f(n)} = +\infty\\ &\iff \lim_{n \to \infty} f(n) = 0, } \]

i.e. $f(n)$ is “subconstant”.
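Both phenomena can be spot-checked numerically (a sketch; the limits are only sampled at a few values of $n$):

```python
import math

# f(n) = n*log(n) is superlinear: the gap log(f) - log(n) = loglog(n) diverges;
# g(n) = 1/log(n) is subconstant: log(1) - log(g) = loglog(n) diverges too.
ns = [1e2, 1e8, 1e32]
super_gaps = [math.log(n * math.log(n)) - math.log(n) for n in ns]
sub_gaps = [math.log(1.0) - math.log(1.0 / math.log(n)) for n in ns]
assert super_gaps[0] < super_gaps[1] < super_gaps[2]
assert sub_gaps[0] < sub_gaps[1] < sub_gaps[2]
vals = [1.0 / math.log(n) for n in ns]
assert vals[0] > vals[1] > vals[2] > 0.0   # and g(n) itself tends to 0
```
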

“Nearly”

There are two ways we can transform these notions of closeness and make them slightly looser. The first way is to ignore logarithmic factors. We say that $f$ is near-linear (or nearly linear) in $g$ if

\[ \al{ f \le \tld{O}(g)\ &\iff f \le g (\log g)^{O(1)}\\ &\iff \log f \le \log g + O(\log \log g). } \]

That is, “near” means that instead of bounding the difference by a constant, we bound it by something (even more) logarithmic, so we can extrapolate and define

\[ \al{ f \slog\eqK g\ &\iff \exists C: \abs{\log^{(k)}f - \log^{(k)}g} \le C\log^{(k+1)} g\\ f \slog\leK g\ &\iff\exists C: \log^{(k)}f - \log^{(k)}g \le C \log^{(k+1)} g } \]

and symmetrically $f\slog\geK g \iff g\slog\leK f$.4 For example,

\[ f = \tld\Theta(g) \iff f \slog\eqOne g \qq f \le \tld{O}(g) \iff g \ge \tld\Omega(f) \iff f \slog\leOne g, \]

and $f$ would be “near-polynomial” in $g$ if

\[ \al{ f \slog\leTwo g\ &\iff \log f \slog\leOne \log g\\ &\iff \log f \le \log g\ (\log \log g)^{O(1)}\\ &\iff f \le \exp\p{\log g\ (\log \log g)^{O(1)}}. } \]

For example, Mansour’s theorem shows that DNFs are near-polynomially sparse in their size (and near-exponentially sparse in their width).
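As a concrete spot check (Python sketch), $f(n) = n (\log n)^3$ is near-linear: its level $-1$ gap is $3 \log\log n$, a constant times $\log^{(2)} n$ as the definition requires:

```python
import math

# f(n) = n * (log n)**3 is near-linear: the level -1 gap
# log(f) - log(n) equals 3 * loglog(n), i.e. C * log^(2)(n) with C = 3.
for n in [1e3, 1e6, 1e12]:
    gap = math.log(n * math.log(n) ** 3) - math.log(n)
    assert abs(gap - 3 * math.log(math.log(n))) < 1e-9
```
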

“Almost”

#to-write rename to

  • “weakly”?
    • in opposition to “strongly”: “not even weakly linear” $\iff$ “strongly superlinear”
  • “remotely”?
    • in opposition to “nearly”
    • “not even remotely linear” $\iff$ “strongly superlinear”
      • both connote a certain margin
    • because you have to go one level down
    • because you have to take a limit
  • “vaguely”? :P

The second (and more mysterious) way to make comparisons looser is the keyword “almost”. To understand it, let’s compare “linear” to “almost linear”. We say that $f$ is linear in $g$ if

\[ f \le O(g) \iff \log f - \log g \le O(1) \]

while almost linear5 denotes

\[ \al{ f < g^{1+o(1)}\ &\iff \log f < (1+o(1))\log g\\ &\iff \log \log f < \log \log g + o(1). } \]

That is, while “linear” allows the logarithms (level $-1$) to differ by a constant, “almost linear” compares the double logarithms (level $-2$) but requires a subconstant difference. This suggests that “almost” means “subconstant, one level down”.

In general, we can define stricter variants of $\simeq,\preceq$ with subconstant gaps:

\[ \al{ f \sdot\eqK g\ &\iff \lim\abs{\log^{(k)}f - \log^{(k)}g} = 0\\ f \sdot\leK g\ &\iff \limsup\p{\log^{(k)}f - \log^{(k)}g} \le 0 } \]

and symmetrically $f \sdot\geK g\ \iff g \sdot\leK f$. Note that the keyword “almost” increases the number of logarithms, and therefore the number of signs: for example, linearity is written as $t \leOne n$ but almost-linearity is written as $t \sdot\leTwo n$.
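For a concrete instance (numeric sketch, sampled at a few values of $n$), $t = n \log n$, even though it is superlinear, is almost-linear in $n$: its level $-2$ gap is positive but vanishing:

```python
import math

# t = n * log(n) is almost-linear in n: the level -2 gap
# loglog(t) - loglog(n) is positive but tends to 0.
gaps = [math.log(math.log(n * math.log(n))) - math.log(math.log(n))
        for n in [1e2, 1e6, 1e18]]
assert gaps[0] > gaps[1] > gaps[2] > 0.0
assert gaps[2] < 0.1
```
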

An interesting special case is that $f$ and $g$ are almost close if $f$ is asymptotic to $g$:

\[ \al{ f \sdot\eqOne g\ &\iff\abs{\log f - \log g} < o(1)\\ &\iff \lim(f/g) = 1\\ &\iff f \sim g. } \]
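For example (numeric sketch), $f(n) = n + \sqrt{n}$ is asymptotic to $n$: the level $-1$ gap $\log(1 + 1/\sqrt{n})$ vanishes, so the ratio tends to $1$:

```python
import math

# f(n) = n + sqrt(n) is asymptotic to n: the level -1 gap
# log(f) - log(n) = log(1 + 1/sqrt(n)) vanishes, so f/n -> 1.
gaps = [math.log(n + math.sqrt(n)) - math.log(n) for n in [1e2, 1e4, 1e8]]
assert gaps[0] > gaps[1] > gaps[2] > 0.0
assert gaps[2] < 1e-3
ratios = [(n + math.sqrt(n)) / n for n in [1e2, 1e4, 1e8]]
assert abs(ratios[-1] - 1.0) < 1e-3
```
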

Another illustration comes from fine-grained complexity:

  • The exponential-time hypothesis (ETH) states that $3$-SAT requires time $t \ge \exp(\Omega(n))$. This translates to $t \geTwo \exp(n)$, i.e. $t$ is “poly-exponential”6 in $n$.
  • The strong7 exponential-time hypothesis (SETH) states that CNF-SAT requires time $t \ge \exp(n-o(n))$, where the base of $\exp(\cdot)$ is $2$. This translates to the stronger condition $t \sdot\geTwo \exp(n)$, i.e. $t$ is “almost-linear-exponential” in $n$.

| Original | Notation | Meaning | Variant | Notation | Meaning |
| --- | --- | --- | --- | --- | --- |
| close | $f \eqZero g$ | $\abs{f-g} \le O(1)$ | “almost-close” | $f \sdot\eqOne g$ | $f \sim g$ (asymptotic) |
| linear | $f \leOne g$ | $f \le O(g)$ | almost-linear | $f \sdot\leTwo g$ | $f \le g^{1 + o(1)}$ |
| quadratic | $f \leOne g^2$ | $f \le O(g^2)$ | almost-quadratic | $f \sdot\leTwo g^2$ | $f \le g^{2 + o(1)}$ |
| polynomial | $f \leTwo g$ | $f \le \poly(g)$ | almost-polynomial | $f \sdot\leThree g$ | $f \le \exp\p{(\log g)^{1 + o(1)}}$ |

“Strongly sub” and “strongly super”

Strongly sub-$\CF$ is to almost-$\CF$ what sub-$\CF$ is to $\CF$. For example, a running time $t$ is strongly subquadratic if

\[ \al{ t \le n^{2 - \Omega(1)}\ &\iff \log t \le (1-\Omega(1))\log n^2\\ &\iff \log \log n^2 - \log\log t \ge \Omega(1). } \]

That is, “strongly sub” means “smaller by a constant, one level down”, and we can define the notation

\[ f \sdot\ltK g \iff \exists \eps>0: \log^{(k)}g - \log^{(k)}f \ge \eps \]

and symmetrically $f\sdot\gtK g\iff g\sdot\ltK f.$ For example, $f$ is strongly subcubic in $g$ if $f \sdot\ltTwo g^3$ and strongly superpolynomial in $g$ if

\[ \al{ f \sdot\gtThree g\ &\iff \log f \ge (\log g)^{1+\Omega(1)}\\ &\iff f \ge \exp\p{(\log g)^{1+\Omega(1)}}. } \]

Two interesting special cases:

  • $f \sdot\ltOne g \iff f \le (1-\Omega(1))g$, i.e. $f$ is a constant factor smaller than $g$;
  • $f \ltZeroDot g \iff f \le g - \Omega(1)$, i.e. $f$ is bounded away from $g$ (for example, you can write that a probability is bounded away from both $0$ and $1$ as $0\ltZeroDot p\ltZeroDot 1$).
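For instance (a tiny numeric sketch), the sequence $p_n = 1/2 + 1/n$ stays bounded away from both $0$ and $1$:

```python
# p_n = 1/2 + 1/n is bounded away from both 0 and 1: the gaps
# p_n - 0 and 1 - p_n both stay at least a fixed constant eps.
eps = 0.1
for n in [10, 100, 10**6]:
    p = 0.5 + 1.0 / n
    assert p - 0.0 >= eps and 1.0 - p >= eps
```
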

Summary of notations

Here is a summary of the types of notations we’ve defined:

| Keywords | Gap | Equality | Inequality | Strict inequality |
| --- | --- | --- | --- | --- |
| “almost” vs “strongly sub/super” | vanishing or not: $o(1)$ vs $\Omega(1)$ | $\sdot\eqOne$ | $\sdot\leOne$ | $\sdot\ltOne$ |
| (none) vs “sub/super” | constant or not: $O(1)$ vs $\omega(1)$ | $\eqOne$ | $\leOne$ | $\ltOne$ |
| “near” | logarithmic: $O(\log(\cdot))$ | $\slog\eqOne$ | $\slog\leOne$ | (not defined) |
  • The equalities, by decreasing strength:
    • notations:
      • $\eqZeroDot$ difference tends to $0$
      • $\eqZero$ “close” (bounded difference)
      • $\eqZeroLog$ “near-close”
      • $\sdot\eqOne$ asymptotic
      • $\eqOne$ linear
      • $\slog\eqOne$ near-linear
      • $\sdot\eqTwo$ almost linear
      • $\eqTwo$ polynomial
      • $\slog\eqTwo$ “near-polynomial”
      • $\sdot\eqThree$ “almost polynomial”
      • $\eqThree$ quasipolynomial
      • etc.
    • modifiers:
      • (none)
      • near
      • almost
      • quasi
  • The strict inequalities, by increasing strength:
    • notations:
      • $\gtZeroDot$ bounded away
      • $\gtZero$ difference tends toward infinity
      • $\sdot\gtOne$ constant factor larger
      • $\gtOne$ superlinear
      • $\sdot\gtTwo$ strongly superlinear
      • $\gtTwo$ superpolynomial
      • etc.
    • modifiers:
      • sub/super
      • strongly sub/super

#to-write replace $\eqd, \eq, \eqlog$ and $\lqd,\lq,\lqlog$ and $\ltd,\lt,\ltlog$ by

  • $\simeqq,\simeq,\stackrel{\sim}\approx$ and $\preceqq,\preceq,\precapprox$ and $\precneqq, \preceneq, \precnapprox$?
  • $\simeq, \approx, \approxeq$ and $\lesssim,\preceq, \precsim$ and $\lnsim, \prec, \precnsim$?
  • $\sim, \approx, \slog\approx$ and $\lesssim,\lessapprox, \slog\lessapprox$ and $\lnsim, \lnapprox, \slog\lnapprox$?
    • possibly most consistent
    • and doesn’t require differentiating between $<$ and $\prec$ in handwriting
    • but extremely clunky for the main set…
  • $\sim, \approx, \slog\approx$ and $\lesssim,\preceq, \slog\preceq$ and $\lnsim, \prec, \slog\prec$?
  1. This is unlike the (inconsistent, and terrible in my opinion) use of “quasilinear” to mean $O(n\, \polylog (n))$ (aka $\tld{O}(n)$). This complexity is also called “near-linear”, which seems like a much better term; see the “Nearly” section below. (To clarify, I call the traditional use of “quasi-” inconsistent because, given some expression $F(n)$ involving big-oh notation, such as $O(n)$ (linear) or $2^{O(\log n)}$ (polynomial), it seems to multiply the $O(\cdot)$ part by a fairly arbitrary $\polylog n$ factor, without caring about the magnitude of what was in the $O(\cdot)$ or whether it even involved $n$ at all. In contrast, the notion of “quasi-” presented here is defined based on the expression itself, without any reference to what the “natural variable” $n$ ultimately is.) 

  2. Note how symmetric it is, and how convenient it is to not have to think about whether you should write $O(\cdot)$ or $\Omega(\cdot)$: instead of turning $f \le O(g)$ into $g \ge \Omega(f)$, you just turn $f \leOne g$ into $g \geOne f$.

  3. Meaning that there should be some constant $c > 0$ such that $\delta \ge \Omega(\eps^c)$. 

  4. The definition of $f\slog\eqK g$ looks asymmetrical on the surface, but I don’t think this matters in the end: $\log^{(k)}f$ and $\log^{(k)} g$ are going to have to be fairly close anyway, so $\log^{(k+1)} f$ and $\log^{(k+1)} g$ should be basically the same thing. 

  5. I find it deeply annoying that “nearly-linear” is a stronger guarantee than “almost-linear”, despite the fact that the word “nearly” feels weaker than “almost”. As far as I can tell, this use of “almost-linear” first appeared in An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations by Kelner, Lee, Orecchia, and Sidford, so go blame them. :) 

  6. Recall that the usual notion of “exponential” is $t \ge 2^{n^{\Omega(1)}} \iff t \geThree \exp(n)$, i.e. quasipoly-exponential. 

  7. The word “strong” in SETH is unrelated to the notion of “strongly sub”/”strongly super” defined in this document.